WO2017045650A1 - Picture processing method and terminal - Google Patents

Picture processing method and terminal

Info

Publication number
WO2017045650A1
Authority
WO
WIPO (PCT)
Prior art keywords: picture, edge, depth, information, color
Application number
PCT/CN2016/099400
Other languages
English (en)
French (fr)
Inventor
DAI Xiangdong (戴向东)
HUANG Dewen (黄德文)
Original Assignee
Nubia Technology Co., Ltd. (努比亚技术有限公司)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Nubia Technology Co., Ltd.
Publication of WO2017045650A1 publication Critical patent/WO2017045650A1/zh

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60: Control of cameras or camera modules

Definitions

  • the present invention relates to image processing technologies, and in particular, to a picture processing method and a terminal.
  • Mobile phone shooting is one of the basic functions of mobile phones. Users often use mobile phones to take pictures, edit the pictures they take, or use the pictures to make holiday cards or postcards. However, a plain picture is too monotonous.
  • the embodiments of the present invention are expected to provide a picture processing method and a terminal, which can give a picture a stereoscopic card effect with a cartoon-style stroked (outlined) edge.
  • a terminal comprising:
  • a dual camera configured to take a picture and, after shooting, obtain first depth information and first color information of the picture;
  • a processor configured to divide the picture into a background area and a foreground area according to the first depth information obtained by the dual camera; extract second depth information and second color information of an edge of the foreground area; and, according to the second depth information and the second color information of the edge of the foreground area, perform stroke processing on the edge of the foreground area according to a proportional relationship between the depth value of the second depth information and the color depth at the stroke.
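As an illustration of the stroke step, the following is a hypothetical NumPy sketch. The function names, the 4-neighbour definition of the edge, and the mapping "larger depth value, darker stroke" are assumptions; the patent only specifies a proportional relationship between the depth value and the color depth of the stroke.

```python
import numpy as np

def edge_pixels(mask):
    """Outer-contour edge: foreground pixels that have at least one
    4-neighbour lying in the background."""
    m = mask.astype(bool)
    padded = np.pad(m, 1, constant_values=False)
    # True where all four 4-neighbours are also foreground
    interior = (padded[:-2, 1:-1] & padded[2:, 1:-1] &
                padded[1:-1, :-2] & padded[1:-1, 2:])
    return m & ~interior

def stroke(img, mask, depth, max_depth, base_color=(0.0, 0.0, 0.0)):
    """Blend each foreground edge pixel toward base_color with a weight
    proportional to its depth value (hypothetical mapping:
    w = depth / max_depth, clipped to [0, 1])."""
    out = img.astype(np.float32).copy()
    edges = edge_pixels(mask)
    w = np.clip(depth / max_depth, 0.0, 1.0)
    for c in range(3):
        channel = out[..., c]          # view into `out`
        channel[edges] = ((1.0 - w[edges]) * channel[edges]
                          + w[edges] * base_color[c])
    return out
```

For example, stroking a 3x3 foreground block in a 5x5 picture darkens only the 8 border pixels of the block, leaving its centre and the background untouched.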
  • the dual camera is further configured to capture two images corresponding to the same scene;
  • the processor is further configured to correct the two images by using a stereo correction algorithm to obtain two corrected images; use a stereo matching algorithm to obtain a disparity map between the two corrected images; and calculate a depth value and color information of each pixel in the scene image based on the disparity map.
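A minimal illustration of how a disparity map can be obtained from two rectified images. This is a naive sum-of-absolute-differences (SAD) block matcher, offered only as a sketch: the patent does not name a specific stereo matching algorithm, and practical implementations use far more robust methods (e.g., semi-global matching).

```python
import numpy as np

def disparity_map(left, right, max_disp=16, block=5):
    """Naive SAD block matching on rectified grayscale images.

    For each left-image pixel, search along the same row of the right
    image for the horizontal shift (disparity) whose block has the
    smallest sum of absolute differences."""
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half + max_disp, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1]
            costs = [np.abs(patch -
                            right[y - half:y + half + 1,
                                  x - d - half:x - d + half + 1]).sum()
                     for d in range(max_disp)]
            disp[y, x] = int(np.argmin(costs))
    return disp
```

On a synthetic pair where the right image is the left image shifted by a known amount, the matcher recovers that shift at interior pixels.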
  • the processor is further configured to extract a depth value and color information of a pixel on an outer contour edge of the foreground region.
  • the processor is further configured to divide the picture into a plurality of depth regions by using a Mean Shift (mean shift) algorithm according to the first depth information of the picture, and determine a background area and a foreground area of the picture.
  • the processor is further configured to blur the background area after the picture is divided into a background area and a foreground area.
  • the processor is further configured to strengthen the edge of the foreground area by using an image sharpening algorithm after dividing the picture into a background area and a foreground area.
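A compact sketch of these two post-segmentation steps. The helpers below (box blur for the background, unsharp masking for the foreground) are assumptions; the patent does not specify which blur or sharpening kernels are used. Grayscale 2-D images are assumed for brevity.

```python
import numpy as np

def box_blur(img, k=5):
    """k x k box blur; borders handled by edge replication."""
    pad = k // 2
    padded = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img, dtype=np.float32)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def sharpen(img):
    """Unsharp masking: add back the detail removed by a small blur."""
    return img + (img - box_blur(img, 3))

def stylize(img, fg_mask):
    """Blur where fg_mask is False (background) and sharpen where it is
    True (foreground), which strengthens the foreground edge."""
    return np.where(fg_mask, sharpen(img), box_blur(img, 5))
```

On a constant image both branches are identities, while on textured input the blur measurably reduces local variation in the background.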
  • the processor is further configured to, when performing the stroke processing, perform the stroke according to a proportional relationship between the depth value of the second depth information and the color depth of the stroke.
  • the processor is further configured to perform the stroke according to the second color information of the edge of the foreground area when performing the stroke processing.
  • the color at the time of stroke is the same as the current color of the edge of the foreground region.
  • the processor is further configured to, after performing the stroke processing on the edge of the foreground area: adjust a focus point of the picture to obtain third depth information of the edge of the foreground area of the focus-adjusted picture, and, according to the third depth information and the second color information of the edge of the foreground area, re-stroke the edge of the foreground region according to the proportional relationship between the depth value of the third depth information and the color depth of the stroke.
  • a picture processing method, comprising: taking a picture with a dual camera and obtaining first depth information and first color information of the picture; dividing the picture into a background area and a foreground area according to the first depth information; extracting second depth information and second color information of an edge of the foreground area; and stroking the edge of the foreground area according to a proportional relationship between the depth value of the second depth information and the color depth at the stroke.
  • the dividing the picture into a background area and a foreground area according to the first depth information of the picture includes:
  • dividing the picture into a plurality of depth regions by using a Mean Shift algorithm, and determining the background area and the foreground area of the picture.
  • the taking a picture and obtaining the first depth information and the first color information of the picture after the shooting includes:
  • capturing two images corresponding to the same scene with the dual camera; correcting the two images by using a stereo correction algorithm to obtain two corrected images; obtaining a disparity map between the two corrected images by using a stereo matching algorithm; and calculating a depth value and color information of each pixel point in the scene image based on the disparity map.
  • the extracting the second depth information and the second color information of the edge of the foreground area includes:
  • extracting a depth value and color information of pixel points on the outer contour edge of the foreground area.
  • after the picture is divided into a background area and a foreground area, the background area is blurred.
  • the method further includes:
  • the edge of the foreground region is enhanced using an image sharpening algorithm.
  • when the terminal performs the stroke processing, the terminal performs the stroke according to a proportional relationship between the depth value of the second depth information and the color depth of the stroke.
  • when the terminal performs the stroke processing, the terminal performs the stroke according to the second color information of the edge of the foreground area.
  • the color at the time of stroke is the same as the current color of the edge of the foreground region.
  • the method further includes:
  • adjusting a focus point of the picture to obtain third depth information of the edge of the foreground area of the focus-adjusted picture, and, according to the third depth information and the second color information of the edge of the foreground area, re-stroking the edge of the foreground region according to the proportional relationship between the depth value of the third depth information and the color depth of the stroke.
  • the embodiment of the invention provides a picture processing method and a terminal, which can give a picture a stereoscopic card effect with cartoon-style stroking: a picture is taken with a dual camera, and first depth information and first color information of the picture are obtained;
  • the picture is divided into a background area and a foreground area according to the first depth information obtained by the dual camera; second depth information and second color information of the edge of the foreground area are extracted, and, according to them, the edge of the foreground region is stroked according to a proportional relationship between the depth value of the second depth information and the color depth of the stroke;
  • this achieves the stereoscopic card effect of a 3D cartoon stroke, which makes portrait shooting more interesting and adds more fun for the user.
  • FIG. 1 is a schematic structural diagram of hardware of a mobile terminal that implements various embodiments of the present invention
  • FIG. 2 is a schematic diagram of a wireless communication system of the mobile terminal shown in FIG. 1;
  • FIG. 3 is a structural block diagram of a terminal according to Embodiment 1 of the present invention.
  • FIG. 4 is a schematic diagram of a terminal with dual cameras according to Embodiment 1 of the present invention.
  • FIG. 5 is a schematic diagram of a user using a terminal to perform shooting according to Embodiment 1 of the present invention.
  • FIG. 6 is a schematic flowchart diagram of a picture processing method according to Embodiment 2 of the present invention.
  • FIG. 7 is a schematic flowchart diagram of a picture processing method according to Embodiment 3 of the present invention.
  • the mobile terminal can be implemented in various forms.
  • the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), navigation devices, and the like, and fixed terminals such as digital TVs, desktop computers, and the like.
  • in the following description, it is assumed that the terminal is a mobile terminal.
  • however, those skilled in the art will understand that, except for components used specifically for mobile purposes, configurations in accordance with the embodiments of the present invention can also be applied to fixed-type terminals.
  • FIG. 1 is a schematic diagram showing the hardware structure of a mobile terminal embodying various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 1 shows a mobile terminal having various components, but it should be understood that not all of the illustrated components are required; more or fewer components may be implemented instead. The components of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a data broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast associated information may exist in various forms, for example, in the form of an Electronic Program Guide (EPG) of Digital Multimedia Broadcasting (DMB), an Electronic Service Guide (ESG) of Digital Video Broadcasting-Handheld (DVB-H), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast systems.
  • the broadcast receiving module 111 can receive digital broadcasting by using digital broadcasting systems such as Digital Multimedia Broadcasting-Terrestrial (DMB-T), Digital Multimedia Broadcasting-Satellite (DMB-S), Digital Video Broadcasting-Handheld (DVB-H), the data broadcasting system of Media Forward Link Only (MediaFLO), Integrated Services Digital Broadcasting-Terrestrial (ISDB-T), and the like.
  • the mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless internet access technology involved in the module may include Wireless Local Area Network (WLAN, Wi-Fi), Wireless Broadband (WiBro), Worldwide Interoperability for Microwave Access (WiMAX), High Speed Downlink Packet Access (HSDPA), and the like.
  • the short range communication module 114 is a module for supporting short range communication.
  • Some examples of short-range communication technologies include BluetoothTM, Radio Frequency Identification (RFID), Infrared Data Association (IrDA), Ultra Wideband (UWB), ZigbeeTM, and the like.
  • the location information module 115 is a module for checking or acquiring location information of the mobile terminal.
  • a typical example of a location information module is the Global Positioning System (GPS).
  • as a GPS, the position information module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information, thereby accurately calculating three-dimensional current location information based on longitude, latitude, and altitude.
  • currently, the method for calculating position and time information uses three satellites and corrects errors of the calculated position and time information by using another satellite.
  • the speed information can be calculated by continuously calculating the current position information in real time.
  • the A/V input unit 120 is for receiving an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122. The camera 121 processes image data of still pictures or video obtained by an image capturing device in a video capturing mode or an image capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110. Two or more cameras 121 may be provided according to the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • in the case of the telephone call mode, the processed audio (voice) data can be converted into a format that can be transmitted to the mobile communication base station via the mobile communication module 112 and output.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like.
  • in particular, when the touch pad is superposed on the display unit 151 in the form of a layer, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., the open or closed state of the mobile terminal 100), the location of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (i.e., touch input), the orientation and acceleration or deceleration movement of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • Sensing unit 140 may include proximity sensor 141 which will be described below in connection with a touch screen.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the identification module may store various information for verifying the user of the mobile terminal 100 and may include a User Identification Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 can be configured to receive input from an external device (e.g., data information, power, etc.) and transmit the received input to one or more components within the mobile terminal 100, or can be used to transfer data between the mobile terminal and an external device.
  • when the mobile terminal 100 is connected to an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transmitted to the mobile terminal.
  • various command signals or power input from the cradle can be used as signals for identifying whether the mobile terminal is accurately mounted on the cradle.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • when the display unit 151 and the touch pad are overlaid in the form of a layer to form a touch screen, the display unit 151 can function as both an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow a user to view the outside therethrough; these may be referred to as transparent displays. A typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display.
  • the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown).
  • the touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
  • when the mobile terminal is in a call signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like, the audio output module 152 can convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide an audio output (e.g., a call signal reception sound, a message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alarm unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations; when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (i.e., vibration) to notify the user. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the mobile phone is in the user's pocket. The alarm unit 153 can also provide an output notifying of the event occurrence via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like that performs processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, and the like) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium, including a flash memory, a hard disk, a multimedia card, a card-type memory (e.g., SD or XD memory), a random access memory (RAM), a static random access memory (SRAM), a read-only memory (ROM), an electrically erasable programmable read-only memory (EEPROM), a programmable read-only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like.
  • the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data. The multimedia module 181 may be constructed within the controller 180 or may be configured to be separate from the controller 180.
  • the controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various elements and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such embodiments may be implemented in the controller 180.
  • implementations such as procedures or functions may be implemented with separate software modules that permit the execution of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, which can be stored in the memory 160 and executed by the controller 180.
  • the mobile terminal has been described in terms of its function.
  • for the sake of brevity, a slide-type mobile terminal among various types of mobile terminals, such as folding, bar, swing, and slide types, will be described as an example. However, the present invention can be applied to any type of mobile terminal and is not limited to the slide type.
  • the mobile terminal 100 as shown in FIG. 1 may be configured to operate using a communication system such as a wired and wireless communication system and a satellite-based communication system that transmits data via frames or packets.
  • air interfaces used by such communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 2 can include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas for diversity reception. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station.”
  • each partition of a particular BS 270 may be referred to as a plurality of cellular stations.
  • a plurality of satellites 300 are depicted, but it is understood that useful positioning information can be obtained using any number of satellites.
  • the position information module 115 as a GPS as shown in FIG. 1 is generally configured to cooperate with the satellite 300 to obtain desired positioning information.
  • In addition to, or instead of, GPS tracking technology, other techniques that can track the location of the mobile terminal can be used.
  • at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • Each reverse link signal received by a particular base station 270 is processed within a particular BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC 275 provides call resource allocation and mobility management functions, including coordination of soft handoff procedures between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • the embodiment of the present invention provides a terminal.
  • the terminal includes: a dual camera 301 and a processor 302;
  • the dual camera 301 is configured to take a picture and obtain first depth information and first color information of the picture after shooting.
  • the terminal may be a mobile phone, and the dual camera 301 may be located on the back of the terminal as shown in FIG. 4, like a pair of human eyes arranged side by side on the same horizontal level.
  • the user needs to open the shooting application on the terminal first.
  • the display of the terminal shows the scene facing the dual camera of the terminal, and the user can adjust the shooting angle so that the display of the terminal shows the scene the user is about to shoot.
  • the user can take a picture by touching the shooting button on the terminal.
  • the dual camera 301 uses the principle of bionics: the calibrated dual cameras obtain synchronously exposed images of the same scene. Because there is a certain distance between the two cameras of the dual camera 301, the images the two cameras form of the same scene differ slightly; this difference is the parallax (disparity), and the depth information of the scene can be calculated from it.
  • the terminal can respectively obtain two images corresponding to the same scene by two cameras on the dual camera 301, and correct the two images by using a stereo correction algorithm to obtain two corrected images;
  • A stereo matching algorithm obtains the disparity map D between the two corrected images; from the disparity d of any pixel point in D, the depth value Z of each pixel in the framing picture is calculated using the formula Z = f·T/d, where f is the focal length of the cameras (the same for both cameras in this embodiment) and T is the spacing (baseline) between the two cameras.
  • The first depth information of the picture can thus be obtained, that is, the depth value of each pixel of the picture; the first color information of the picture, that is, the color information of each pixel, can also be obtained and can be represented by RGB values.
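The depth calculation just described can be sketched in a few lines (a minimal illustration, assuming a rectified image pair, a common focal length f expressed in pixels, and a baseline T; the array values below are invented for the example):

```python
import numpy as np

def disparity_to_depth(disparity, f, T):
    """Apply Z = f * T / d per pixel.

    disparity: per-pixel disparity d between the two rectified images
    f: focal length shared by the two calibrated cameras (in pixels)
    T: spacing (baseline) between the two cameras
    Pixels with no match (d <= 0) are assigned infinite depth.
    """
    d = np.asarray(disparity, dtype=np.float64)
    return np.where(d > 0, f * T / np.maximum(d, 1e-9), np.inf)

# Illustrative 2x2 disparity map: a larger disparity means a closer point.
d = np.array([[40.0, 20.0],
              [10.0,  0.0]])
Z = disparity_to_depth(d, f=800.0, T=0.1)  # e.g. f = 800 px, T = 0.1 m
```

Here Z[0, 0] = 800 × 0.1 / 40 = 2, the closest point, while the zero-disparity pixel is treated as unmatched.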
  • The processor 302 is configured to divide the picture into a background area and a foreground area according to the first depth information of the picture obtained by the dual camera 301; extract second depth information and second color information of the edge of the foreground area; and, according to the second depth information and the second color information of the edge of the foreground area, perform stroke processing on the edge of the foreground area following the proportional relationship between the depth value of the second depth information and the color depth of the stroke.
  • the processor 302 may divide the picture into a background area and a foreground area according to the first depth information of the picture.
  • For example, if the picture taken by the terminal is a portrait of a person against a landscape, the processor 302 may divide the picture according to its depth information into a background area, that is, the landscape image area, and a foreground area, that is, the person image area.
  • the processor 302 is further configured to divide the picture into a plurality of depth regions by using a mean shift Mean Shift algorithm according to the first depth information of the picture, and determine a background area of the picture and Prospect area.
  • The Mean Shift algorithm is an effective statistical iterative algorithm first proposed by Fukunaga in 1975. In 1995, Cheng improved the kernel function and weight function in the Mean Shift algorithm, expanding the scope of its application.
  • Image segmentation based on the Mean Shift algorithm is a region-based segmentation method. This segmentation closely resembles the analysis characteristics of the human eye and has strong adaptability and robustness: it is insensitive to smooth areas and texture areas of the image, so it yields good segmentation results. The algorithm has been widely used in the field of computer vision and has achieved great success.
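As an illustration of how a mean-shift pass can group depth values into regions, here is a toy 1-D sketch on the depth channel only (not the full spatial/color Mean Shift segmentation the patent relies on; all values are invented, and the patent's convention that larger depth values are closer to the camera is kept):

```python
import numpy as np

def mean_shift_1d(values, bandwidth, iters=30):
    """Toy 1-D mean shift: every point repeatedly moves to the mean of the
    original samples within `bandwidth`; points converging to the same
    mode form one depth region."""
    modes = values.astype(np.float64).copy()
    for _ in range(iters):
        for i, m in enumerate(modes):
            modes[i] = values[np.abs(values - m) <= bandwidth].mean()
    return modes

# Illustrative depth map: high values (close person) vs. low values (far scenery).
depth = np.array([[9.0, 9.2, 1.0],
                  [8.8, 9.1, 1.2],
                  [1.1, 0.9, 1.0]])
modes = mean_shift_1d(depth.ravel(), bandwidth=2.0)
# The high-depth (closer) cluster becomes the foreground area.
foreground = (modes > modes.mean()).reshape(depth.shape)
```

The pixels that converge to the high-depth mode form the foreground area; the rest form the background area.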
  • For example, if the shooting scene is a person, the area where the character is located, that is, the area within the dotted line, is closer to the camera and has a larger depth value, and may be divided into the foreground area; the other areas are farther from the camera, have smaller depth values, and may be divided into the background area according to the first depth information.
  • After determining the background area and the foreground area of the picture, the processor 302 extracts the second depth information and the second color information of the edge of the foreground area and then performs the stroke processing on that edge. For example, if the shooting scene is a person, as shown in FIG. 5, the area where the character is located, that is, the area inside the dotted line, is the foreground area and the other area is the background area; the edge of the foreground area described here is the outer contour line of the foreground area where the character is located, that is, the broken line shown in FIG. 5, and stroking the edge of the foreground area means stroking along that dotted line.
  • the processor 302 can also extract edge information of the foreground region, and strengthen an edge of the foreground region by using an image sharpening algorithm.
  • the image sharpening algorithm is a filtering algorithm that enhances the edges of the image, which can enhance the edge contrast of the image.
  • The image sharpening algorithm strengthens and widens the edge line of the foreground area, making the edge of the foreground area conspicuous and facilitating the subsequent stroke processing of that edge.
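One standard sharpening filter of this kind is unsharp masking: the image minus a blurred copy isolates the edges, and adding that difference back raises edge contrast. A minimal numpy sketch (the 3x3 box blur and the gain are illustrative choices, not parameters from the patent):

```python
import numpy as np

def box_blur3(img):
    """3x3 box blur with edge replication."""
    p = np.pad(img, 1, mode="edge").astype(np.float64)
    h, w = img.shape
    return sum(p[y:y + h, x:x + w] for y in range(3) for x in range(3)) / 9.0

def unsharp_mask(img, amount=1.0):
    """Sharpen: img + amount * (img - blurred).
    Flat areas are unchanged; pixels on an edge are pushed further away
    from their neighbourhood mean, strengthening the edge."""
    img = img.astype(np.float64)
    return img + amount * (img - box_blur3(img))

# A vertical step edge: dark left half, bright right half.
step = np.zeros((4, 4))
step[:, 2:] = 10.0
sharp = unsharp_mask(step)
```

Flat regions pass through unchanged while the two pixels straddling the step overshoot in opposite directions, which is exactly the raised edge contrast described above.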
  • After dividing the picture into a background area and a foreground area, the processor 302 provided in this embodiment may blur the background area, strengthen the edge of the foreground area by using an image sharpening algorithm, or do both. In this way, the edge of the foreground area is made prominent, which facilitates the subsequent stroke processing of that edge.
  • When performing the stroke processing, the processor 302 first extracts the second depth information and the second color information of the edge of the foreground area, that is, the depth values and color information of the pixels on the outer contour edge of the foreground area. The stroke follows the proportional relationship between the depth value of the second depth information and the color depth of the stroke: the smaller the depth value of the second depth information, the lighter the stroke color. The stroke is also drawn according to the second color information of the edge of the foreground area: the color of the stroke is the same as the current color of the edge of the foreground area, differing only in color depth according to the depth information.
  • In this way the stroked edge fades with depth: the part of the edge closer to the dual camera is stroked more heavily, while the part farther from the dual camera gradually fades to transparent, achieving an effect similar to a 3D stereo card.
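A minimal sketch of this stroke rule (an illustration, not the patent's exact implementation: the boundary of the foreground mask is stroked in the edge's own color, darkened in proportion to its depth value under the patent's convention that larger depth values are closer to the dual camera; the mask, depth map, and darkening factor are invented for the example):

```python
import numpy as np

def edge_pixels(mask):
    """Boundary of a boolean foreground mask: foreground pixels that have
    at least one 4-neighbour outside the mask."""
    p = np.pad(mask, 1, mode="constant")
    interior = p[:-2, 1:-1] & p[2:, 1:-1] & p[1:-1, :-2] & p[1:-1, 2:]
    return mask & ~interior

def stroke(image, mask, depth):
    """Stroke the mask boundary in the edge's own color: the larger the
    depth value (the closer to the dual camera), the darker and heavier
    the stroke; small depth values leave the edge nearly untouched, so
    the stroke fades out with distance."""
    out = image.astype(np.float64).copy()
    ys, xs = np.nonzero(edge_pixels(mask))
    strength = (depth / depth.max())[ys, xs]        # 0..1, 1 = closest
    out[ys, xs] *= (1.0 - 0.7 * strength)[:, None]  # darken the edge color
    return out

# Illustrative 3x3 foreground object whose top row is closest to the camera.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 1:4] = True
depth = np.ones((5, 5))
depth[1, :] = 4.0
image = np.full((5, 5, 3), 200.0)
result = stroke(image, mask, depth)
```

Only the outer contour pixels change; the closer (higher-depth) part of the contour is stroked more darkly than the farther part, giving the fading stroke described above.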
  • The processor 302 is further configured to: after performing the stroke processing on the edge of the foreground area, adjust the focus point of the picture to obtain third depth information of the edge of the foreground area of the focus-adjusted picture, and re-stroke the edge of the foreground area according to the third depth information and the second color information of the edge of the foreground area, following the proportional relationship between the depth value of the third depth information and the color depth of the stroke.
  • After the stroke processing, the focus point of the picture may be adjusted continuously; the third depth information of the edge of the foreground area of the focus-adjusted picture is then obtained, and the edge can be re-stroked accordingly.
  • For example, if the shooting scene is a person taken from a low angle and the focus is on the person's head, the head is closest to the lens, so its stroked edge is thicker and more obvious, and the stroke fades to transparent farther down. If the user is dissatisfied, the focus can be changed to the person's foot; then, when the stroke is drawn, the foot is closest to the lens and its stroked edge is thicker and more obvious, while the stroke gradually fades out above it.
  • The user can input different focus points; the processor of the terminal obtains different edge depth information for each focus point and then performs different stroke processing following the proportional relationship between the depth value of the edge depth information and the color depth of the stroke. Pictures with different effects can thus be obtained, and the user can keep adjusting the input focus point until the picture matches the desired effect.
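One way to realize this refocusing behaviour (an illustrative interpretation, not the patent's exact algorithm) is to recompute each edge pixel's stroke strength from how close its depth is to the depth at the newly chosen focus point:

```python
import numpy as np

def focus_strength(edge_depth, focus_depth, falloff=2.0):
    """Stroke strength per edge pixel: 1.0 at the focused depth, fading
    linearly to 0 as the depth difference grows beyond `falloff`."""
    return np.clip(1.0 - np.abs(edge_depth - focus_depth) / falloff, 0.0, 1.0)

# Illustrative edge depths for the head, shoulder, and foot of a person.
depths = np.array([4.0, 3.0, 1.0])
head_focus = focus_strength(depths, focus_depth=4.0)  # focus on the head
foot_focus = focus_strength(depths, focus_depth=1.0)  # focus moved to the foot
```

With the focus on the head, the head's edge is stroked at full strength and the foot's stroke vanishes; moving the focus to the foot inverts this, matching the behaviour described above.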
  • For example, the processor of the terminal can stroke the edge of the foreground area where the character is located in a picture taken by the dual camera. If the head of the character is closest to the lens when the picture is taken, the stroked edge at the head is thicker and more obvious, while the stroke below the shoulders gradually fades out. The processor of the terminal can also blur the background area to highlight the characters in the foreground area; the characters then produce an effect similar to a 3D stereo card and appear more vivid.
  • A picture processed by the terminal provided in this embodiment can produce an effect similar to a 3D stereo card, making portrait shooting more interesting. Users often edit their pictures or use them to make holiday cards or postcards, but a plain picture seems too monotonous; the terminal provided in this embodiment processes pictures with cartoon-style stroking to produce a stereoscopic card effect, which makes photos more fun and adds more enjoyment for the user.
  • An embodiment of the present invention provides a method for processing a picture. As shown in FIG. 6, the process of the method in this embodiment includes the following steps:
  • Step 601 Take a picture, and obtain first depth information and first color information of the picture after shooting.
  • The method in this embodiment is implemented by a terminal, which may be a mobile phone. The terminal may use a dual camera to capture the picture; the dual camera may be located on the back of the terminal as shown in FIG. 4, like a pair of human eyes distributed on the same horizontal level.
  • When shooting, the user first opens the shooting application on the terminal. The display of the terminal then shows the scene facing the dual camera, and the user can adjust the shooting angle so that the display shows the scene about to be shot. Then, as shown in FIG. 5, the user can take a picture by touching the shooting button on the terminal.
  • The dual camera uses the principle of bionics to obtain synchronized-exposure images through the calibrated dual camera. Because the two cameras are a certain distance apart, the images that the same scene forms through the two lenses differ slightly; this difference is the parallax, and because the parallax information exists, the depth information of the scene can be calculated from the parallax.
  • the terminal can respectively obtain two images corresponding to the same scene by two cameras on the dual camera, and correct the two images by using a stereo correction algorithm to obtain two corrected images;
  • A stereo matching algorithm obtains the disparity map D between the two corrected images; from the disparity d of any pixel point in D, the depth value Z of each pixel in the framing picture is calculated using the formula Z = f·T/d, where f is the focal length of the cameras (the same for both cameras in this embodiment) and T is the spacing (baseline) between the two cameras.
  • The first depth information of the picture may thus be obtained, that is, the depth value of each pixel of the picture; the first color information of the picture, that is, the color information of each pixel, may also be obtained and can be represented by RGB values.
  • Step 602 Divide the picture into a background area and a foreground area according to the first depth information of the picture.
  • the terminal may divide the picture into a background area and a foreground area according to the depth information of the picture.
  • For example, if the picture taken by the terminal is a portrait of a person against a landscape, the terminal may divide the picture according to its depth information into a background area, that is, the landscape image area, and a foreground area, that is, the person image area (the area within the dotted line shown in FIG. 5).
  • Step 603: Extract second depth information and second color information of the edge of the foreground area, and, according to the second depth information and the second color information of the edge of the foreground area, perform stroke processing on the edge of the foreground area following the proportional relationship between the depth value of the second depth information and the color depth of the stroke.
  • The terminal may extract the second depth information and the second color information of the edge of the foreground area from the first depth information and the first color information of the picture obtained in step 601; these are the depth values and color information of the pixels on the outer contour edge of the foreground area.
  • When performing the stroke processing, the terminal follows the proportional relationship between the depth value of the second depth information and the color depth of the stroke, that is, the smaller the depth value of the second depth information, the lighter the stroke color. The stroke is also drawn according to the second color information of the edge of the foreground area: the color of the stroke is the same as the current color of the edge of the foreground area, differing only in color depth according to the depth information. In this way the stroked edge fades with depth: the part of the edge closer to the dual camera is stroked more heavily, while the part farther from the dual camera gradually fades to transparent, achieving an effect similar to a 3D stereo card.
  • For example, the terminal can stroke the edge of the foreground area where the character is located in a picture taken by the dual camera. In an overhead shot the head of the character is closest to the lens, so the stroked edge line at the head is thicker and more obvious, and the stroke below the shoulders gradually fades out. The terminal can also blur the background area to highlight the characters in the foreground area; the characters in the picture then produce an effect similar to a 3D stereo card and appear more vivid.
  • A picture processed by the terminal provided in this embodiment can produce an effect similar to a 3D stereo card, making portrait shooting more interesting. Users often edit their pictures or use them to make holiday cards or postcards, but a plain picture seems too monotonous; the terminal provided in this embodiment processes pictures with cartoon-style stroking to produce a stereoscopic card effect, which makes photos more fun and adds more enjoyment for the user.
  • An embodiment of the present invention provides a method for processing a picture. As shown in FIG. 7, the process of the method in this embodiment includes the following steps:
  • Step 701 Take a picture, and obtain first depth information and first color information of the picture after shooting.
  • The method in this embodiment is implemented by a terminal, which may be a mobile phone. The terminal may use a dual camera to capture the picture; the dual camera may be located on the back of the terminal as shown in FIG. 4, like a pair of human eyes distributed on the same horizontal level.
  • When shooting, the user first opens the shooting application on the terminal. The display of the terminal then shows the scene facing the dual camera, and the user can adjust the shooting angle so that the display shows the scene about to be shot. Then, as shown in FIG. 5, the user can take a picture by touching the shooting button on the terminal.
  • The dual camera uses the principle of bionics to obtain synchronized-exposure images through the calibrated dual camera. Because the two cameras are a certain distance apart, the images that the same scene forms through the two lenses differ slightly; this difference is the parallax, and because the parallax information exists, the depth information of the scene can be calculated from the parallax.
  • the terminal can respectively obtain two images corresponding to the same scene by two cameras on the dual camera, and correct the two images by using a stereo correction algorithm to obtain two corrected images;
  • A stereo matching algorithm obtains the disparity map D between the two corrected images; from the disparity d of any pixel point in D, the depth value Z of each pixel in the framing picture is calculated using the formula Z = f·T/d, where f is the distance from the image plane to the principal plane of each camera, that is, the focal length of the pinhole imaging model (the same for both cameras in this embodiment), and T is the spacing (baseline) between the two cameras.
  • The first depth information of the picture may thus be obtained, that is, the depth value of each pixel of the picture; the first color information of the picture, that is, the color information of each pixel, may also be obtained and can be represented by RGB values.
  • Step 702: According to the first depth information of the picture, divide the picture into a plurality of depth regions by using the mean shift (Mean Shift) algorithm, and determine the background area and the foreground area of the picture.
  • The terminal may divide the picture into a background area and a foreground area according to the first depth information of the picture. For example, if the shooting scene is a person, the area where the character is located, that is, the area within the dotted line, is closer to the camera and has a larger depth value, and can be divided into the foreground area; the other areas are farther from the camera, have smaller depth values, and can be divided into the background area.
  • the terminal may adopt a Mean Shift algorithm, divide the picture into a plurality of depth regions according to depth information of the picture, and then determine a background area and a foreground area of the image.
  • The Mean Shift algorithm is an effective statistical iterative algorithm first proposed by Fukunaga in 1975. In 1995, Cheng improved the kernel function and weight function in the Mean Shift algorithm, expanding the scope of its application.
  • Image segmentation based on the Mean Shift algorithm is a region-based segmentation method. This segmentation closely resembles the analysis characteristics of the human eye and has strong adaptability and robustness: it is insensitive to smooth areas and texture areas of the image, so it yields good segmentation results. The algorithm has been widely used in the field of computer vision and has achieved great success.
  • the Mean Shift algorithm may be applied to divide the picture into a plurality of depth regions, and then divide a depth region closer to the camera into a foreground region, and divide a depth region farther from the camera into a background region. This will determine the background and foreground areas of the picture.
  • Step 703 Blur the background area.
  • The method of this embodiment can also blur the background to further highlight the target image in the foreground area. For example, if the shooting scene is a person, the character is the target image: it is closer to the camera and its depth value is larger, while objects such as the landscape belong to the background, are farther from the camera, and have smaller depth values. The area where objects such as the scenery are located, that is, the background area, is blurred, while the area where the character is located, that is, the foreground area, remains clear.
  • Blurring the background more prominently highlights the characters in the foreground area; at the same time, it helps make the edge of the foreground area stand out, laying a foundation for the subsequent stroke processing of that edge.
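The background blurring step can be sketched as a masked blur (a minimal illustration, assuming the foreground mask already comes from the depth segmentation; the 3x3 box blur stands in for whatever blur filter the terminal actually applies):

```python
import numpy as np

def box_blur3_rgb(img):
    """3x3 box blur with edge replication, applied per channel."""
    p = np.pad(img, ((1, 1), (1, 1), (0, 0)), mode="edge").astype(np.float64)
    h, w = img.shape[:2]
    return sum(p[y:y + h, x:x + w] for y in range(3) for x in range(3)) / 9.0

def blur_background(image, foreground_mask):
    """Blur everything outside the foreground mask; keep the foreground sharp."""
    blurred = box_blur3_rgb(image)
    keep = foreground_mask[..., None]  # broadcast the mask over the channels
    return np.where(keep, image.astype(np.float64), blurred)

# Illustrative: a checkered background with a single foreground pixel kept sharp.
img = np.zeros((5, 5, 3))
img[::2, ::2] = 100.0
mask = np.zeros((5, 5), dtype=bool)
mask[2, 2] = True
out = blur_background(img, mask)
```

The foreground pixel keeps its original value while the checkered background is smoothed toward its neighbourhood mean.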
  • Step 704 Enhance an edge of the foreground area by using an image sharpening algorithm.
  • the terminal may further extract edge information of the foreground area, and use an image sharpening algorithm to strengthen an edge of the foreground area.
  • the image sharpening algorithm is a filtering algorithm that enhances the edges of the image, which can enhance the edge contrast of the image.
  • The image sharpening algorithm strengthens and widens the edge line of the foreground area, making the edge of the foreground area conspicuous and facilitating the subsequent stroke processing of that edge. For example, if the shooting scene is a person, as shown in FIG. 5, the area in which the character is located, that is, the area inside the dotted line, is the foreground area, and the other area is the background area; the edge of the foreground area described here is the outer contour line of the foreground area where the character is located, that is, the dashed line shown in FIG. 5, and the sharpening strengthens this outer contour line.
  • Steps 703 and 704 in the method of this embodiment may be performed in either order. After dividing the picture into a background area and a foreground area, the terminal may blur the background area, strengthen the edge of the foreground area by using an image sharpening algorithm, or do both. In this way, the edge of the foreground area is made prominent, which facilitates the subsequent stroke processing of that edge.
  • Step 705: Extract second depth information and second color information of the edge of the foreground area, and, according to the second depth information and the second color information of the edge of the foreground area, perform stroke processing on the edge of the foreground area following the proportional relationship between the depth value of the second depth information and the color depth of the stroke.
  • The terminal may extract the second depth information and the second color information of the edge of the foreground area from the first depth information and the first color information of the picture obtained in step 701; these are the depth values and color information of the pixels on the outer contour edge of the foreground area.
  • The smaller the depth value of the second depth information, the lighter the stroke color; that is, the larger the depth value, the closer the pixel is to the dual camera and the darker its stroke, while the smaller the depth value, the farther it is from the dual camera and the lighter its stroke, producing a depth-dependent transparency effect along the edge. The stroke is also drawn according to the second color information of the edge of the foreground area: the color of the stroke is the same as the current color of the edge of the foreground area, differing only in color depth according to the depth information. In this way the stroked edge fades with depth: the part of the edge closer to the dual camera is stroked more heavily, while the part farther from the dual camera gradually fades to transparent, achieving an effect similar to a 3D stereo card.
  • For example, the terminal can stroke the edge of the foreground area where the character is located in a picture taken by the dual camera. In an overhead shot the head of the character is closest to the lens, so the stroked edge line at the head is thicker and more obvious, and the stroke below the shoulders gradually fades out. The terminal can also blur the background area to highlight the characters in the foreground area; the characters in the picture then produce an effect similar to a 3D stereo card and appear more vivid.
  • Step 706: Adjust the focus point of the picture to obtain third depth information of the edge of the foreground area of the focus-adjusted picture, and, according to the third depth information and the second color information of the edge of the foreground area, re-stroke the edge of the foreground area following the proportional relationship between the depth value of the third depth information and the color depth of the stroke.
  • After the stroke processing, the focus point of the picture may be adjusted continuously, and the third depth information of the edge of the foreground area of the focus-adjusted picture is then obtained.
  • For example, if the shooting scene is a person taken from a low angle and the focus is on the person's head, the head is closest to the lens, so its stroked edge is thicker and more obvious, and the stroke fades to transparent farther down. If the focus is changed to the person's foot, then, when the stroke is drawn, the foot is closest to the lens and its stroked edge is thicker and more pronounced, while the stroke above the legs gradually fades to transparent.
  • The user can input different focus points; the terminal obtains different depth information for each focus point and, following the rule that the smaller the depth value the lighter the stroke color, combines it with the color information of the edge of the foreground area to perform different stroke processing. Pictures with various effects can thus be obtained, and the user can keep adjusting the input focus point until the picture matches the desired effect.
  • embodiments of the present invention can be provided as a method, system, or computer program product. Accordingly, the present invention can take the form of a hardware embodiment, a software embodiment, or a combination of software and hardware. Moreover, the invention can take the form of a computer program product embodied on one or more computer-usable storage media (including but not limited to disk storage and optical storage, etc.) including computer usable program code.
  • These computer program instructions may also be stored in a computer readable memory capable of directing a computer or other programmable data processing device to operate in a particular manner, such that the instructions stored in the computer readable memory produce an article of manufacture including an instruction apparatus that implements the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing device, such that a series of operational steps are performed on the computer or other programmable device to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowchart and/or one or more blocks of the block diagram.


Abstract

An embodiment of the present invention discloses a terminal, comprising: a dual camera configured to take a picture and obtain first depth information and first color information of the picture after shooting; and a processor configured to divide the picture into a background area and a foreground area according to the first depth information of the picture obtained by the dual camera, extract second depth information and second color information of an edge of the foreground area, and, according to the second depth information and the second color information of the edge of the foreground area, perform stroke processing on the edge of the foreground area following the proportional relationship between the depth value in the second depth information and the color depth of the stroke. An embodiment of the present invention also discloses a picture processing method.

Description

A Picture Processing Method and Terminal

Technical Field

The present invention relates to image processing technologies, and in particular, to a picture processing method and a terminal.

Background

Mobile phone photography is currently one of the basic functions of mobile phones. Users often take pictures with their phones and edit them, or use them to make holiday cards or postcards; however, plain pictures are too monotonous.

Summary

In view of this, the embodiments of the present invention are expected to provide a picture processing method and a terminal that can give a picture the stereoscopic card effect of a cartoon-style stroke.

To achieve the above object, the technical solutions of the present invention are implemented as follows:

A terminal, the terminal comprising:

a dual camera configured to take a picture and obtain first depth information and first color information of the picture after shooting;

a processor configured to divide the picture into a background area and a foreground area according to the first depth information obtained by the dual camera; extract second depth information and second color information of an edge of the foreground area; and, according to the second depth information and the second color information of the edge of the foreground area, perform stroke processing on the edge of the foreground area following the proportional relationship between the depth value of the second depth information and the color depth of the stroke.
In the above solution, the dual camera is further configured to capture two images corresponding to the same scene;

the processor is further configured to correct the two images by using a stereo correction algorithm to obtain two corrected images; obtain a disparity map between the two corrected images by using a stereo matching algorithm; and calculate the depth value and color information of each pixel in the scene picture based on the disparity map.

In the above solution, the processor is further configured to extract the depth values and color information of the pixels on the outer contour line of the foreground area.

In the above solution, the processor is further configured to divide the picture into a plurality of depth regions by using the mean shift (Mean Shift) algorithm according to the first depth information of the picture, and determine the background area and the foreground area of the picture.

In the above solution, the processor is further configured to blur the background area after dividing the picture into a background area and a foreground area.

In the above solution, the processor is further configured to strengthen the edge of the foreground area by using an image sharpening algorithm after dividing the picture into a background area and a foreground area.

In the above solution, the processor is further configured to, when performing the stroke processing, stroke following the proportional relationship between the depth value of the second depth information and the color depth of the stroke.

In the above solution, the processor is further configured to, when performing the stroke processing, stroke according to the second color information of the edge of the foreground area.

In the above solution, the color of the stroke is the same as the current color of the edge of the foreground area.

In the above solution, the processor is further configured to, after performing the stroke processing on the edge of the foreground area, adjust the focus point of the picture, obtain third depth information of the edge of the foreground area of the focus-adjusted picture, and, according to the third depth information and the second color information of the edge of the foreground area, re-stroke the edge of the foreground area following the proportional relationship between the depth value of the third depth information and the color depth of the stroke.
A picture processing method, the method comprising:

taking a picture, and obtaining first depth information and first color information of the picture after shooting;

dividing the picture into a background area and a foreground area according to the first depth information of the picture;

extracting second depth information and second color information of an edge of the foreground area, and, according to the second depth information and the second color information of the edge of the foreground area, performing stroke processing on the edge of the foreground area following the proportional relationship between the depth value of the second depth information and the color depth of the stroke.

In the above solution, dividing the picture into a background area and a foreground area according to the first depth information of the picture comprises:

dividing the picture into a plurality of depth regions by using the mean shift (Mean Shift) algorithm according to the first depth information of the picture, and determining the background area and the foreground area of the picture.

In the above solution, taking a picture and obtaining the first depth information and the first color information of the picture after shooting comprises:

capturing two images corresponding to the same scene with two cameras;

correcting the two images by using a stereo correction algorithm to obtain two corrected images;

obtaining a disparity map between the two corrected images by using a stereo matching algorithm;

calculating the depth value and color information of each pixel in the scene picture based on the disparity map.

In the above solution, extracting the second depth information and the second color information of the edge of the foreground area comprises:

extracting the depth values and color information of the pixels on the outer contour line of the foreground area.

In the above solution, after dividing the picture into a background area and a foreground area, the method further comprises:

blurring the background area.

In the above solution, after dividing the picture into a background area and a foreground area, the method further comprises:

strengthening the edge of the foreground area by using an image sharpening algorithm.

In the above solution, when performing the stroke processing, the terminal strokes following the proportional relationship between the depth value of the second depth information and the color depth of the stroke.

In the above solution, when performing the stroke processing, the terminal strokes according to the second color information of the edge of the foreground area.

In the above solution, the color of the stroke is the same as the current color of the edge of the foreground area.

In the above solution, after the stroke processing is performed on the edge of the foreground area, the method further comprises:

adjusting the focus point of the picture, obtaining third depth information of the edge of the foreground area of the focus-adjusted picture, and, according to the third depth information and the second color information of the edge of the foreground area, re-stroking the edge of the foreground area following the proportional relationship between the depth value of the third depth information and the color depth of the stroke.
The embodiments of the present invention provide a picture processing method and a terminal that can give a picture the stereoscopic card effect of a cartoon-style stroke. A picture is taken with a dual camera, and the first depth information and first color information of the picture are obtained; the picture is then divided into a background area and a foreground area according to the first depth information obtained by the dual camera; second depth information and second color information of the edge of the foreground area are extracted, and, according to the second depth information and the second color information of the edge of the foreground area, the edge of the foreground area is stroked following the proportional relationship between the depth value of the second depth information and the color depth of the stroke. This produces an effect similar to a 3D stereo card, achieving the stereoscopic card effect of a cartoon-style stroke, making portrait shooting more interesting, photos more fun, and adding more enjoyment for the user.
Brief Description of the Drawings

FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention;
FIG. 2 is a schematic diagram of the wireless communication system of the mobile terminal shown in FIG. 1;
FIG. 3 is a structural block diagram of a terminal according to Embodiment 1 of the present invention;
FIG. 4 is a schematic diagram of a terminal with a dual camera according to Embodiment 1 of the present invention;
FIG. 5 is a schematic diagram of a user taking a picture with a terminal according to Embodiment 1 of the present invention;
FIG. 6 is a schematic flowchart of a picture processing method according to Embodiment 2 of the present invention;
FIG. 7 is a schematic flowchart of a picture processing method according to Embodiment 3 of the present invention.
Detailed Description

The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings of the embodiments.

A mobile terminal implementing various embodiments of the present invention will now be described with reference to FIG. 1. In the following description, suffixes such as "module", "component", or "unit" used to denote elements are only intended to facilitate the description of the present invention and have no specific meaning in themselves; therefore, "module" and "component" may be used interchangeably.

Mobile terminals may be implemented in various forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following, the terminal is assumed to be a mobile terminal; however, those skilled in the art will understand that, apart from elements specifically intended for mobile purposes, the configurations according to the embodiments of the present invention can also be applied to fixed-type terminals.
FIG. 1 is a schematic diagram of the hardware structure of a mobile terminal implementing various embodiments of the present invention.

The mobile terminal 100 may include a wireless communication unit 110, an audio/video (A/V) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on. FIG. 1 shows a mobile terminal having various components, but it should be understood that not all illustrated components are required; more or fewer components may be implemented instead. The elements of the mobile terminal are described in detail below.

The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.

The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and transmits broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and transmits them to the terminal. The broadcast signals may include TV broadcast signals, radio broadcast signals, data broadcast signals, and the like; moreover, the broadcast signals may further include broadcast signals combined with TV or radio broadcast signals. The broadcast-related information may also be provided via a mobile communication network, in which case it may be received by the mobile communication module 112. The broadcast-related information may exist in various forms, for example as an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H). The broadcast receiving module 111 may receive signal broadcasts by using various types of broadcast systems; in particular, it may receive digital broadcasts by using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the MediaFLO data broadcast system, and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit various broadcast systems that provide broadcast signals as well as the above digital broadcast systems. The broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).

The mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server. Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received according to text and/or multimedia messages.
无线互联网模块113支持移动终端的无线互联网接入。该模块可以内部或外部地耦接到终端。该模块所涉及的无线互联网接入技术可以包括无线局域网(WLAN)(Wi-Fi)、无线宽带(Wibro)、全球微波互联接入(Wimax)、高速下行链路分组接入(HSDPA)等等。
短程通讯模块114是用于支持短程通讯的模块。短程通讯技术的一些示例包括蓝牙TM、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)、紫蜂TM等等。
位置信息模块115是用于检查或获取移动终端的位置信息的模块。位置信息模块的典型示例是全球定位系统(GPS)。根据当前的技术,作为GPS的位置信息模块115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置信息。当前,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,作为GPS位置信息模块115能够通过实时地连续计算当前位置信息来计算速度信息。
A/V输入单元120用于接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122,相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160(或其它存储介质)中或者经由无线通讯单元110进行发送,可以根据移动终端的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音(音频数据),并且能够将这样的声音处理为音频数据。处理后的音频(语音)数据可以在电话通话模式的情况下转换为可经由移动通讯模块112发送到移动通讯基站的格式输出。麦克风122可以实施各种类型的噪声消除(或抑制)算法以消除(或抑制)在接收和发送音频信号的过程中产生的噪声或者干扰。
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。
感测单元140检测移动终端100的当前状态（例如，移动终端100的打开或关闭状态）、移动终端100的位置、用户对于移动终端100的接触（即，触摸输入）的有无、移动终端100的取向、移动终端100的加速或减速移动和方向等等，并且生成用于控制移动终端100的操作的命令或信号。例如，当移动终端100实施为滑动型移动电话时，感测单元140可以感测该滑动型电话是打开还是关闭。另外，感测单元140能够检测电源单元190是否提供电力或者接口单元170是否与外部装置耦接。感测单元140可以包括接近传感器141，这将在下面结合触摸屏来进行描述。
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、用于连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以是存储用于验证用户使用移动终端100的各种信息并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外,具有识别模块的装置(下面称为"识别装置")可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以用于接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以用于在移动终端和外部装置之间传输数据。
另外，当移动终端100与外部底座连接时，接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径，或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号（例如，音频信号、视频信号、警报信号、振动信号等等）。输出单元150可以包括显示单元151、音频输出模块152、警报单元153等等。
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通讯(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可用于检测触摸输入压力以及触摸输入位置和触摸输入面积。
音频输出模块152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时，将无线通讯单元110接收的或者在存储器160中存储的音频数据转换为音频信号并且输出为声音。而且，音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出（例如，呼叫信号接收声音、消息接收声音等等）。音频输出模块152可以包括扬声器、蜂鸣器等等。
警报单元153可以提供输出以将事件的发生通知给移动终端100。典型的事件可以包括呼叫接收、消息接收、键信号输入、触摸输入等等。除了音频或视频输出之外,警报单元153可以以不同的方式提供输出以通知事件的发生。例如,警报单元153可以以振动的形式提供输出,当接收到呼叫、消息或一些其它进入通讯(incoming communication)时,警报单元153可以提供触觉输出(即,振动)以将其通知给用户。通过提供这样的触觉输出,即使在用户的移动电话处于用户的口袋中时,用户也能够识别出各种事件的发生。警报单元153也可以经由显示单元151或音频输出模块152提供通知事件的发生的输出。
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等,或者可以暂时地存储已经输出或将要输出的数据(例如,电话簿、消息、静态图像、视频等等)。而且,存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。
控制器180通常控制移动终端的总体操作。例如，控制器180执行与语音通话、数据通讯、视频通话等等相关的控制和处理。另外，控制器180可以包括用于再现（或回放）多媒体数据的多媒体模块181，多媒体模块181可以构造在控制器180内，或者可以构造为与控制器180分离。控制器180可以执行模式识别处理，以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。
至此,已经按照其功能描述了移动终端。下面,为了简要起见,将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。因此,本发明能够应用于任何类型的移动终端,并且不限于滑动型移动终端。
如图1中所示的移动终端100可以被构造为利用经由帧或分组发送数据的诸如有线和无线通讯系统以及基于卫星的通讯系统来操作。
现在将参考图2描述其中根据本发明的移动终端能够操作的通讯系统。
这样的通讯系统可以使用不同的空中接口和/或物理层。例如,由通讯系统使用的空中接口包括例如频分多址(FDMA)、时分多址(TDMA)、码分多址(CDMA)和通用移动通讯系统(UMTS)(特别地,长期演进(LTE))、 全球移动通讯系统(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通讯系统,但是这样的教导同样适用于其它类型的系统。
参考图2，CDMA无线通讯系统可以包括多个移动终端100、多个基站(BS)270、基站控制器(BSC)275和移动交换中心(MSC)280。MSC280被构造为与公共电话交换网络(PSTN)290形成接口。MSC280还被构造为与可以经由回程线路耦接到基站270的BSC275形成接口。回程线路可以根据若干已知的接口中的任一种来构造，所述接口包括例如E1/T1、ATM、IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是，如图2中所示的系统可以包括多个BSC275。
每个BS270可以服务一个或多个分区(或区域),由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS270。或者,每个分区可以由用于分集接收的两个或更多天线覆盖。每个BS270可以被构造为支持多个频率分配,并且每个频率分配具有特定频谱(例如,1.25MHz,5MHz等等)。
分区与频率分配的交叉可以被称为CDMA信道。BS270也可以被称为基站收发器子系统(BTS)或者其它等效术语。在这样的情况下,术语"基站"可以用于笼统地表示单个BSC275和至少一个BS270。基站也可以被称为"蜂窝站"。或者,特定BS270的各分区可以被称为多个蜂窝站。
如图2中所示,广播发射器(BT)295将广播信号发送给在系统内操作的移动终端100。如图1中所示的广播接收模块111被设置在移动终端100处以接收由BT295发送的广播信号。在图2中,示出了几个全球定位系统(GPS)卫星300。卫星300帮助定位多个移动终端100中的至少一个。
在图2中，描绘了多个卫星300，但是理解的是，可以利用任何数目的卫星获得有用的定位信息。如图1中所示的作为GPS的位置信息模块115通常被构造为与卫星300配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外，可以使用可以跟踪移动终端的位置的其它技术。另外，至少一个GPS卫星300可以选择性地或者额外地处理卫星DMB传输。
作为无线通讯系统的一个典型操作，BS270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它类型的通讯。特定基站270接收的每个反向链路信号在特定BS270内进行处理，获得的数据被转发给相关的BSC275。BSC275提供通话资源分配以及包括BS270之间软切换过程协调在内的移动管理功能。BSC275还将接收到的数据路由到MSC280，MSC280提供用于与PSTN290形成接口的额外的路由服务。类似地，PSTN290与MSC280形成接口，MSC280与BSC275形成接口，并且BSC275相应地控制BS270以将正向链路信号发送到移动终端100。
基于上述移动终端硬件结构以及通讯系统,提出本发明方法各个实施例。
实施例1
本发明实施例提供了一种终端，如图3所示，所述终端包括：双摄像头301和处理器302；其中，
所述双摄像头301,配置为拍摄图片,并在拍摄之后获得所述图片的第一深度信息和第一颜色信息。
本实施例中,所述终端可以为手机,所述双摄像头301可以如图4所示,位于终端的背面,仿人眼,在同一水平上左右分布。
在应用本实施例中的终端进行拍摄时,用户需要先打开终端上的拍摄应用,此时终端上的显示器上会显示出该终端的双摄像头对着的画面,用户可以调整拍摄角度,使终端的显示器上显示出用户将要拍摄的景物。然后,如图5所示,用户可以通过触摸终端上的拍摄键,拍摄图片。
双摄像头301利用仿生学原理，通过标定后的双摄像头得到同步曝光图像。因为双摄像头301的两个摄像头之间存在一定的距离，所以同一景物通过两个镜头所成的像有一定的差别，即视差；因为视差信息的存在，就可以由视差计算出景物的深度信息。
示例的,在同一时刻,该终端可以分别通过双摄像头301上的两个摄像头拍摄获得同一个景物对应的两幅图像,将所述两幅图像利用立体校正算法校正获得两幅校正后图像;使用立体匹配算法获得两幅校正后图像之间的视差图D;由D中任意像素点的视差d,使用以下公式计算出所述拍摄取景画面中各个像素点的深度值Z:
Z = f·T/d
其中，f是该拍摄装置中两个摄像头像平面到主平面的距离，即两个摄像头小孔成像模型的焦距（本实施例中两个摄像头的f相同），T是两个摄像头之间的间距。
双摄像头301拍摄完图片后,按照上述方法就可以获得所述图片的第一深度信息,即获得所述图片上每个像素点的深度值;同时还可以获得图片的颜色信息,即每个像素点的颜色信息。示例的,每个像素点的颜色信息可以是用RGB值表示。
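上述由视差计算深度的过程可以用如下Python代码示意（函数名、参数与示例数值均为本文为说明而假设，并非本发明的具体实现）：

```python
def depth_from_disparity(disparity, f, T):
    """按公式 Z = f*T/d 由视差图计算深度图的示意实现。

    disparity：二维列表形式的视差图（单位：像素）；
    f：焦距（像素）；T：两摄像头之间的基线间距。
    视差为 0 的像素点无法求深度，记为 None。
    """
    depth = []
    for row in disparity:
        depth.append([f * T / d if d > 0 else None for d in row])
    return depth

# 示例：焦距 700 像素、基线 60mm 时，视差 35 像素对应深度 1200mm
demo = depth_from_disparity([[35, 70, 0]], f=700, T=60)
print(demo)  # [[1200.0, 600.0, None]]
```

可以看到，视差越大的像素点深度值越小，即离摄像头越近（此处的深度值为物距，与正文中"深度值大表示离摄像头近"的约定方向相反，实际实现中只需统一约定即可）。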
处理器302,配置为根据所述双摄像头301获得的所述图片的第一深度信息将所述图片分为背景区域和前景区域;提取出所述前景区域的边缘的第二深度信息和第二颜色信息,并根据所述前景区域的边缘的第二深度信息以及第二颜色信息,按照所述第二深度信息的深度值与描边时颜色深度的正比关系,对所述前景区域的边缘进行描边处理。
所述双摄像头301获得所述图片的第一深度信息后，处理器302可以根据所述图片的第一深度信息将所述图片分为背景区域和前景区域。
示例的，终端拍摄的图片是人物风景图，处理器302可以根据所述图片的深度信息将所述图片分为背景区域即风景图像区域，前景区域即人物图像区域。
在本实施例中,所述处理器302,还配置为根据所述图片的第一深度信息,采用均值漂移Mean Shift算法将所述图片划分为若干深度区域,确定出所述图片的背景区域和前景区域。
Mean Shift算法是一种有效的统计迭代算法，由Fukunaga在1975年首先提出。直到1995年，Cheng改进了Mean Shift算法中的核函数和权重函数，才扩大了该算法的适用范围。基于Mean Shift算法的图像分割是一种基于区域的分割方法，这种分割方法跟人眼对图像的分析特性极其相近，并且具有很强的适应性和鲁棒性。它对图像的平滑区域和图像纹理区域并不敏感，所以能够得到很好的分割结果。此算法已经在计算机视觉领域得到了较为广泛的应用并取得了较大的成功。本实施例可以应用Mean Shift算法，将所述图片分割为若干深度区域，然后将离摄像头较近的深度区域划分为前景区域，将离摄像头较远的深度区域划分为背景区域，这样就可以确定出所述图片的背景区域和前景区域。
示例的,拍摄场景是人物拍摄,如图5所示,人物所在区域即虚线内的区域距离摄像机较近,深度值较大,根据所述第一深度信息可以被划分为前景区域,其他区域距离摄像机较远,深度值较小,根据所述第一深度信息可以被划分为背景区域。
处理器302在确定出所述图像的背景区域和前景区域后,需要先提取出所述前景区域的边缘的第二深度信息和第二颜色信息,然后对前景区域的边缘进行描边处理。
这里假定拍摄场景是人物拍摄,如图5所示,确定人物所在区域即虚线内的区域为前景区域,其他区域为背景区域,这里所述的前景区域的边缘即人物所在的前景区域的外轮廓线,即图5中所示的虚线,对前景区域的边缘进行描边处理即在图5中所示的虚线上进行描边处理。
本实施例可将风景等物体所在的区域（即背景区域）进行虚化，人物所在的区域（即前景区域）保持清晰。这样，加强背景图像的虚化效果，可以更突出前景区域的人物，也有利于凸显出所述前景区域的边缘。
处理器302还可以提取出所述前景区域的边缘信息,采用图像锐化算法强化所述前景区域的边缘。
图像锐化算法是一种增强图像边缘的滤波算法,可以使图像的边缘对比度加强。本实施例中利用图像锐化的算法,将所述前景区域的边缘线条增强扩充,使得前景区域的边缘显著化,方便后续对该前景区域的边缘进行描边处理。
本实施例提供的处理器302可以在将所述图片分为背景区域和前景区域后,将所述背景区域进行虚化;或者采用图像锐化算法强化所述前景区域的边缘,或者,将所述背景区域进行虚化并采用图像锐化算法强化所述前景区域的边缘;这样都可以使得前景区域的边缘显著化,方便后续对该前景区域的边缘进行描边处理。
处理器302在进行描边处理的过程,需要先提取出所述前景区域的边缘的第二深度信息和第二颜色信息,所述前景区域的边缘的第二深度信息和第二颜色信息即所述前景区域的外轮廓边线上的像素点的深度值和颜色信息。
处理器302提取出所述前景区域的边缘的第二深度信息和第二颜色信息后，按照所述第二深度信息的深度值与描边时颜色深度的正比关系进行边缘色彩填充，即边缘的第二深度信息的深度值越小，描边颜色越浅；深度值越大，越靠近双摄像头，描边的颜色就越深；深度值越小，越远离双摄像头，描边的颜色就越浅，这样可以得到一种沿着边缘的纵深透明度效果。
同时，在上述描边的过程中，还需要依据所述前景区域的边缘的第二颜色信息，即描边时的颜色需要与所述前景区域边缘当前的颜色相同，只按照其深度信息在颜色深浅上有所不同。
这样,描边边线利用深度信息透明度渐变消失,越靠近双摄像头位置的描边边线越明显,在远离双摄像头位置的描边边线逐渐消失,可以达到类似3D立体卡片的效果。
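"深度值越大描边颜色越深、越小越浅"的正比关系可以用如下Python代码示意（深度归一化方式与"向白色混合表示变浅"的处理均为本文假设，仅说明该规则本身）：

```python
def stroke_color(edge_rgb, z, z_min, z_max):
    """按深度值与描边颜色深浅的正比关系计算描边颜色的示意。

    描边颜色与边缘当前颜色 edge_rgb 相同，仅深浅不同：深度值 z
    越大（按本文约定即离摄像头越近），不透明度 alpha 越高、颜色越深；
    z 越小则颜色越浅，从而形成沿边缘的纵深透明度渐变。
    """
    alpha = (z - z_min) / (z_max - z_min) if z_max > z_min else 1.0
    def mix(c):
        # alpha 越小，颜色越接近白色(255)，即描边越浅
        return round(255 - (255 - c) * alpha)
    r, g, b = edge_rgb
    return (mix(r), mix(g), mix(b))

# 最近的边缘像素描边颜色与边缘当前颜色完全相同
print(stroke_color((200, 40, 40), z=1000, z_min=0, z_max=1000))  # (200, 40, 40)
# 深度减半时颜色变浅（向白色靠拢）
print(stroke_color((200, 40, 40), z=500, z_min=0, z_max=1000))
```

沿前景外轮廓逐像素应用该函数，即可得到正文所述"越靠近双摄像头描边越明显、远离处逐渐消失"的渐变描边效果。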
所述处理器302,还配置为在对所述前景区域的边缘进行描边处理之后,调整所述图片的对焦点,获得对焦点调整后的图片的所述前景区域的边缘的第三深度信息,根据所述第三深度信息以及所述前景区域的边缘的第二颜色信息,按照所述第三深度信息的深度值与描边时颜色深度的正比关系,重新对所述前景区域的边缘进行描边处理。
本实施例中，用户如果不满意获得的图片效果，还可以不断调整所述图片的对焦点，然后获得对焦点调整后的图片前景区域的第三深度信息；描边时，边缘的第三深度信息的深度值越小，描边的颜色越浅，据此重新对所述前景区域的边缘进行描边处理。即调整对焦点后，按照对焦点调整后的图片的第三深度信息进行描边，获得不同效果的图片。
示例的,如拍摄场景是人物拍摄中,拍摄时为俯拍,对焦点在人物的头部,人物的头部是离镜头最近的,描边边线会较粗较明显,过肩部以下的描边开始透明度渐变逐渐消失;用户不满意,将对焦点改在人物的脚部,这样在描边时人物的脚部是离镜头最近的,描边边线会较粗较明显,过腿部以上的描边开始透明度渐变逐渐消失。
本实施例提供的终端,用户可以输入不同的对焦点,本终端的处理器可以根据不同对焦点获得不同的边缘深度信息,然后按照边缘深度信息的深度值与描边时颜色深度的正比关系,进行不同的描边处理,可以获得各种不同效果的图片。用户可以不断地调整输入对焦点,最终获得满足用户需要的效果的图片。
利用本实施例提供的终端俯拍人像时,该终端的处理器可以将双摄像头拍摄的图片中人物所在的前景区域的边缘进行描边处理,在俯拍时人物的头部是离镜头最近的,描边边线会较粗较明显,过肩部以下的描边开始透明度渐变逐渐消失;同时,本终端的处理器还可以对背景区域进行虚化,突出前景区域的人物;这样所述图片中的人物就会产生类似3D立体卡片的效果,人物形象更加生动。
本实施例提供的终端处理后的图片可以产生类似3D立体卡片的效果,使人物拍摄更有趣味;用户经常会编辑图片,或用照片制作成节日卡片或明信片,单纯的图片似乎太过单调,本实施例提供的终端采用卡通化描边处理图片,可以产生立体卡片效果,让照片更加好玩,为用户增添更多的乐趣。
实施例2
本发明实施例提供了一种图片处理方法,如图6所示,本实施例方法的处理流程包括以下步骤:
步骤601、拍摄图片,并在拍摄之后获得所述图片的第一深度信息和第一颜色信息。
本实施例方法是通过终端实现的,所述终端可以为手机,在此步骤中所述终端可以应用双摄像头来拍摄获得图片,所述双摄像头可以如图4所示,位于终端的背面,仿人眼,在同一水平上左右分布。
在进行拍摄时,用户需要先打开终端上的拍摄应用,此时终端上的显示器上会显示出该终端的双摄像头对着的画面,用户可以调整拍摄角度,使终端的显示器上显示出用户将要拍摄的景物。然后,如图5所示,用户可以通过触摸终端上的拍摄键,拍摄图片。
双摄像头利用仿生学原理，通过标定后的双摄像头得到同步曝光图像。因为双摄像头的两个摄像头之间存在一定的距离，所以同一景物通过两个镜头所成的像有一定的差别，即视差；因为视差信息的存在，就可以由视差计算出景物的深度信息。
示例的,在同一时刻,该终端可以分别通过双摄像头上的两个摄像头拍摄获得同一个景物对应的两幅图像,将所述两幅图像利用立体校正算法校正获得两幅校正后图像;使用立体匹配算法获得两幅校正后图像之间的视差图D;由D中任意像素点的视差d,使用以下公式计算出所述拍摄取景画面中各个像素点的深度值Z:
Z = f·T/d
其中，f是该拍摄装置中两个摄像头像平面到主平面的距离，即两个摄像头小孔成像模型的焦距（本实施例中两个摄像头的f相同），T是两个摄像头之间的间距。
双摄像头拍摄完图片后,按照上述方法就可以获得所述图片的第一深度信息,即获得所述图片上每个像素点的深度值;同时还可以获得图片的第一颜色信息,即每个像素点的颜色信息。示例的,每个像素点的颜色信息可以是用RGB值表示。
步骤602、根据所述图片的第一深度信息将所述图片分为背景区域和前景区域。
终端在获得的所述图片的深度信息后,就可以根据所述图片的深度信息将所述图片分为背景区域和前景区域。
示例的,如图5所示,终端拍摄的图片是人物风景图,所述终端可以根据所述图片的深度信息将所述图片分为背景区域即风景图像区域,前景区域即人物图像区域(图5中所示的虚线内的区域)。
步骤603、提取出所述前景区域的边缘的第二深度信息和第二颜色信息,并根据所述前景区域的边缘的第二深度信息以及第二颜色信息,按照所述第二深度信息的深度值与描边时颜色深度的正比关系,对所述前景区 域的边缘进行描边处理。
终端可以从步骤601中获得的图片的第一深度信息和第一颜色信息中提取出所述前景区域的边缘的第二深度信息和第二颜色信息;所述前景区域的边缘的第二深度信息和第二颜色信息即所述前景区域的外轮廓边线上的像素点的深度值和颜色信息。
这样，终端在进行描边处理时，按照所述第二深度信息的深度值与描边时颜色深度的正比关系，即所述第二深度信息的深度值越小，描边颜色越浅；深度值越大，越靠近双摄像头，描边的颜色就越深；深度值越小，越远离双摄像头，描边的颜色就越浅，从而可以得到一种沿着边缘的纵深透明度效果。
同时,在上述描边的过程中,还需要依据所述前景区域的边缘的第二颜色信息,即描边时的颜色需要与所述前景区域边缘当前的颜色相同,只按照其深度信息在颜色深浅上有所不同。
这样,描边边线利用深度信息透明度渐变消失,越靠近双摄像头位置的描边边线越明显,在远离双摄像头位置的描边边线逐渐消失,可以达到类似3D立体卡片的效果。
利用本实施例方法提供的终端俯拍人像时,该终端可以将双摄像头拍摄的图片中人物所在的前景区域的边缘进行描边处理,在俯拍时人物的头部是离镜头最近的,描边边线会较粗较明显,过肩部以下的描边开始透明度渐变逐渐消失;同时,本终端还可以对背景区域进行虚化,突出前景区域的人物;这样所述图片中的人物就会产生类似3D立体卡片的效果,人物形象更加生动。
本实施例提供的终端处理后的图片可以产生类似3D立体卡片的效果，使人物拍摄更有趣味；用户经常会编辑图片，或用照片制作成节日卡片或明信片，单纯的图片似乎太过单调，本实施例提供的终端采用卡通化描边处理图片，可以产生立体卡片效果，让照片更加好玩，为用户增添更多的乐趣。
实施例3
本发明实施例提供了一种图片处理方法,如图7所示,本实施例方法的处理流程包括以下步骤:
步骤701、拍摄图片,并在拍摄之后获得所述图片的第一深度信息和第一颜色信息。
本实施例方法是通过终端实现的,所述终端可以为手机,在此步骤中所述终端可以应用双摄像头来拍摄获得图片,所述双摄像头可以如图4所示,位于终端的背面,仿人眼,在同一水平上左右分布。
在进行拍摄时,用户需要先打开终端上的拍摄应用,此时终端上的显示器上会显示出该终端的双摄像头对着的画面,用户可以调整拍摄角度,使终端的显示器上显示出用户将要拍摄的景物。然后,如图5所示,用户可以通过触摸终端上的拍摄键,拍摄图片。
双摄像头利用仿生学原理，通过标定后的双摄像头得到同步曝光图像。因为双摄像头的两个摄像头之间存在一定的距离，所以同一景物通过两个镜头所成的像有一定的差别，即视差；因为视差信息的存在，就可以由视差计算出景物的深度信息。
示例的,在同一时刻,该终端可以分别通过双摄像头上的两个摄像头拍摄获得同一个景物对应的两幅图像,将所述两幅图像利用立体校正算法校正获得两幅校正后图像;使用立体匹配算法获得两幅校正后图像之间的视差图D;由D中任意像素点的视差d,使用以下公式计算出所述拍摄取景画面中各个像素点的深度值Z:
Z = f·T/d
其中，f是该拍摄装置中两个摄像头像平面到主平面的距离，即两个摄像头小孔成像模型的焦距（本实施例中两个摄像头的f相同），T是两个摄像头之间的间距。
双摄像头拍摄完图片后,按照上述方法就可以获得所述图片的第一深度信息,即获得所述图片上每个像素点的深度值;同时还可以获得图片的第一颜色信息,即每个像素点的颜色信息。示例的,每个像素点的颜色信息可以是用RGB值表示。
步骤702、根据所述图片的第一深度信息,采用均值漂移Mean Shift算法将所述图片划分为若干深度区域,确定出所述图像的背景区域和前景区域。
终端在获得的所述图片的第一深度信息后,就可以根据所述图片的第一深度信息将所述图片分为背景区域和前景区域。
示例的,拍摄场景是人物拍摄,如图5所示,人物所在区域即虚线内的区域距离摄像机较近,深度值较大,可以被划分为前景区域,其他区域距离摄像机较远,深度值较小,可以被划分为背景区域。
在本实施例方法中,所述终端可以采用Mean Shift算法,根据所述图片的深度信息将所述图片划分为若干深度区域,然后确定出所述图像的背景区域和前景区域。
Mean Shift算法是一种有效的统计迭代算法，由Fukunaga在1975年首先提出。直到1995年，Cheng改进了Mean Shift算法中的核函数和权重函数，才扩大了该算法的适用范围。基于Mean Shift算法的图像分割是一种基于区域的分割方法，这种分割方法跟人眼对图像的分析特性极其相近，并且具有很强的适应性和鲁棒性。它对图像的平滑区域和图像纹理区域并不敏感，所以能够得到很好的分割结果。此算法已经在计算机视觉领域得到了较为广泛的应用并取得了较大的成功。本实施例可以应用Mean Shift算法，将所述图片分割为若干深度区域，然后将离摄像头较近的深度区域划分为前景区域，将离摄像头较远的深度区域划分为背景区域，这样就可以确定出所述图片的背景区域和前景区域。
步骤703、将所述背景区域进行虚化。
本实施例方法还可以将背景虚化,更加凸显出前景区域中的目标图像。
这里假定拍摄场景是人物拍摄，人物为目标图像，距离摄像机较近，深度值较大，而风景等物体属于背景，距离摄像机较远，深度值较小。这样可将风景等物体所在的区域（即背景区域）进行虚化，人物所在的区域（即前景区域）保持清晰。这样，加强背景图像的虚化效果，更能突出前景区域的人物；同时也有利于凸显出所述前景区域的边缘，为后续对该前景区域的边缘进行描边处理打下基础。
步骤704、采用图像锐化算法强化所述前景区域的边缘。
本实施例方法中所述终端还可以提取出所述前景区域的边缘信息,采用图像锐化算法强化所述前景区域的边缘。
图像锐化算法是一种增强图像边缘的滤波算法,可以使图像的边缘对比度加强。本实施例中利用图像锐化的算法,将所述前景区域的边缘线条增强扩充,使得前景区域的边缘显著化,方便后续对该前景区域的边缘进行描边处理。
这里假定拍摄场景是人物拍摄,如图5所示,确定人物所在区域即虚线内的区域为前景区域,其他区域为背景区域,这里所述的前景区域的边缘即人物所在的前景区域的外轮廓线,即图5中所示的虚线,强化所述前景区域的边缘即强化人物所在的前景区域的外轮廓虚线。
本实施例方法中步骤703和704并没有先后顺序，并且本实施例方法中终端可以在将所述图片分为背景区域和前景区域后，将所述背景区域进行虚化；或者采用图像锐化算法强化所述前景区域的边缘，或者，将所述背景区域进行虚化并采用图像锐化算法强化所述前景区域的边缘；这样都可以使得前景区域的边缘显著化，方便后续对该前景区域的边缘进行描边处理。
步骤705、提取出所述前景区域的边缘的第二深度信息和第二颜色信息,并根据所述前景区域的边缘的第二深度信息以及第二颜色信息,按照所述第二深度信息的深度值与描边时颜色深度的正比关系,对所述前景区域的边缘进行描边处理。
终端可以从步骤701中获得的图片的第一深度信息和第一颜色信息中提取出所述前景区域的边缘的第二深度信息和第二颜色信息;所述前景区域的边缘的第二深度信息和第二颜色信息即所述前景区域的外轮廓边线上的像素点的深度值和颜色信息。
这样,终端在进行描边处理时,所述第二深度信息的深度值越小描边颜色越浅;即深度值越大,越靠近双摄像头,描边的颜色就要越深;深度值越小,越远离双摄像头,描边的颜色就要越浅,这样可以得到一种沿着边缘的纵深透明度效果。
同时,在上述描边的过程中,还需要依据所述前景区域的边缘的第二颜色信息,即描边时的颜色需要与所述前景区域边缘当前的颜色相同,只按照其深度信息在颜色深浅上有所不同。
这样,描边边线利用深度信息透明度渐变消失,越靠近双摄像头位置的边缘的描边边线越明显,在远离双摄像头位置的边缘的描边边线逐渐消失,可以达到类似3D立体卡片的效果。
利用本实施例方法提供的终端俯拍人像时,该终端可以将双摄像头拍摄的图片中人物所在的前景区域的边缘进行描边处理,在俯拍时人物的头部是离镜头最近的,描边边线会较粗较明显,过肩部以下的描边开始透明度渐变逐渐消失;同时,本终端还可以对背景区域进行虚化,突出前景区域的人物;这样所述图片中的人物就会产生类似3D立体卡片的效果,人物形象更加生动。
步骤706、调整所述图片的对焦点,获得对焦点调整后的图片的所述前景区域的边缘的第三深度信息,根据所述第三深度信息以及所述前景区域的边缘的第二颜色信息,按照所述第三深度信息的深度值与描边时颜色深度的正比关系,重新对所述前景区域的边缘进行描边处理。
本实施例方法中,用户如果不满意步骤705中获得的图片效果,还可以不断调整所述图片的对焦点,然后获得对焦点调整后的图片的所述前景区域的边缘的第三深度信息,按照所述第三深度信息的深度值与描边时颜色深度的正比关系,重新对所述前景区域的边缘进行描边处理;即调整对焦点后,按照焦点调整后的第三深度信息进行描边,获得各种不同效果的图片。
示例的,如拍摄场景是人物拍摄中,拍摄时为俯拍,对焦点在人物的头部,人物的头部是离镜头最近的,描边边线会较粗较明显,过肩部以下的描边开始透明度渐变逐渐消失;若将对焦点改在人物的脚部,在描边时人物的脚部是离镜头最近的,描边边线会较粗较明显,过腿部以上的描边开始透明度渐变逐渐消失。
本实施例提供的终端,用户可以输入不同的对焦点,本终端可以根据不同对焦点获得不同的深度信息,然后按照所述深度信息的深度值越小描边颜色越浅的描边规则以及所述前景区域的边缘的颜色信息,进行不同的描边处理,进而可以获得各种不同效果的图片。用户可以不断地调整输入对焦点,最终获得满足用户需要的效果的图片。
本领域内的技术人员应明白,本发明的实施例可提供为方法、系统、或计算机程序产品。因此,本发明可采用硬件实施例、软件实施例、或结合软件和硬件方面的实施例的形式。而且,本发明可采用在一个或多个其中包含有计算机可用程序代码的计算机可用存储介质(包括但不限于磁盘存储器和光学存储器等)上实施的计算机程序产品的形式。
本发明是参照根据本发明实施例的方法、设备(系统)、和计算机程序产品的流程图和/或方框图来描述的。应理解可由计算机程序指令实现流程图和/或方框图中的每一流程和/或方框、以及流程图和/或方框图中的流程和/或方框的结合。可提供这些计算机程序指令到通用计算机、专用计算机、嵌入式处理机或其他可编程数据处理设备的处理器以产生一个机器,使得通过计算机或其他可编程数据处理设备的处理器执行的指令产生用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的装置。
这些计算机程序指令也可存储在能引导计算机或其他可编程数据处理设备以特定方式工作的计算机可读存储器中,使得存储在该计算机可读存储器中的指令产生包括指令装置的制造品,该指令装置实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能。
这些计算机程序指令也可装载到计算机或其他可编程数据处理设备上,使得在计算机或其他可编程设备上执行一系列操作步骤以产生计算机实现的处理,从而在计算机或其他可编程设备上执行的指令提供用于实现在流程图一个流程或多个流程和/或方框图一个方框或多个方框中指定的功能的步骤。
以上所述,仅为本发明的较佳实施例而已,并非用于限定本发明的保护范围。

Claims (20)

  1. 一种终端,所述终端包括:
    双摄像头,配置为拍摄图片,并在拍摄之后获得所述图片的第一深度信息和第一颜色信息;
    处理器,配置为根据所述双摄像头获得的第一深度信息将所述图片分为背景区域和前景区域;提取出所述前景区域的边缘的第二深度信息和第二颜色信息,并根据所述前景区域的边缘的第二深度信息以及第二颜色信息,按照所述第二深度信息的深度值与描边时颜色深度的正比关系,对所述前景区域的边缘进行描边处理。
  2. 根据权利要求1所述的终端,其中,所述双摄像头,还配置为拍摄获得同一景物对应的两幅图像;
    所述处理器,还配置为将所述两幅图像利用立体校正算法校正获得两幅校正后图像;使用立体匹配算法获得两幅校正后图像之间的视差图;基于所述视差图计算出所述景物画面中各个像素点的深度值以及颜色信息。
  3. 根据权利要求1所述的终端,其中,所述处理器,还配置为提取出所述前景区域的外轮廓边线上的像素点的深度值和颜色信息。
  4. 根据权利要求1所述的终端,其中,所述处理器,还配置为根据所述图片的第一深度信息,采用均值漂移Mean Shift算法将所述图片划分为若干深度区域,确定出所述图片的背景区域和前景区域。
  5. 根据权利要求1所述的终端，其中，所述处理器，还配置为在将所述图片分为背景区域和前景区域后，将所述背景区域虚化。
  6. 根据权利要求1所述的终端,其中,所述处理器,还配置为在将所述图片分为背景区域和前景区域后,采用图像锐化算法强化所述前景区域的边缘。
  7. 根据权利要求1所述的终端，其中，所述处理器，还配置为在进行描边处理时，按照所述第二深度信息的深度值与描边时颜色深度的正比关系进行描边。
  8. 根据权利要求7所述的终端,其中,所述处理器,还配置为在进行描边处理时,依据所述前景区域的边缘的第二颜色信息进行描边。
  9. 根据权利要求8所述的终端,其中,描边时的颜色与所述前景区域边缘当前的颜色相同。
  10. 根据权利要求1所述的终端,其中,所述处理器,还配置为在对所述前景区域的边缘进行描边处理之后,
    调整所述图片的对焦点,获得对焦点调整后的图片的所述前景区域的边缘的第三深度信息,根据所述第三深度信息以及所述前景区域的边缘的第二颜色信息,按照所述第三深度信息的深度值与描边时颜色深度的正比关系,重新对所述前景区域的边缘进行描边处理。
  11. 一种图片处理方法,所述方法包括:
    拍摄图片,并在拍摄之后获得所述图片的第一深度信息和第一颜色信息;
    根据所述图片的第一深度信息将所述图片分为背景区域和前景区域;
    提取出所述前景区域的边缘的第二深度信息和第二颜色信息,并根据所述前景区域的边缘的第二深度信息以及第二颜色信息,按照所述第二深度信息的深度值与描边时颜色深度的正比关系,对所述前景区域的边缘进行描边处理。
  12. 根据权利要求11所述的方法,其中,所述拍摄图片,并在拍摄之后获得所述图片的第一深度信息和第一颜色信息,包括:
    利用两个摄像头拍摄获得同一景物对应的两幅图像;
    将所述两幅图像利用立体校正算法校正获得两幅校正后图像;
    使用立体匹配算法获得两幅校正后图像之间的视差图;
    基于所述视差图计算出所述景物画面中各个像素点的深度值以及颜色信息。
  13. 根据权利要求11所述的方法,其中,所述提取出所述前景区域的边缘的第二深度信息和第二颜色信息,包括:
    提取出所述前景区域的外轮廓边线上的像素点的深度值和颜色信息。
  14. 根据权利要求11所述的方法,其中,所述根据所述图片的第一深度信息将所述图片分为背景区域和前景区域,包括:
    根据所述图片的第一深度信息,采用均值漂移Mean Shift算法将所述图片划分为若干深度区域,确定出所述图片的背景区域和前景区域。
  15. 根据权利要求11所述的方法,其中,在所述将所述图片分为背景区域和前景区域之后,所述方法还包括:
    将所述背景区域进行虚化。
  16. 根据权利要求11所述的方法,其中,在所述将所述图片分为背景区域和前景区域之后,所述方法还包括:
    采用图像锐化算法强化所述前景区域的边缘。
  17. 根据权利要求11所述的方法,其中,终端在进行描边处理时,按照所述第二深度信息的深度值与描边时颜色深度的正比关系进行描边。
  18. 根据权利要求17所述的方法,其中,终端在进行描边处理时,依据所述前景区域的边缘的第二颜色信息进行描边。
  19. 根据权利要求18所述的方法,其中,描边时的颜色与所述前景区域边缘当前的颜色相同。
  20. 根据权利要求11所述的方法,其中,在对所述前景区域的边缘进行描边处理之后,所述方法还包括:
    调整所述图片的对焦点，获得对焦点调整后的图片的所述前景区域的边缘的第三深度信息，根据所述第三深度信息以及所述前景区域的边缘的第二颜色信息，按照所述第三深度信息的深度值与描边时颜色深度的正比关系，重新对所述前景区域的边缘进行描边处理。
PCT/CN2016/099400 2015-09-15 2016-09-19 一种图片处理方法及终端 WO2017045650A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510587487.2 2015-09-15
CN201510587487.2A CN105245774B (zh) 2015-09-15 2015-09-15 一种图片处理方法及终端

Publications (1)

Publication Number Publication Date
WO2017045650A1 true WO2017045650A1 (zh) 2017-03-23

Family

ID=55043252

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099400 WO2017045650A1 (zh) 2015-09-15 2016-09-19 一种图片处理方法及终端

Country Status (2)

Country Link
CN (1) CN105245774B (zh)
WO (1) WO2017045650A1 (zh)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108022227A (zh) * 2017-12-29 2018-05-11 努比亚技术有限公司 一种黑白背景照片获取方法、装置及计算机可读存储介质
CN108966402A (zh) * 2017-05-19 2018-12-07 浙江舜宇智能光学技术有限公司 Tof摄像模组和tof电路及其散热方法和制造方法以及应用
CN109146767A (zh) * 2017-09-04 2019-01-04 成都通甲优博科技有限责任公司 基于深度图的图像虚化方法及装置
CN110062157A (zh) * 2019-04-04 2019-07-26 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN112672198A (zh) * 2020-12-14 2021-04-16 海看网络科技(山东)股份有限公司 一种epg异形图组件及其展示方法
CN112862930A (zh) * 2021-03-15 2021-05-28 网易(杭州)网络有限公司 游戏场景的处理方法、装置及电子设备
CN113129207A (zh) * 2019-12-30 2021-07-16 武汉Tcl集团工业研究院有限公司 一种图片的背景虚化方法及装置、计算机设备、存储介质
CN113538350A (zh) * 2021-06-29 2021-10-22 河北深保投资发展有限公司 一种基于多摄像头识别基坑深度的方法
CN113965663A (zh) * 2020-07-21 2022-01-21 深圳Tcl新技术有限公司 一种图像画质优化方法、智能终端及存储介质

Families Citing this family (29)

Publication number Priority date Publication date Assignee Title
CN105245774B (zh) * 2015-09-15 2018-12-21 努比亚技术有限公司 一种图片处理方法及终端
US9912860B2 (en) 2016-06-12 2018-03-06 Apple Inc. User interface for camera effects
US10915996B2 (en) 2016-07-01 2021-02-09 Intel Corporation Enhancement of edges in images using depth information
CN106534693B (zh) * 2016-11-25 2019-10-25 努比亚技术有限公司 一种照片处理方法、装置及终端
CN106851063A (zh) * 2017-02-27 2017-06-13 努比亚技术有限公司 一种基于双摄像头的曝光调节终端及方法
CN107071275B (zh) * 2017-03-22 2020-07-14 珠海大横琴科技发展有限公司 一种图像合成方法及终端
CN106981045A (zh) * 2017-03-29 2017-07-25 努比亚技术有限公司 一种对照片进行重新着色的方法及装置
CN106851113A (zh) * 2017-03-30 2017-06-13 努比亚技术有限公司 一种基于双摄像头的拍照方法及移动终端
DK180859B1 (en) 2017-06-04 2022-05-23 Apple Inc USER INTERFACE CAMERA EFFECTS
CN107948516A (zh) * 2017-11-30 2018-04-20 维沃移动通信有限公司 一种图像处理方法、装置及移动终端
CN108171682B (zh) * 2017-12-04 2022-04-19 北京中科慧眼科技有限公司 基于远景的双目同步曝光率检测方法、系统及存储介质
CN110009555B (zh) * 2018-01-05 2020-08-14 Oppo广东移动通信有限公司 图像虚化方法、装置、存储介质及电子设备
US11722764B2 (en) 2018-05-07 2023-08-08 Apple Inc. Creative camera
CN109002185B (zh) * 2018-06-21 2022-11-08 北京百度网讯科技有限公司 一种三维动画处理的方法、装置、设备及存储介质
DK201870623A1 (en) 2018-09-11 2020-04-15 Apple Inc. USER INTERFACES FOR SIMULATED DEPTH EFFECTS
US11770601B2 (en) 2019-05-06 2023-09-26 Apple Inc. User interfaces for capturing and managing visual media
US11321857B2 (en) 2018-09-28 2022-05-03 Apple Inc. Displaying and editing images with depth information
US11128792B2 (en) 2018-09-28 2021-09-21 Apple Inc. Capturing and displaying images with multiple focal planes
CN109840881B (zh) * 2018-12-12 2023-05-05 奥比中光科技集团股份有限公司 一种3d特效图像生成方法、装置及设备
US11706521B2 (en) 2019-05-06 2023-07-18 Apple Inc. User interfaces for capturing and managing visual media
CN111127614B (zh) * 2019-12-25 2023-07-21 上海米哈游天命科技有限公司 模型的描边处理方法、装置、存储介质及终端
CN111080780B (zh) * 2019-12-26 2024-03-22 网易(杭州)网络有限公司 虚拟角色模型的边缘处理方法和装置
US11054973B1 (en) 2020-06-01 2021-07-06 Apple Inc. User interfaces for managing media
CN113938578A (zh) * 2020-07-13 2022-01-14 武汉Tcl集团工业研究院有限公司 一种图像虚化方法、存储介质及终端设备
CN112712536B (zh) * 2020-12-24 2024-04-30 Oppo广东移动通信有限公司 图像处理方法、芯片及电子装置
CN113077605B (zh) * 2021-02-23 2024-05-10 邹吉涛 云存储式间距预警平台及方法
CN113301320B (zh) * 2021-04-07 2022-11-04 维沃移动通信(杭州)有限公司 图像信息处理方法、装置和电子设备
US11778339B2 (en) 2021-04-30 2023-10-03 Apple Inc. User interfaces for altering visual media
US11539876B2 (en) 2021-04-30 2022-12-27 Apple Inc. User interfaces for altering visual media

Citations (6)

Publication number Priority date Publication date Assignee Title
US20100080485A1 (en) * 2008-09-30 2010-04-01 Liang-Gee Chen Chen Depth-Based Image Enhancement
CN103037075A (zh) * 2011-10-07 2013-04-10 Lg电子株式会社 移动终端及其离焦图像生成方法
CN103139476A (zh) * 2011-11-30 2013-06-05 佳能株式会社 图像摄取装置及图像摄取装置的控制方法
CN103871051A (zh) * 2014-02-19 2014-06-18 小米科技有限责任公司 图像处理方法、装置和电子设备
JP2014170368A (ja) * 2013-03-04 2014-09-18 Univ Of Tokyo 画像処理装置、方法及びプログラム並びに移動体
CN105245774A (zh) * 2015-09-15 2016-01-13 努比亚技术有限公司 一种图片处理方法及终端

Family Cites Families (2)

Publication number Priority date Publication date Assignee Title
KR20120015165A (ko) * 2010-08-11 2012-02-21 엘지전자 주식회사 영상의 깊이감 조절 방법 및 이를 이용하는 이동 단말기
CN103473804A (zh) * 2013-08-29 2013-12-25 小米科技有限责任公司 一种图像处理方法、装置和终端设备

Cited By (12)

Publication number Priority date Publication date Assignee Title
CN108966402A (zh) * 2017-05-19 2018-12-07 浙江舜宇智能光学技术有限公司 Tof摄像模组和tof电路及其散热方法和制造方法以及应用
CN109146767A (zh) * 2017-09-04 2019-01-04 成都通甲优博科技有限责任公司 基于深度图的图像虚化方法及装置
CN108022227A (zh) * 2017-12-29 2018-05-11 努比亚技术有限公司 一种黑白背景照片获取方法、装置及计算机可读存储介质
CN110062157A (zh) * 2019-04-04 2019-07-26 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN110062157B (zh) * 2019-04-04 2021-09-17 北京字节跳动网络技术有限公司 渲染图像的方法、装置、电子设备和计算机可读存储介质
CN113129207A (zh) * 2019-12-30 2021-07-16 武汉Tcl集团工业研究院有限公司 一种图片的背景虚化方法及装置、计算机设备、存储介质
CN113965663A (zh) * 2020-07-21 2022-01-21 深圳Tcl新技术有限公司 一种图像画质优化方法、智能终端及存储介质
CN112672198A (zh) * 2020-12-14 2021-04-16 海看网络科技(山东)股份有限公司 一种epg异形图组件及其展示方法
CN112862930A (zh) * 2021-03-15 2021-05-28 网易(杭州)网络有限公司 游戏场景的处理方法、装置及电子设备
CN112862930B (zh) * 2021-03-15 2024-04-12 网易(杭州)网络有限公司 游戏场景的处理方法、装置及电子设备
CN113538350A (zh) * 2021-06-29 2021-10-22 河北深保投资发展有限公司 一种基于多摄像头识别基坑深度的方法
CN113538350B (zh) * 2021-06-29 2022-10-04 河北深保投资发展有限公司 一种基于多摄像头识别基坑深度的方法

Also Published As

Publication number Publication date
CN105245774A (zh) 2016-01-13
CN105245774B (zh) 2018-12-21


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16845764

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205N DATED 24/07/2018)

122 Ep: pct application non-entry in european phase

Ref document number: 16845764

Country of ref document: EP

Kind code of ref document: A1