WO2017045647A1 - Mobile terminal and method for processing images - Google Patents

Mobile terminal and method for processing images

Info

Publication number
WO2017045647A1
Authority
WO
WIPO (PCT)
Prior art keywords
original image
effect
module
processing
person
Prior art date
Application number
PCT/CN2016/099235
Other languages
English (en)
French (fr)
Inventor
戴向东
魏宇星
Original Assignee
努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 努比亚技术有限公司 (Nubia Technology Co., Ltd.)
Publication of WO2017045647A1 publication Critical patent/WO2017045647A1/zh

Classifications

    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00: Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/60: Control of cameras or camera modules
    • H04N 23/61: Control of cameras or camera modules based on recognised objects
    • H04N 23/611: Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
    • H: ELECTRICITY
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00: Details of television systems
    • H04N 5/222: Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262: Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects

Definitions

  • This document relates to, but is not limited to, smart terminal technology, and more particularly to a mobile terminal and method for processing images.
  • the binocular camera of a mobile terminal uses its two lenses cooperatively to achieve better photographic effects such as depth of field and 3D shooting.
  • the binocular camera is a common component structure.
  • the binocular camera is fixedly mounted on the back of the mobile terminal; its advantages are fast imaging and a clear picture, and it can simulate a depth-of-field effect, but photo synthesis can only be performed according to a predetermined algorithm.
  • the binocular camera imaging system can capture the left-eye image and the right-eye image at the same time, and imaging-processing software combines them into the original image used for preview framing.
  • in the related art, the binocular camera therefore brings the user few highlights: if special-effect processing is required on the synthesized person images, the user can only perform the subsequent special-effect processing manually with computer software, and the user experience is poor.
  • Embodiments of the present invention provide a mobile terminal and method for processing an image, which can automatically and quickly synthesize an image processed by a special effect, thereby enhancing a user experience.
  • An embodiment of the present invention provides a mobile terminal that processes an image, where the mobile terminal includes: an acquisition module, a detection module, a separation module, a processing module, and a synthesis module;
  • an acquisition module configured to acquire, when the binocular camera in the mobile terminal is activated for shooting, the original image captured by the binocular camera and used for preview framing;
  • a detection module configured to detect the original image and, when detecting that there is a person in the original image, send a separation notification to the separation module;
  • a separation module configured to, after receiving the separation notification, separate the person in the original image from the original image;
  • a processing module configured to perform special-effect processing on the separated person in the original image; and
  • a synthesis module configured to synthesize the person subjected to the special-effect processing into the original image.
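The module chain above (acquire, detect, separate, process, synthesize) can be sketched as a minimal pipeline. This is only an illustrative sketch: the function names, the use of a depth channel for person detection, and the near-threshold of 100 are assumptions of this example, not values from the patent.

```python
import numpy as np

def detect_person(frame):
    # Illustrative stand-in for the detection module: report a person
    # when any pixel in the depth channel is closer than a threshold.
    depth = frame[..., 3]
    return bool((depth < 100).any())

def separate_person(frame):
    # Separation module stand-in: mask of near-subject pixels.
    return frame[..., 3] < 100

def apply_effect(frame, mask):
    # Processing module stand-in: brighten the separated person.
    out = frame.copy()
    out[mask, :3] = np.clip(out[mask, :3].astype(np.int16) + 40, 0, 255)
    return out

def synthesize(original, effected, mask):
    # Synthesis module: composite the effect-processed person back in.
    result = original.copy()
    result[mask] = effected[mask]
    return result

def process_frame(frame):
    # Acquire -> detect -> separate -> effect -> synthesize,
    # mirroring the patent's module chain.
    if not detect_person(frame):
        return frame
    mask = separate_person(frame)
    return synthesize(frame, apply_effect(frame, mask), mask)
```

A frame here is an H x W x 4 array whose last channel is depth; only the person's pixels are modified, everything else passes through unchanged.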
  • the separation module is configured to separate the person in the original image from the original image by determining the body edge contour of the person in the original image and cutting the person out along the body edge contour;
  • the cut-out person is the separated person in the original image used for preview framing.
  • the synthesis module includes an obtaining unit, a recording unit, a determining unit, and a processing unit, wherein
  • the obtaining unit is configured to obtain depth information of the original image used for preview framing;
  • the recording unit is configured to record the position at which the effect-processed person is added in the original image from which the person was separated;
  • the determining unit is configured to determine the size of the effect-processed person in that original image according to the obtained depth information and the recorded added position; and
  • the processing unit is configured to adjust the effect-processed person according to the determined size.
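The determining and processing units scale the separated person to fit the scene at the added position. One plausible reading, under a pinhole-camera assumption (an assumption of this sketch, not spelled out in the patent), is that apparent size varies inversely with depth:

```python
import numpy as np

def scaled_size(person_hw, source_depth, target_depth):
    # Under a pinhole-camera assumption, apparent size scales
    # inversely with depth: moving the person from source_depth to
    # target_depth rescales height/width by source_depth/target_depth.
    factor = source_depth / target_depth
    h, w = person_hw
    return max(1, round(h * factor)), max(1, round(w * factor))

def resize_nearest(patch, new_hw):
    # Minimal nearest-neighbour resize (a stand-in for a library call)
    # so the effect-processed person can be adjusted to the new size.
    h, w = patch.shape[:2]
    nh, nw = new_hw
    rows = np.arange(nh) * h // nh
    cols = np.arange(nw) * w // nw
    return patch[rows][:, cols]
```

For example, a person cut out at 2 m and pasted at a position whose depth is 4 m would be rendered at half the original height and width.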
  • the mobile terminal further includes:
  • a prompting module configured to prompt the user to select a special-effect processing effect and send an obtaining notification to the acquisition module;
  • the acquisition module is further configured to, after receiving the obtaining notification, obtain the special-effect processing effect selected by the user.
  • the special-effect processing effects include one or more of: a wax-figure effect, a crayon effect, a hue effect, a beauty effect, a style effect, and an artistic effect.
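As a concrete instance of one such effect, a hue-style effect can be sketched as a 3x3 colour-matrix transform. The sepia matrix below is a common convention chosen for illustration, not a value from the patent:

```python
import numpy as np

# Conventional sepia colour matrix (illustrative, not from the patent).
SEPIA = np.array([[0.393, 0.769, 0.189],
                  [0.349, 0.686, 0.168],
                  [0.272, 0.534, 0.131]])

def sepia_effect(rgb):
    # Apply the colour matrix to an H x W x 3 uint8 image and clip
    # the result back into the displayable 0..255 range.
    out = rgb.astype(np.float64) @ SEPIA.T
    return np.clip(out, 0, 255).astype(np.uint8)
```

Other listed effects (crayon, wax figure, style) would follow the same shape: a per-pixel or neighbourhood transform applied only to the separated person before synthesis.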
  • the detection module is further configured to, when detecting that the user presses the shooting button, send a storage notification to the processing module; the processing module is further configured to, after receiving the storage notification, store the original image to which the effect-processed person has been added.
  • it also includes:
  • a display module configured to display, in real time, the original image to which the effect-processed person has been added.
  • the separation module is configured to determine the body edge contour of the person in the original image by:
  • using a morphological method combined with threshold segmentation to determine the body edge contour of the person in the original image.
  • when the morphological method combined with threshold segmentation is used to determine the body edge contour,
  • the original image is an image containing depth information.
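A minimal sketch of the morphology-plus-threshold step on a depth image: threshold the depth map to isolate the near subject, close small holes with dilate-then-erode, and take the mask minus its erosion as the body edge contour. The 4-neighbourhood operations and the 1500 mm threshold are illustrative assumptions; a real implementation would use a library's morphology routines.

```python
import numpy as np

def binary_dilate(mask):
    # 4-neighbourhood dilation via shifted copies of the mask.
    out = mask.copy()
    out[1:, :] |= mask[:-1, :]
    out[:-1, :] |= mask[1:, :]
    out[:, 1:] |= mask[:, :-1]
    out[:, :-1] |= mask[:, 1:]
    return out

def binary_erode(mask):
    # Erosion expressed as dilation of the complement.
    return ~binary_dilate(~mask)

def segment_person(depth, near=1500):
    # Threshold segmentation: keep pixels closer than `near` (mm),
    # then fill small holes with dilate-then-erode (morphological
    # closing), as in the patent's morphology-plus-threshold step.
    mask = depth < near
    return binary_erode(binary_dilate(mask))

def edge_contour(mask):
    # The body edge contour: mask pixels that have a background
    # 4-neighbour, i.e. the mask minus its erosion.
    return mask & ~binary_erode(mask)
```

On a depth map where the person occupies a near region with a small noisy hole, the closing fills the hole and the contour traces the person's outline.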
  • an embodiment of the present invention further provides a method for processing an image, applied to a mobile terminal with a binocular camera, the method comprising:
  • when the binocular camera is activated for shooting, acquiring the original image captured by the binocular camera and used for preview framing;
  • the person subjected to the effect processing is synthesized into the original image.
  • separating the person in the original image from the original image comprises: determining the body edge contour of the person in the original image, and cutting the person out along the body edge contour;
  • the cut-out person is the separated person in the original image used for preview framing.
  • synthesizing the effect-processed person into the original image comprises:
  • adjusting the effect-processed person according to the determined size of the effect-processed person.
  • the method further includes:
  • the special-effect processing effects include one or more of: a wax-figure effect, a crayon effect, a hue effect, a beauty effect, a style effect, and an artistic effect.
  • the method further includes:
  • the original image to which the effect-processed person has been added is stored.
  • the method further includes:
  • the original image to which the effect-processed person has been added is displayed in real time.
  • determining the body edge contour of the person in the original image includes:
  • using a morphological method combined with threshold segmentation to determine the body edge contour of the person in the original image.
  • when the morphological method combined with threshold segmentation is used to determine the body edge contour of the person,
  • the original image is an image containing depth information.
  • the technical solution of the embodiments of the present invention includes an acquisition module, a detection module, a separation module, a processing module, and a synthesis module, wherein: the acquisition module is configured to acquire the original image captured by the binocular camera and used for preview framing; the detection module is configured to detect the original image and, when detecting that there is a person in the original image, send a separation notification to the separation module; the separation module is configured to, after receiving the separation notification, separate the person in the original image from the original image; the processing module is configured to perform special-effect processing on the separated person in the original image; and the synthesis module is configured to synthesize the effect-processed person into the original image.
  • the technical solution of the embodiments of the present invention thus automatically and quickly synthesizes an image processed with special effects, enhancing the user experience.
  • FIG. 1 is a block diagram showing the structure of a binocular camera device according to an embodiment of the present invention
  • FIG. 2 is a schematic diagram showing the hardware structure of a mobile terminal implementing each embodiment of the present invention.
  • FIG. 3 is a schematic diagram of a communication system supporting communication between mobile terminals according to an embodiment of the present invention.
  • FIG. 4 is a schematic structural diagram of a mobile terminal that processes an image according to an embodiment of the present invention.
  • FIG. 5 is a schematic structural diagram of a synthesis module in a mobile terminal according to an embodiment of the present invention.
  • FIG. 6 is a flowchart of a method for processing an image according to an embodiment of the present invention.
  • FIG. 7(a) is a schematic diagram of an image of a person separated according to an embodiment of the present invention.
  • FIG. 7(b) is a schematic diagram of a composite picture according to an embodiment of the present invention.
  • the mobile terminal can be implemented in a variety of forms.
  • the terminal described in the embodiments of the present invention may include mobile terminals such as a mobile phone, a smart phone, a notebook computer, a digital broadcast receiver, a PDA (Personal Digital Assistant), a PAD (tablet computer), a PMP (Portable Multimedia Player), and a navigation device, as well as fixed terminals such as a digital TV and a desktop computer.
  • the terminal is a mobile terminal.
  • those skilled in the art will appreciate that configurations according to embodiments of the present invention can also be applied to fixed-type terminals, apart from components specific to mobile purposes.
  • the photographic lens 701 is composed of a plurality of optical lenses arranged to form a subject image, and is a single-focus lens or a zoom lens; in the present embodiment, two photographic lenses 701 are provided.
  • the photographic lens 701 can be moved in the optical-axis direction by the lens driving unit 711, which controls the focus position of the photographic lens 701 based on a control signal from the lens drive control unit 712 and, in the case of a zoom lens, also controls the focal distance.
  • the lens drive control unit 712 performs drive control of the lens driving unit 711 in accordance with a control command from the microcomputer 707.
  • An imaging element 702 is disposed in the vicinity of a position where the subject image is formed by the photographing lens 701 on the optical axis of the photographing lens 701.
  • the imaging element 702 functions as an imaging unit that captures a subject image and acquires captured image data.
  • Photodiodes constituting each pixel are two-dimensionally arranged in a matrix on the imaging element 702. Each photodiode generates a photoelectric conversion current corresponding to the amount of received light, and the photoelectric conversion current is charged by a capacitor connected to each photodiode.
  • the front surface of each pixel is provided with a Bayer array of RGB color filters.
  • the imaging element 702 is connected to an imaging circuit 703, which performs charge accumulation control and image-signal readout control in the imaging element 702, reduces reset noise in the read-out image signal (an analog image signal), performs waveform shaping, and applies gain adjustment and the like to obtain an appropriate signal level.
  • the imaging circuit 703 is connected to an A/D conversion unit 704 that performs analog-to-digital conversion on the analog image signal and outputs a digital image signal (hereinafter referred to as image data) to the bus 199.
  • the bus 199 is a transmission path set to transfer various data read or generated inside the camera.
  • the A/D conversion unit 704 is connected to the bus 199, to which an image processor 705, a JPEG processor 706, a microcomputer 707, a synchronous dynamic random access memory (SDRAM) 708, a memory interface (hereinafter referred to as memory I/F) 709, and a liquid crystal display (LCD) driver 710 are also connected.
  • the image processor 705 performs various kinds of image processing on the image data based on the output of the imaging element 702, such as OB subtraction processing, white balance adjustment, color matrix calculation, gamma conversion, color difference signal processing, noise removal processing, simultaneous processing, and edge processing.
  • the JPEG processor 706 compresses the image data read out from the SDRAM 708 in accordance with the JPEG compression method when the image data is recorded on the recording medium 715. Further, the JPEG processor 706 performs decompression of JPEG image data for image reproduction display.
  • to reproduce a file recorded on the recording medium 715, the file is read, decompression is performed in the JPEG processor 706, and the decompressed image data is temporarily stored in the SDRAM 708 and displayed on the LCD 716.
  • the JPEG method is employed as the image compression/decompression method.
  • the compression/decompression method is not limited thereto, and other compression/decompression methods such as MPEG, TIFF, and H.264 may be employed.
  • the microcomputer 707 functions as a control unit of the entire camera, and collectively controls various processing sequences of the camera.
  • the microcomputer 707 is connected to the operation unit 713 and the flash memory 714.
  • the operation unit 713 includes, but is not limited to, physical or virtual buttons; the physical or virtual buttons may be a power button, a camera button, an edit button, a dynamic-image button, a reproduction button, a menu button, a cross key, an OK button, a delete button, an enlarge button, and other operation members such as various input buttons and input keys, whose operation states are detected.
  • the detection result is output to the microcomputer 707. In addition, a touch panel is provided on the front surface of the LCD 716 serving as a display portion, the user's touch position is detected, and the touch position is output to the microcomputer 707.
  • the microcomputer 707 executes various processing sequences corresponding to the user's operation based on the detection results of the operation members from the operation unit 713; likewise, the microcomputer 707 may execute various processing sequences corresponding to the user's operation based on the detection result of the touch panel on the front of the LCD 716.
  • the flash memory 714 stores programs for executing various processing sequences of the microcomputer 707.
  • the microcomputer 707 performs overall control of the camera in accordance with the program. Further, the flash memory 714 stores various adjustment values of the camera, and the microcomputer 707 reads out the adjustment value, and performs control of the camera in accordance with the adjustment value.
  • the SDRAM 708 is an electrically rewritable volatile memory that is set to temporarily store image data or the like.
  • the SDRAM 708 temporarily stores the image data output from the A/D conversion unit 704 and the image data processed in the image processor 705, the JPEG processor 706, and the like.
  • the memory interface 709 is connected to the recording medium 715, and performs control for writing image data and a file header attached to the image data to the recording medium 715 and reading out from the recording medium 715.
  • the recording medium 715 is, for example, a recording medium such as a memory card that can be detachably attached to the camera body.
  • the recording medium 715 is not limited thereto, and may be a hard disk or the like built in the camera body.
  • the LCD driver 710 is connected to the LCD 716; image data processed by the image processor 705 is stored in the SDRAM 708, and for display it is read from the SDRAM 708 and shown on the LCD 716. Image data compressed by the JPEG processor 706 is likewise stored in the SDRAM 708.
  • the JPEG processor 706 reads the compressed image data from the SDRAM 708, decompresses it, and the decompressed image data is displayed through the LCD 716.
  • the LCD 716 is disposed on the back of the camera body or the like to perform image display.
  • the LCD 716 is provided with a touch panel that detects a user's touch operation.
  • a liquid crystal display panel (LCD 716) is used in the present embodiment,
  • but the present invention is not limited thereto, and various display panels such as an organic EL panel may be employed.
  • FIG. 2 is a schematic diagram showing the hardware structure of a mobile terminal that implements various embodiments of the present invention.
  • the mobile terminal 100 may include a wireless communication unit 110, an A/V (Audio/Video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and the like.
  • Figure 2 illustrates a mobile terminal having various components, but it should be understood that not all illustrated components are required to be implemented. More or fewer components can be implemented instead. The elements of the mobile terminal will be described in detail below.
  • Wireless communication unit 110 typically includes one or more components that permit radio communication between mobile terminal 100 and a wireless communication system or network.
  • the wireless communication unit may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless internet module 113, a short-range communication module 114, and a location information module 115.
  • the broadcast receiving module 111 receives a broadcast signal and/or broadcast associated information from an external broadcast management server via a broadcast channel.
  • the broadcast channel can include a satellite channel and/or a terrestrial channel.
  • the broadcast management server may be a server that generates and transmits a broadcast signal and/or broadcast associated information or a server that receives a previously generated broadcast signal and/or broadcast associated information and transmits it to the terminal.
  • the broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like.
  • the broadcast signal may further include a broadcast signal combined with a TV or radio broadcast signal.
  • the broadcast associated information may also be provided via a mobile communication network, and in this case, the broadcast associated information may be received by the mobile communication module 112.
  • the broadcast signal may exist in various forms, for example, it may exist in the form of Digital Multimedia Broadcasting (DMB) Electronic Program Guide (EPG), Digital Video Broadcasting Handheld (DVB-H) Electronic Service Guide (ESG), and the like.
  • the broadcast receiving module 111 can receive a signal broadcast by using various types of broadcast systems.
  • the broadcast receiving module 111 can receive digital broadcasts using digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcasting system of media forward link only (MediaFLO), integrated services digital broadcasting-terrestrial (ISDB-T), and the like.
  • the broadcast receiving module 111 can be constructed as various broadcast systems suitable for providing broadcast signals as well as the above-described digital broadcast system.
  • the broadcast signal and/or broadcast-associated information received via the broadcast receiving module 111 may be stored in the memory 160 (or another type of storage medium).
  • the mobile communication module 112 transmits radio signals to and/or receives radio signals from at least one of a base station (e.g., an access point, a Node B, etc.), an external terminal, and a server.
  • Such radio signals may include voice call signals, video call signals, or various types of data transmitted and/or received in accordance with text and/or multimedia messages.
  • the wireless internet module 113 supports wireless internet access of the mobile terminal.
  • the module can be internally or externally coupled to the terminal.
  • the wireless Internet access technologies involved in the module may include WLAN (Wireless LAN, Wi-Fi), WiBro (Wireless Broadband), WiMAX (Worldwide Interoperability for Microwave Access), HSDPA (High Speed Downlink Packet Access), and the like.
  • the short range communication module 114 is a module that is configured to support short range communication.
  • Some examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), infrared data association (IrDA), ultra wideband (UWB), ZigBee™, and the like.
  • the location information module 115 is a module configured to check or acquire location information of the mobile terminal.
  • a typical example of a location information module is GPS (Global Positioning System).
  • the GPS module 115 calculates distance information and accurate time information from three or more satellites and applies triangulation to the calculated information to accurately calculate three-dimensional current position information based on longitude, latitude, and altitude.
  • the method for calculating position and time information uses three satellites and corrects the calculated position and time information errors by using another satellite.
  • the GPS module 115 is capable of calculating speed information by continuously calculating current position information in real time.
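The triangulation (trilateration) described above can be illustrated in 2D: subtracting the range equations for three satellites pairwise leaves a linear system in the receiver position. Real GPS works in 3D and, as the text notes, uses an additional satellite to correct clock and position errors; this is only a flat, error-free sketch with illustrative coordinates.

```python
import numpy as np

def trilaterate_2d(p, d):
    # p: (3, 2) array of anchor (satellite) positions,
    # d: (3,) array of measured distances to the receiver.
    # Subtracting the circle equation |x - p_i|^2 = d_i^2 for i = 1, 2
    # from the one for i = 0 cancels |x|^2 and gives the linear system
    # 2 (p_i - p_0) . x = d_0^2 - d_i^2 + |p_i|^2 - |p_0|^2.
    A = 2 * (p[1:] - p[0])
    b = (d[0] ** 2 - d[1:] ** 2) + (p[1:] ** 2).sum(axis=1) - (p[0] ** 2).sum()
    return np.linalg.solve(A, b)
```

With noisy measurements, one would solve the (overdetermined) system from four or more anchors in a least-squares sense instead.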
  • the A/V input unit 120 is arranged to receive an audio or video signal.
  • the A/V input unit 120 may include a camera 121 and a microphone 122; the camera 121 processes image data of still pictures or video obtained by the image capture device in a video-capturing mode or an image-capturing mode.
  • the processed image frame can be displayed on the display unit 151.
  • the image frames processed by the camera 121 may be stored in the memory 160 (or other storage medium) or transmitted via the wireless communication unit 110, and two or more cameras 121 may be provided depending on the configuration of the mobile terminal.
  • the microphone 122 can receive sound (audio data) via a microphone in an operation mode of a telephone call mode, a recording mode, a voice recognition mode, and the like, and can process such sound as audio data.
  • the processed audio (voice) data can be converted to a format output that can be transmitted to the mobile communication base station via the mobile communication module 112 in the case of a telephone call mode.
  • the microphone 122 can implement various types of noise cancellation (or suppression) algorithms to cancel (or suppress) noise or interference generated during the process of receiving and transmitting audio signals.
  • the user input unit 130 may generate key input data according to a command input by the user to control various operations of the mobile terminal.
  • the user input unit 130 allows the user to input various types of information, and may include a keyboard, a dome switch, a touch pad (e.g., a touch-sensitive component that detects changes in resistance, pressure, capacitance, etc. due to contact), a scroll wheel, a rocker, and the like.
  • in particular, when the touch pad is superposed on the display unit 151 in a layered manner, a touch screen can be formed.
  • the sensing unit 140 detects the current state of the mobile terminal 100 (e.g., its open or closed state), the location of the mobile terminal 100, the presence or absence of user contact (i.e., touch input) with the mobile terminal 100, the orientation of the mobile terminal 100, the acceleration or deceleration and direction of movement of the mobile terminal 100, and the like, and generates a command or signal for controlling the operation of the mobile terminal 100.
  • the sensing unit 140 can sense whether the slide type phone is turned on or off.
  • the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled to an external device.
  • Sensing unit 140 may include proximity sensor 1410 which will be described below in connection with a touch screen.
  • the interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100.
  • the external device may include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like.
  • the identification module may store various information for verifying the user of the mobile terminal 100 and may include a User Identity Module (UIM), a Subscriber Identity Module (SIM), a Universal Subscriber Identity Module (USIM), and the like.
  • the device having the identification module may take the form of a smart card, and thus the identification device may be connected to the mobile terminal 100 via a port or other connection device.
  • the interface unit 170 may be arranged to receive input (e.g., data, information, power, etc.) from an external device and transmit the received input to one or more components within the mobile terminal 100, or may be configured to transfer data between the mobile terminal and an external device.
  • the interface unit 170 may serve as a path through which power is supplied from a base (cradle) to the mobile terminal 100, or as a path through which various command signals input from the base are transmitted to the mobile terminal.
  • various command signals or the power input from the base can serve as signals for recognizing whether the mobile terminal is accurately mounted on the base.
  • Output unit 150 is configured to provide an output signal (eg, an audio signal, a video signal, an alarm signal, a vibration signal, etc.) in a visual, audio, and/or tactile manner.
  • the output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and the like.
  • the display unit 151 can display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a phone call mode, the display unit 151 can display a user interface (UI) or a graphical user interface (GUI) related to a call or other communication (eg, text messaging, multimedia file download, etc.). When the mobile terminal 100 is in a video call mode or an image capturing mode, the display unit 151 may display a captured image and/or a received image, a UI or GUI showing a video or image and related functions, and the like.
  • the display unit 151 can function as an input device and an output device.
  • the display unit 151 may include at least one of a liquid crystal display (LCD), a thin film transistor LCD (TFT-LCD), an organic light emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like.
  • Some of these displays may be configured to be transparent to allow viewing from the outside; these may be referred to as transparent displays, and a typical transparent display may be, for example, a TOLED (Transparent Organic Light Emitting Diode) display.
  • the mobile terminal 100 may include two or more display units (or other display devices), for example, the mobile terminal may include an external display unit (not shown) and an internal display unit (not shown) .
  • the touch screen can be set to detect touch input pressure as well as touch input position and touch input area.
  • the audio output module 152 may convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound when the mobile terminal is in a call-signal receiving mode, a call mode, a recording mode, a voice recognition mode, a broadcast receiving mode, or the like.
  • the audio output module 152 can provide audio output (eg, call signal reception sound, message reception sound, etc.) associated with a particular function performed by the mobile terminal 100.
  • the audio output module 152 can include a speaker, a buzzer, and the like.
  • the alarm unit 153 can provide an output to notify the mobile terminal 100 of the occurrence of an event. Typical events may include call reception, message reception, key signal input, touch input, and the like. In addition to audio or video output, the alert unit 153 can provide an output in a different manner to notify of the occurrence of an event. For example, the alarm unit 153 can provide an output in the form of vibrations, and when a call, message, or some other incoming communication is received, the alarm unit 153 can provide a tactile output (ie, vibration) to notify the user of it. By providing such a tactile output, the user is able to recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide an output of the notification event occurrence via the display unit 151 or the audio output module 152.
  • the memory 160 may store a software program or the like for processing and control operations performed by the controller 180, or may temporarily store data (for example, a phone book, a message, a still image, a video, etc.) that has been output or is to be output. Moreover, the memory 160 can store data regarding vibrations and audio signals of various manners that are output when a touch is applied to the touch screen.
  • the memory 160 may include at least one type of storage medium including a flash memory, a hard disk, a multimedia card, a card type memory (e.g., SD or DX memory, etc.), a random access memory (RAM), a static random access memory (SRAM), a read only memory (ROM), an electrically erasable programmable read only memory (EEPROM), a programmable read only memory (PROM), a magnetic memory, a magnetic disk, an optical disk, and the like.
  • the mobile terminal 100 can cooperate with a network storage device that performs a storage function of the memory 160 through a network connection.
  • the controller 180 typically controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing associated with voice calls, data communications, video calls, and the like. Additionally, the controller 180 can include a multimedia module 1810 that is configured to reproduce (or play back) multimedia data, and the multimedia module 1810 can be constructed within the controller 180 or can be configured to be separate from the controller 180. The controller 180 may perform a pattern recognition process to recognize a handwriting input or a picture drawing input performed on the touch screen as a character or an image.
  • the power supply unit 190 receives external power or internal power under the control of the controller 180 and provides appropriate power required to operate the various components and components.
  • the various embodiments described herein can be implemented in a computer readable medium using, for example, computer software, hardware, or any combination thereof.
  • the embodiments described herein may be implemented using at least one of application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), processors, controllers, microcontrollers, microprocessors, and electronic units designed to perform the functions described herein; in some cases, such an embodiment may be implemented in the controller 180.
  • for a software implementation, implementations such as procedures or functions may be implemented with separate software modules that permit the performance of at least one function or operation.
  • the software code can be implemented by a software application (or program) written in any suitable programming language, and may be stored in the memory 160 and executed by the controller 180.
  • so far, the mobile terminal has been described in terms of its functions.
  • for brevity, a slide type mobile terminal among various types of mobile terminals, such as folding type, bar type, swing type, and slide type mobile terminals, will be described below as an example. However, the embodiments of the present invention can be applied to any type of mobile terminal and are not limited to the slide type mobile terminal.
  • the mobile terminal 100 as shown in FIG. 2 may be configured to operate with communication systems that transmit data via frames or packets, such as wired and wireless communication systems as well as satellite-based communication systems.
  • a communication system in which a mobile terminal according to an embodiment of the present invention is operable will now be described with reference to FIG. 3.
  • Such communication systems may use different air interfaces and/or physical layers.
  • air interfaces used by such communication systems include, for example, Frequency Division Multiple Access (FDMA), Time Division Multiple Access (TDMA), Code Division Multiple Access (CDMA), Universal Mobile Telecommunications System (UMTS) (in particular, Long Term Evolution (LTE)), Global System for Mobile Communications (GSM), and the like.
  • as a non-limiting example, the following description relates to a CDMA communication system, but such teachings apply equally to other types of systems.
  • a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, a base station controller (BSC) 275, and a mobile switching center (MSC) 280.
  • the MSC 280 is configured to interface with a public switched telephone network (PSTN) 290.
  • the MSC 280 is also configured to interface with a BSC 275 that can be coupled to the base station 270 via a backhaul line.
  • the backhaul line can be constructed in accordance with any of a number of well-known interfaces including, for example, E1/T1, ATM, IP, PPP, Frame Relay, HDSL, ADSL, or xDSL. It will be appreciated that the system as shown in FIG. 3 may include multiple BSCs 275.
  • Each BS 270 can serve one or more partitions (or regions), each of which is covered by a multi-directional antenna or an antenna directed to a particular direction radially away from the BS 270. Alternatively, each partition may be covered by two or more antennas that are set to receive diversity. Each BS 270 can be configured to support multiple frequency allocations, and each frequency allocation has a particular frequency spectrum (eg, 1.25 MHz, 5 MHz, etc.).
  • BS 270 may also be referred to as a Base Transceiver Subsystem (BTS) or other equivalent terminology.
  • the term "base station” can be used to generally refer to a single BSC 275 and at least one BS 270.
  • a base station can also be referred to as a "cell station.”
  • each partition of a particular BS 270 may be referred to as a plurality of cellular stations.
  • a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminal 100 operating within the system.
  • a broadcast receiving module 111 as shown in FIG. 2 is provided at the mobile terminal 100 to receive a broadcast signal transmitted by the BT 295.
  • in FIG. 3, several Global Positioning System (GPS) satellites 300 are shown; the satellites 300 help locate at least one of the plurality of mobile terminals 100.
  • although a plurality of satellites 300 are depicted, it is understood that useful positioning information may be obtained with any number of satellites.
  • the GPS module 115 as shown in Figure 2 is typically configured to cooperate with the satellite 300 to obtain desired positioning information. Instead of GPS tracking technology or in addition to GPS tracking technology, other techniques that can track the location of the mobile terminal can be used. Additionally, at least one GPS satellite 300 can selectively or additionally process satellite DMB transmissions.
  • BS 270 receives reverse link signals from various mobile terminals 100.
  • Mobile terminal 100 typically participates in calls, messaging, and other types of communications.
  • each reverse link signal received by a particular BS 270 is processed within that BS 270.
  • the obtained data is forwarded to the relevant BSC 275.
  • the BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between the BSs 270.
  • the BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290.
  • PSTN 290 interfaces with MSC 280, which forms an interface with BSC 275, and BSC 275 controls BS 270 accordingly to transmit forward link signals to mobile terminal 100.
  • the mobile terminal includes a binocular camera. As shown in FIG. 4, the mobile terminal includes: an obtaining module 40, a detecting module 41, a separating module 42, a processing module 43, and a synthesizing module 44, where:
  • the obtaining module 40 is configured to acquire the original image that is captured by the binocular camera of the mobile terminal and needs to be previewed for framing.
  • the detecting module 41 is configured to detect the original image, and when detecting that there is a person in the original image, send a separation notification to the separation module 42.
  • the detecting module 41 is further configured to send a storage notification to the processing module 43 when it detects that the user presses the shooting button.
  • the separation module 42 is configured to separate the characters in the original image from the original image after receiving the separation notification.
  • the separation module 42 is configured to: determine the human body edge contour of the person in the original image, and intercept the determined edge contour together with the person within it;
  • the intercepted person is the separated person from the original image to be previewed for framing.
  • the separation module is configured to determine a contour of a human body edge of a person in the original image by:
  • a morphological method combined with threshold segmentation is used to determine the human body edge contour of the person in the original image.
  • the separation module is configured to determine the human body edge contour by the morphological method combined with threshold segmentation as follows: obtain depth information of the face by face recognition; acquire each pixel whose depth differs from the obtained face depth by no more than a preset threshold; determine which of the acquired pixels lie in the same connected region as the face; and extract the determined pixels together with the pixels at the face position, thereby separating the person.
  • the processing module 43 is configured to perform special effects processing on the characters in the separated original image.
  • the processing module 43 is further configured to: after receiving the storage notification, store the original image of the character added with the special effect processing.
  • the compositing module 44 is configured to synthesize the character subjected to the effect processing into the original image.
  • as shown in FIG. 5, the synthesizing module 44 includes: an obtaining unit 441, a recording unit 442, a determining unit 443, and a processing unit 444, where:
  • the obtaining unit 441 is configured to acquire depth information of the original image that needs to be previewed for framing;
  • the recording unit 442 is configured to record the position at which the effect-processed person is added in the original image from which the person has been separated;
  • the determining unit 443 is configured to determine the size of the effect-processed person in the original image from which the person has been separated, according to the obtained depth information of the original image to be previewed and the recorded position of the effect-processed person;
  • the processing unit 444 is configured to adjust the effect-processed person according to the determined size.
  • the mobile terminal further includes a prompting module 45, configured to prompt the user to select a special effect processing effect, and send an acquisition notification to the obtaining module 40.
  • the obtaining module 40 is further configured to: after receiving the acquisition notification, obtain the special effect processing effect selected by the user.
  • special effects processing effects include: wax image effects, and/or crayon effects, and/or hue effects, and/or beauty effects, and/or style effects, and/or artistic effects.
  • the mobile terminal further includes a display module 46 configured to display an original image of the character added with the effect processing in real time.
  • FIG. 6 is a flowchart of a method for processing an image according to an embodiment of the present invention, which is applied to a mobile terminal with a binocular camera built therein.
  • the method includes:
  • Step 501: Acquire the original image that is captured by the binocular camera and needs to be previewed for framing.
  • the original image that needs to be previewed can be obtained through the binocular camera built in the mobile terminal.
  • the original image is an image containing depth information.
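The patent treats the depth map as something the binocular camera simply provides and does not spell out how it is computed. The standard way a rectified stereo pair yields depth is triangulation from disparity, depth = f·B/d. The following is a minimal sketch under that assumption; `disparity_to_depth`, `focal_px`, and `baseline_m` are illustrative names and values, not details from the source:

```python
import numpy as np

def disparity_to_depth(disparity, focal_px, baseline_m, eps=1e-6):
    """Convert a stereo disparity map (pixels) into a depth map (metres).

    Pinhole-stereo triangulation: depth = focal_length * baseline / disparity.
    Pixels with (near-)zero disparity are left at depth 0 as an "unknown" marker.
    """
    disparity = np.asarray(disparity, dtype=np.float64)
    depth = np.zeros_like(disparity)
    valid = disparity > eps
    depth[valid] = focal_px * baseline_m / disparity[valid]
    return depth

# Toy 2x2 disparity map; assumed f = 800 px and a 5 cm baseline.
depth = disparity_to_depth([[20.0, 40.0], [0.0, 10.0]],
                           focal_px=800, baseline_m=0.05)
```

With these assumed camera parameters a 20 px disparity corresponds to roughly 2 m; larger disparity means the point is closer to the camera pair.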
  • Step 502: Detect the original image; when a person is detected in the original image, separate the person in the original image from the original image.
  • How to detect whether there is a person in the original image (for example, by face recognition) is a conventional technique well known to those skilled in the art and is not described further here.
  • as shown in FIG. 7(a), S1 is the person separated from the original image, and the area other than S1 is the separated background area (also referred to as the original image from which the person has been separated).
  • separating the person in the original image from the original image includes: determining the human body edge contour of the person in the original image, and intercepting the determined edge contour together with the person within it;
  • the intercepted person is the separated person from the original image to be previewed for framing.
  • the human body edge contour of the person in the original image can be determined by a classical morphological method combined with threshold segmentation. For example: obtain depth information of the face by face recognition; acquire each pixel whose depth differs from the obtained face depth by no more than a preset threshold; determine which of the acquired pixels lie in the same connected region as the face; and extract the determined pixels together with the pixels at the face position, thereby separating the person.
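The separation just described can be sketched directly: use the detected face as a seed, keep pixels whose depth is within the preset threshold of the face depth, and retain only those in the same connected region as the face. This is a hedged illustration — the 4-connected flood fill and the names `separate_person`, `face_seed`, and `threshold` are assumptions, not details given by the source:

```python
import numpy as np
from collections import deque

def separate_person(depth, face_seed, threshold):
    """Return a boolean mask of the person, grown from the face seed.

    depth     : HxW depth map from the binocular camera
    face_seed : (row, col) of a pixel inside the detected face
    threshold : max allowed |depth - face depth| for person pixels
    """
    depth = np.asarray(depth, dtype=np.float64)
    face_depth = depth[face_seed]
    candidate = np.abs(depth - face_depth) <= threshold  # pixels near the face depth
    mask = np.zeros(depth.shape, dtype=bool)
    mask[face_seed] = True
    queue = deque([face_seed])
    h, w = depth.shape
    while queue:  # 4-connected flood fill from the face seed
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if 0 <= nr < h and 0 <= nc < w and candidate[nr, nc] and not mask[nr, nc]:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask

# Toy 3x3 depth map: the person sits near depth 1.0, background near 5.0.
depth = np.array([[1.0, 1.1, 5.0],
                  [1.0, 1.0, 5.0],
                  [5.0, 1.0, 1.1]])
mask = separate_person(depth, face_seed=(0, 0), threshold=0.3)
```

In practice the seed would come from the face detector, and the resulting mask would be used to cut the person out of the RGB frame.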
  • Step 503: Perform special effect processing on the person separated from the original image.
  • Step 504: Synthesize the effect-processed person into the original image.
  • this step includes:
  • acquiring depth information of the original image to be previewed for framing;
  • recording the position at which the effect-processed person is added in the original image from which the person has been separated;
  • determining, according to the obtained depth information and the recorded position, the size of the effect-processed person in the original image from which the person has been separated;
  • adjusting the effect-processed person according to the determined size.
  • as shown in FIG. 7(b), S1 is the person separated from the original image, and S1' is S1 after the effect processing.
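The compositing steps above — read the depth at the recorded insertion position, derive a size from it, resize the cut-out, and paste it back — can be sketched with plain arrays. The scaling rule `person_depth / target_depth` (placing the cut-out farther away shrinks it) and all names here are illustrative assumptions, not the patent's exact formula:

```python
import numpy as np

def scale_by_depth(person, person_depth, target_depth):
    """Nearest-neighbour resize; scale factor = person_depth / target_depth."""
    scale = person_depth / target_depth
    h, w = person.shape[:2]
    new_h, new_w = max(1, round(h * scale)), max(1, round(w * scale))
    rows = (np.arange(new_h) / scale).astype(int).clip(0, h - 1)
    cols = (np.arange(new_w) / scale).astype(int).clip(0, w - 1)
    return person[np.ix_(rows, cols)]

def composite(background, person, top_left):
    """Paste the (already scaled) person into the background at top_left.

    Assumes the person patch fits inside the background at that position.
    """
    out = background.copy()
    r, c = top_left
    h, w = person.shape[:2]
    out[r:r + h, c:c + w] = person
    return out

person = np.full((4, 4), 9, dtype=np.uint8)            # 4x4 cut-out
small = scale_by_depth(person, person_depth=1.0, target_depth=2.0)  # halved
result = composite(np.zeros((6, 6), dtype=np.uint8), small, (1, 1))
```

Here the cut-out, recorded at depth 1 m, is inserted at a position whose depth is 2 m, so it is shrunk to 2x2 before being pasted at the recorded top-left coordinate.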
  • after the person is separated from the original image and before the effect processing is performed on the separated person, the method may further include: prompting the user to select a special effect processing effect, and obtaining the effect selected by the user.
  • the special effect processing effects include: a wax effect, and/or a crayon effect, and/or a hue effect, and/or a beauty effect, and/or a style effect, and/or an artistic effect.
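As a concrete instance of one of these effects, a simple hue/tint adjustment can be restricted to the separated person via the segmentation mask, leaving the background untouched. A minimal sketch — the tint coefficients are arbitrary, and the boolean mask is assumed to come from the separation step:

```python
import numpy as np

def apply_hue_effect(image, mask, tint=(1.2, 1.0, 0.8)):
    """Multiply the RGB channels by a tint, only where mask is True (the person)."""
    out = image.astype(np.float64)
    tint = np.asarray(tint)
    out[mask] = np.clip(out[mask] * tint, 0, 255)  # warm the person pixels
    return out.astype(np.uint8)

img = np.full((2, 2, 3), 100, dtype=np.uint8)      # flat grey RGB frame
mask = np.array([[True, False], [False, True]])    # "person" pixels
styled = apply_hue_effect(img, mask)
```

Only the masked pixels shift toward the warm tint; unmasked background pixels keep their original values, which is exactly why the person is separated before the effect is applied.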
  • after the effect-processed person is synthesized into the original image, the method may further include:
  • displaying, in real time, the original image to which the effect-processed person has been added.
  • the method may further include: after the effect processing, when it is detected that the user presses the shooting button, storing the original image to which the effect-processed person has been added.
  • from the above description of the embodiments, the foregoing method can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on such an understanding, the part of the technical solution of the present invention that is essential, or that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc), which includes a number of instructions for causing a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method described in each embodiment of the present invention.
  • the above technical solution realizes automatic and rapid synthesis of images processed by special effects, and enhances the user's experience.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)
  • Telephone Function (AREA)

Abstract

A mobile terminal and method for processing images. The mobile terminal comprises: an obtaining module, a detecting module, a separating module, a processing module, and a synthesizing module. The obtaining module is configured to acquire an original image that is captured by a binocular camera of the mobile terminal and needs to be previewed for framing; the detecting module is configured to detect the original image and, when a person is detected in the original image, send a separation notification to the separating module; the separating module is configured to, upon receiving the separation notification, separate the person from the original image; the processing module is configured to perform special effect processing on the separated person; and the synthesizing module is configured to synthesize the effect-processed person into the original image. The above technical solution achieves automatic and rapid synthesis of effect-processed images and enhances the user experience.

Description

一种处理图像的移动终端和方法 技术领域
本文涉及但不限于智能终端技术,尤指一种处理图像的移动终端和方法。
背景技术
目前,移动终端的双目摄像头都采用相互配合的拍摄技术,以达到更好的景深、3D拍摄等摄影效果。双目摄像头是一种常见的组成结构,双目摄像头固定安放在移动终端背面,其优势是成像速度快,画面清晰,可以模拟实现景深效果,但是只能按照预定算法进行照片合成。
双目摄像头成像系统可以在同一时刻得到左目图像和右目图像,并将左目图像和右目图像交由成像处理软件处理,以形成需要预览取景的原始图像。
目前的双目摄像头中,如华为荣耀6PLUS,均是通过特定算法实现二者之间合成为一张清晰图像,或者合成一张类似单反景深图像的大光圈效果,对于普通的人物拍摄来说,双目并没有给用户带来太多的亮点,如果需要对合成的人物图像进行特效处理,用户只能借助于电脑软件手动进行后续的特效处理,用户体验不好。
发明内容
以下是对本文详细描述的主题的概述。本概述并非是为了限制权利要求的保护范围。
本发明实施例提供了一种处理图像的移动终端和方法,能够自动、快速地合成经过特效处理的图像,增强用户的体验感。
本发明实施例提供了一种处理图像的移动终端,该移动终端包括:获取模块、检测模块、分离模块、处理模块和合成模块;其中,
获取模块，设置为获取移动终端中所述双目摄像头捕获的，并且需要预览取景的原始图像；
检测模块,设置为对所述原始图像进行检测,当检测到所述原始图像中有人物时,向分离模块发送分离通知;
分离模块,设置为接收到分离通知后,从所述原始图像中分离出所述原始图像中的人物;
处理模块,设置为对分离出的所述原始图像中的人物进行特效处理;
合成模块,设置为将进行特效处理后的人物合成至所述原始图像中。
可选地,所述分离模块,是设置为通过如下方式实现从所述原始图像中分离出所述原始图像中的人物:
确定所述原始图像中的人物的人体边缘轮廓;
截取确定出的人体边缘轮廓以及人体边缘轮廓内的人物;
其中,截取的人物为分离出的需要预览取景的原始图像中的人物。
可选地,所述合成模块包括:获取单元、记录单元、确定单元和处理单元;其中,
获取单元,设置为获取所述需要预览取景的原始图像的深度信息;
记录单元,设置为记录添加在所述已分离出人物的原始图像中特效处理后的人物的添加位置;
确定单元,设置为根据获得的所述需要预览取景的原始图像的深度信息以及记录的特效处理后的人物的所述添加位置确定特效处理后的人物在所述已分离出人物的原始图像中的大小;
处理单元,设置为按照确定出的特效处理后的人物的大小调整特效处理后的人物。
可选地,该移动终端还包括:
提示模块,设置为提示用户选择特效处理效果,向所述获取模块发送获取通知;
所述获取模块,还设置为接收到获取通知后,获取用户选择的特效处理效果。
可选地,所述特效处理效果包括:蜡像效果,和/或蜡笔效果,和/或色调效果,和/或美颜效果,和/或格调效果,和/或艺术效果。
可选地,所述检测模块,还设置为当监测到用户按压拍摄按钮时,向处理模块发送存储通知;所述处理模块,还设置为接收到存储通知后,存储添加有特效处理后的人物的原始图像。
可选地,还包括:
显示模块,设置为实时显示添加有特效处理后的人物的原始图像。
可选地,所述分离模块是设置为通过如下方式实现确定所述原始图像中的人物的人体边缘轮廓:
通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓。
可选地,所述分离模块是设置为通过如下方式实现通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓:
通过人脸识别获取人脸的深度信息;获取与获得的人脸的深度信息间隔在预设阈值范围内的每个像素点;确定获得的每个像素点中与人脸处于同一个连通区域的像素点;提取确定出的像素点与所述人脸位置中的像素点,以实现人物的分离。
可选地,所述原始图像为一副包含深度信息的图像。
本发明实施例还提供了一种处理图像的方法，应用于内置有双目摄像头的移动终端中，所述方法包括：
当启动所述双目摄像头进行拍摄时,获取所述双目摄像头捕获的,并且需要预览取景的原始图像;
对所述原始图像进行检测,当检测到所述原始图像中有人物时,从所述原始图像中分离出所述原始图像中的人物;
对分离出的所述原始图像中的人物进行特效处理;
将进行特效处理后的人物合成至所述原始图像中。
可选地,所述从所述原始图像中分离出所述原始图像中的人物,包括:
确定所述原始图像中的人物的人体边缘轮廓;
截取确定出的人体边缘轮廓以及人体边缘轮廓内的人物;
其中,截取的人物为分离出的需要预览取景的原始图像中的人物。
可选地,所述将进行特效处理后的人物合成至所述原始图像中,包括:
获取所述需要预览取景的原始图像的深度信息;
记录添加在所述已分离出人物的原始图像中特效处理后的人物的添加位置;
根据获得的所述需要预览取景的原始图像的深度信息以及记录的特效处理后的人物的所述添加位置确定特效处理后的人物在所述已分离出人物的原始图像中的大小;
按照确定出的特效处理后的人物的大小调整特效处理后的人物。
可选地,在所述从所述原始图像中分离出所述原始图像中的人物之后,在所述对分离出的所述原始图像中的人物进行特效处理之前,该方法还包括:
提示用户选择特效处理效果;
获取用户选择的特效处理效果。
可选地,所述特效处理效果包括:蜡像效果,和/或蜡笔效果,和/或色调效果,和/或美颜效果,和/或格调效果,和/或艺术效果。
可选地,所述方法还包括:
当监测到用户按压拍摄按钮时,存储添加有特效处理后的人物的原始图像。
可选地,所述方法还包括:
实时显示添加有特效处理后的人物的原始图像。
可选地,确定所述原始图像中的人物的人体边缘轮廓,包括:
通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓。
可选地,通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓,包括:
通过人脸识别获取人脸的深度信息;获取与获得的人脸的深度信息间隔在预设阈值范围内的每个像素点;确定获得的每个像素点中与人脸处于同一个连通区域的像素点;提取确定出的像素点与所述人脸位置中的像素点,以实现人物的分离。
可选地,所述原始图像为一副包含深度信息的图像。
本发明实施例技术方案包括:获取模块、检测模块、分离模块、处理模块和合成模块;其中,获取模块,设置为获取所述双目摄像头捕获的,并且需要预览取景的原始图像;检测模块,设置为对所述原始图像进行检测,当检测到所述原始图像中有人物时,向分离模块发送分离通知;分离模块,设置为接收到分离通知,从所述原始图像中分离出所述原始图像中的人物;处理模块,设置为对分离出的所述原始图像中的人物进行特效处理;合成模块,设置为将进行特效处理后的人物合成至所述原始图像中。本发明实施例技术方案实现了自动、快速地合成经过特效处理的图像,增强了用户的体验感。
在阅读并理解了附图和详细描述后,可以明白其他方面。
附图概述
图1为本发明实施例的双目摄像装置的结构框图;
图2为实现本发明每个实施例的移动终端的硬件结构示意;
图3为支持本发明实施例移动终端之间进行通信的通信系统的示意图;
图4为本发明实施例处理图像的移动终端的结构示意图;
图5为本发明实施例移动终端中合成模块的结构示意图;
图6为本发明实施例处理图像的方法的流程图;
图7(a)为本发明实施例分离出的人物图像的示意图;
图7(b)为本发明实施例合成图片的示意图。
本发明的实施方式
下面将结合附图及实施例对本发明的技术方案进行更详细的说明。
现在将参考附图描述实现本发明各个实施例的移动终端。在后续的描述中,使用用于表示元件的诸如“模块”、“部件”或“单元”的后缀仅为了有利于本发明的说明,其本身并没有特定的意义。因此,"模块"与"部件"可以混合地使用。
移动终端可以以多种形式来实施。例如,本发明实施例中描述的终端可以包括诸如移动电话、智能电话、笔记本电脑、数字广播接收器、PDA(个人数字助理)、PAD(平板电脑)、PMP(便携式多媒体播放器)、导航装置等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。下面,假设终端是移动终端。然而,本领域技术人员将理解的是,除了特别用于移动目的的元件之外,根据本发明的实施方式的构造也能够应用于固定类型的终端。
图1为本发明实施例的相机的主要电气结构的框图。摄影镜头701由设置为形成被摄体像的多个光学镜头构成,是单焦点镜头或变焦镜头,其中在本实施例中,摄影镜头701为两个。摄影镜头701能够通过镜头驱动部711在光轴方向上移动,根据来自镜头驱动控制部712的控制信号,控制摄影镜头701的焦点位置,在变焦镜头的情况下,也控制焦点距离。镜头驱动控制电路712按照来自微型计算机707的控制命令进行镜头驱动部711的驱动控制。
在摄影镜头701的光轴上、由摄影镜头701形成被摄体像的位置附近配置有摄像元件702。摄像元件702发挥作为对被摄体像摄像并取得摄像图像数据的摄像部的功能。在摄像元件702上二维地呈矩阵状配置有构成各像素的光电二极管。各光电二极管产生与受光量对应的光电转换电流,该光电转换电流由与各光电二极管连接的电容器进行电荷蓄积。各像素的前表面配置有拜耳排列的RGB滤色器。
摄像元件702与摄像电路703连接，该摄像电路703在摄像元件702中进行电荷蓄积控制和图像信号读出控制，对该读出的图像信号（模拟图像信号）降低重置噪声后进行波形整形，进而进行增益提高等以成为适当的信号电平。摄像电路703与模拟数字转换（A/D）转换部704连接，该A/D转换部704对模拟图像信号进行模数转换，向总线199输出数字图像信号（以下称之为图像数据）。
总线199是设置为传送在相机的内部读出或生成的各种数据的传送路径。在总线199连接着上述A/D转换部704,此外还连接着图像处理器705、JPEG处理器706、微型计算机707、动态随机访问存储器(SDRAM,Synchronous DRAM)708、存储器接口(以下称之为存储器I/F)709、液晶显示器(LCD,Liquid Crystal Display)驱动器710。
图像处理器705对基于摄像元件702的输出的图像数据进行OB相减处理、白平衡调整、颜色矩阵运算、伽马转换、色差信号处理、噪声去除处理、同时化处理、边缘处理等各种图像处理。JPEG处理器706在将图像数据记录于记录介质715时,按照JPEG压缩方式压缩从SDRAM 708读出的图像数据。此外,JPEG处理器706为了进行图像再现显示而进行JPEG图像数据的解压缩。进行解压缩时,读出记录在记录介质715中的文件,在JPEG处理器706中实施了解压缩处理后,将解压缩的图像数据暂时存储于SDRAM 708中并在LCD716上进行显示。另外,在本实施例中,作为图像压缩解压缩方式采用的是JPEG方式,然而压缩解压缩方式不限于此,当然可以采用MPEG、TIFF、H.264等其他的压缩解压缩方式。
微型计算机707发挥作为该相机整体的控制部的功能,统一控制相机的各种处理序列。微型计算机707连接着操作单元713和闪存714。
操作单元713包括但不限于实体按键或者虚拟按键,该实体或虚拟按键可以为电源按钮、拍照键、编辑按键、动态图像按钮、再现按钮、菜单按钮、十字键、OK按钮、删除按钮、放大按钮等各种输入按钮和各种输入键等操作部件,检测这些操作部材的操作状态。
将检测结果向微型计算机707输出。此外，在作为显示部的LCD716的前表面设有触摸面板，检测用户的触摸位置，将该触摸位置向微型计算机707输出。微型计算机707根据来自操作单元713的操作部材的检测结果，执行与用户的操作对应的各种处理序列。同样，此处可以是微型计算机707根据LCD716前面的触摸面板的检测结果，执行与用户的操作对应的各种处理序列。
闪存714存储用于执行微型计算机707的各种处理序列的程序。微型计算机707根据该程序进行相机整体的控制。此外,闪存714存储相机的各种调整值,微型计算机707读出调整值,按照该调整值进行相机的控制。
SDRAM 708是设置为对图像数据等进行暂时存储的可电改写的易失性存储器。该SDRAM708暂时存储从A/D转换部704输出的图像数据和在图像处理器705、JPEG处理器706等中进行了处理后的图像数据。
存储器接口709与记录介质715连接,进行将图像数据和附加在图像数据中的文件头等数据写入记录介质715和从记录介质715中读出的控制。记录介质715例如为能够在相机主体上自由拆装的存储器卡等记录介质,然而不限于此,也可以是内置在相机主体中的硬盘等。
LCD驱动器710与LCD716连接,将由图像处理器705处理后的图像数据存储于SDRAM708,需要显示时,读取SDRAM708存储的图像数据并在LCD716上显示,或者,JPEG处理器706压缩过的图像数据存储于SDRAM708,在需要显示时,JPEG处理器706读取SDRAM708的压缩过的图像数据,再进行解压缩,将解压缩后的图像数据通过LCD716进行显示。
LCD716配置在相机主体的背面等上,进行图像显示。该LCD716设有检测用户的触摸操作的触摸面板。另外,作为显示部,在本实施例中配置的是液晶表示面板(LCD716),然而不限于此,也可以采用有机EL面板等各种显示面板。
图2为实现本发明各个实施例的移动终端的硬件结构示意图。
移动终端100可以包括无线通信单元110、A/V(音频/视频)输入单元120、用户输入单元130、感测单元140、输出单元150、存储器160、接口单元170、控制器180和电源单元190等等。图2示出了具有各种组件的移动终端,但是应理解的是,并不要求实施所有示出的组件。可以替代地实施更多或更少的组件。将在下面详细描述移动终端的元件。
无线通信单元110通常包括一个或多个组件,其允许移动终端100与无线通信系统或网络之间的无线电通信。例如,无线通信单元可以包括广播接收模块111、移动通信模块112、无线互联网模块113、短程通信模块114和位置信息模块115中的至少一个。
广播接收模块111经由广播信道从外部广播管理服务器接收广播信号和/或广播相关信息。广播信道可以包括卫星信道和/或地面信道。广播管理服务器可以是生成并发送广播信号和/或广播相关信息的服务器或者接收之前生成的广播信号和/或广播相关信息并且将其发送给终端的服务器。广播信号可以包括TV广播信号、无线电广播信号、数据广播信号等等。而且,广播信号可以进一步包括与TV或无线电广播信号组合的广播信号。广播相关信息也可以经由移动通信网络提供,并且在该情况下,广播相关信息可以由移动通信模块112来接收。广播信号可以以各种形式存在,例如,其可以以数字多媒体广播(DMB)的电子节目指南(EPG)、数字视频广播手持(DVB-H)的电子服务指南(ESG)等等的形式而存在。广播接收模块111可以通过使用各种类型的广播系统接收信号广播。特别地,广播接收模块111可以通过使用诸如多媒体广播-地面(DMB-T)、数字多媒体广播-卫星(DMB-S)、数字视频广播-手持(DVB-H),前向链路媒体(MediaFLO@)的数据广播系统、地面数字广播综合服务(ISDB-T)等等的数字广播系统接收数字广播。广播接收模块111可以被构造为适合提供广播信号的各种广播系统以及上述数字广播系统。经由广播接收模块111接收的广播信号和/或广播相关信息可以存储在存储器160(或者其它类型的存储介质)中。
移动通信模块112将无线电信号发送到基站(例如,接入点、节点B等等)、外部终端以及服务器中的至少一个和/或从其接收无线电信号。这样的无线电信号可以包括语音通话信号、视频通话信号、或者根据文本和/或多媒体消息发送和/或接收的各种类型的数据。
无线互联网模块113支持移动终端的无线互联网接入。该模块可以内部或外部地耦接到终端。该模块所涉及的无线互联网接入技术可以包括WLAN(无线LAN)(Wi-Fi)、Wibro(无线宽带)、Wimax(全球微波互联接入)、HSDPA(高速下行链路分组接入)等等。
短程通信模块114是设置为支持短程通信的模块。短程通信技术的一些示例包括蓝牙TM、射频识别(RFID)、红外数据协会(IrDA)、超宽带(UWB)、紫蜂TM等等。
位置信息模块115是设置为检查或获取移动终端的位置信息的模块。位置信息模块的典型示例是GPS(全球定位系统)。根据当前的技术,GPS模块115计算来自三个或更多卫星的距离信息和准确的时间信息并且对于计算的信息应用三角测量法,从而根据经度、纬度和高度准确地计算三维当前位置信息。当前,用于计算位置和时间信息的方法使用三颗卫星并且通过使用另外的一颗卫星校正计算出的位置和时间信息的误差。此外,GPS模块115能够通过实时地连续计算当前位置信息来计算速度信息。
A/V输入单元120设置为接收音频或视频信号。A/V输入单元120可以包括相机121和麦克风122，相机121对在视频捕获模式或图像捕获模式中由图像捕获装置获得的静态图片或视频的图像数据进行处理。处理后的图像帧可以显示在显示单元151上。经相机121处理后的图像帧可以存储在存储器160（或其它存储介质）中或者经由无线通信单元110进行发送，可以根据移动终端的构造提供两个或更多相机121。麦克风122可以在电话通话模式、记录模式、语音识别模式等等运行模式中经由麦克风接收声音（音频数据），并且能够将这样的声音处理为音频数据。处理后的音频（语音）数据可以在电话通话模式的情况下转换为可经由移动通信模块112发送到移动通信基站的格式输出。麦克风122可以实施各种类型的噪声消除（或抑制）算法以消除（或抑制）在接收和发送音频信号的过程中产生的噪声或者干扰。
用户输入单元130可以根据用户输入的命令生成键输入数据以控制移动终端的各种操作。用户输入单元130允许用户输入各种类型的信息,并且可以包括键盘、锅仔片、触摸板(例如,检测由于被接触而导致的电阻、压力、电容等等的变化的触敏组件)、滚轮、摇杆等等。特别地,当触摸板以层的形式叠加在显示单元151上时,可以形成触摸屏。
感测单元140检测移动终端100的当前状态（例如，移动终端100的打开或关闭状态）、移动终端100的位置、用户对于移动终端100的接触（即，触摸输入）的有无、移动终端100的取向、移动终端100的加速或减速移动和方向等等，并且生成用于控制移动终端100的操作的命令或信号。例如，当移动终端100实施为滑动型移动电话时，感测单元140可以感测该滑动型电话是打开还是关闭。另外，感测单元140能够检测电源单元190是否提供电力或者接口单元170是否与外部装置耦接。感测单元140可以包括接近传感器141，将在下面结合触摸屏来对此进行描述。
接口单元170用作至少一个外部装置与移动终端100连接可以通过的接口。例如,外部装置可以包括有线或无线头戴式耳机端口、外部电源(或电池充电器)端口、有线或无线数据端口、存储卡端口、设置为连接具有识别模块的装置的端口、音频输入/输出(I/O)端口、视频I/O端口、耳机端口等等。识别模块可以是存储用于验证用户使用移动终端100的各种信息并且可以包括用户识别模块(UIM)、客户识别模块(SIM)、通用客户识别模块(USIM)等等。另外,具有识别模块的装置(下面称为"识别装置")可以采取智能卡的形式,因此,识别装置可以经由端口或其它连接装置与移动终端100连接。接口单元170可以设置为接收来自外部装置的输入(例如,数据信息、电力等等)并且将接收到的输入传输到移动终端100内的一个或多个元件或者可以设置为在移动终端和外部装置之间传输数据。
另外,当移动终端100与外部底座连接时,接口单元170可以用作允许通过其将电力从底座提供到移动终端100的路径或者可以用作允许从底座输入的各种命令信号通过其传输到移动终端的路径。从底座输入的各种命令信号或电力可以用作用于识别移动终端是否准确地安装在底座上的信号。输出单元150被构造为以视觉、音频和/或触觉方式提供输出信号(例如,音频信号、视频信号、警报信号、振动信号等等)。输出单元150可以包括显示单元151、音频输出模块152、警报单元153等等。
显示单元151可以显示在移动终端100中处理的信息。例如,当移动终端100处于电话通话模式时,显示单元151可以显示与通话或其它通信(例如,文本消息收发、多媒体文件下载等等)相关的用户界面(UI)或图形用户界面(GUI)。当移动终端100处于视频通话模式或者图像捕获模式时,显示单元151可以显示捕获的图像和/或接收的图像、示出视频或图像以及相关功能的UI或GUI等等。
同时,当显示单元151和触摸板以层的形式彼此叠加以形成触摸屏时,显示单元151可以用作输入装置和输出装置。显示单元151可以包括液晶显示器(LCD)、薄膜晶体管LCD(TFT-LCD)、有机发光二极管(OLED)显示器、柔性显示器、三维(3D)显示器等等中的至少一种。这些显示器中的一些可以被构造为透明状以允许用户从外部观看,这可以称为透明显示器,典型的透明显示器可以例如为TOLED(透明有机发光二极管)显示器等等。根据特定想要的实施方式,移动终端100可以包括两个或更多显示单元(或其它显示装置),例如,移动终端可以包括外部显示单元(未示出)和内部显示单元(未示出)。触摸屏可设置为检测触摸输入压力以及触摸输入位置和触摸输入面积。
音频输出模块152可以在移动终端处于呼叫信号接收模式、通话模式、记录模式、语音识别模式、广播接收模式等等模式下时,将无线通信单元110接收的或者在存储器160中存储的音频数据转换音频信号并且输出为声音。而且,音频输出模块152可以提供与移动终端100执行的特定功能相关的音频输出(例如,呼叫信号接收声音、消息接收声音等等)。音频输出模块152可以包括扬声器、蜂鸣器等等。
警报单元153可以提供输出以将事件的发生通知给移动终端100。典型的事件可以包括呼叫接收、消息接收、键信号输入、触摸输入等等。除了音频或视频输出之外,警报单元153可以以不同的方式提供输出以通知事件的发生。例如,警报单元153可以以振动的形式提供输出,当接收到呼叫、消息或一些其它进入通信(incoming communication)时,警报单元153可以提供触觉输出(即,振动)以将其通知给用户。通过提供这样的触觉输出,即使在用户的移动电话处于用户的口袋中时,用户也能够识别出各种事件的发生。警报单元153也可以经由显示单元151或音频输出模块152提供通知事件的发生的输出。
存储器160可以存储由控制器180执行的处理和控制操作的软件程序等等，或者可以暂时地存储已经输出或将要输出的数据（例如，电话簿、消息、静态图像、视频等等）。而且，存储器160可以存储关于当触摸施加到触摸屏时输出的各种方式的振动和音频信号的数据。
存储器160可以包括至少一种类型的存储介质,所述存储介质包括闪存、硬盘、多媒体卡、卡型存储器(例如,SD或DX存储器等等)、随机访问存储器(RAM)、静态随机访问存储器(SRAM)、只读存储器(ROM)、电可擦除可编程只读存储器(EEPROM)、可编程只读存储器(PROM)、磁性存储器、磁盘、光盘等等。而且,移动终端100可以与通过网络连接执行存储器160的存储功能的网络存储装置协作。
控制器180通常控制移动终端的总体操作。例如,控制器180执行与语音通话、数据通信、视频通话等等相关的控制和处理。另外,控制器180可以包括设置为再现(或回放)多媒体数据的多媒体模块1810,多媒体模块1810可以构造在控制器180内,或者可以构造为与控制器180分离。控制器180可以执行模式识别处理,以将在触摸屏上执行的手写输入或者图片绘制输入识别为字符或图像。
电源单元190在控制器180的控制下接收外部电力或内部电力并且提供操作各元件和组件所需的适当的电力。
这里描述的各种实施方式可以以使用例如计算机软件、硬件或其任何组合的计算机可读介质来实施。对于硬件实施,这里描述的实施方式可以通过使用特定用途集成电路(ASIC)、数字信号处理器(DSP)、数字信号处理装置(DSPD)、可编程逻辑装置(PLD)、现场可编程门阵列(FPGA)、处理器、控制器、微控制器、微处理器、被设计为执行这里描述的功能的电子单元中的至少一种来实施,在一些情况下,这样的实施方式可以在控制器180中实施。对于软件实施,诸如过程或功能的实施方式可以与允许执行至少一种功能或操作的单独的软件模块来实施。软件代码可以由以任何适当的编程语言编写的软件应用程序(或程序)来实施,软件代码可以存储在存储器160中并且由控制器180执行。
至此，已经按照其功能描述了移动终端。下面，为了简要起见，将描述诸如折叠型、直板型、摆动型、滑动型移动终端等等的各种类型的移动终端中的滑动型移动终端作为示例。因此，本发明实施例能够应用于任何类型的移动终端，并且不限于滑动型移动终端。
如图2中所示的移动终端100可以被构造为利用经由帧或分组发送数据的诸如有线和无线通信系统以及基于卫星的通信系统来操作。
现在将参考图3描述其中根据本发明实施例的移动终端能够操作的通信系统。
这样的通信系统可以使用不同的空中接口和/或物理层。例如,由通信系统使用的空中接口包括例如频分多址(FDMA)、时分多址(TDMA)、码分多址(CDMA)和通用移动通信系统(UMTS)(特别地,长期演进(LTE))、全球移动通信系统(GSM)等等。作为非限制性示例,下面的描述涉及CDMA通信系统,但是这样的教导同样适用于其它类型的系统。
参考图3，CDMA无线通信系统可以包括多个移动终端100、多个基站（BS）270、基站控制器（BSC）275和移动交换中心（MSC）280。MSC280被构造为与公共电话交换网络（PSTN）290形成接口。MSC280还被构造为与可以经由回程线路耦接到基站270的BSC275形成接口。回程线路可以根据若干已知的接口中的任一种来构造，所述接口包括例如E1/T1、ATM、IP、PPP、帧中继、HDSL、ADSL或xDSL。将理解的是，如图3中所示的系统可以包括多个BSC275。
每个BS270可以服务一个或多个分区(或区域),由多向天线或指向特定方向的天线覆盖的每个分区放射状地远离BS270。或者,每个分区可以由设置为分集接收的两个或更多天线覆盖。每个BS270可以被构造为支持多个频率分配,并且每个频率分配具有特定频谱(例如,1.25MHz,5MHz等等)。
分区与频率分配的交叉可以被称为CDMA信道。BS270也可以被称为基站收发器子系统(BTS)或者其它等效术语。在这样的情况下,术语"基站"可以用于笼统地表示单个BSC275和至少一个BS270。基站也可以被称为"蜂窝站"。或者,特定BS270的各分区可以被称为多个蜂窝站。
如图3中所示,广播发射器(BT)295将广播信号发送给在系统内操作的移动终端100。如图2中所示的广播接收模块111被设置在移动终端100处以接收由BT295发送的广播信号。在图3中,示出了几个全球定位系统(GPS)卫星300。卫星300帮助定位多个移动终端100中的至少一个。
在图3中，描绘了多个卫星300，但是理解的是，可以利用任何数目的卫星获得有用的定位信息。如图2中所示的GPS模块115通常被构造为与卫星300配合以获得想要的定位信息。替代GPS跟踪技术或者在GPS跟踪技术之外，可以使用可以跟踪移动终端的位置的其它技术。另外，至少一个GPS卫星300可以选择性地或者额外地处理卫星DMB传输。
作为无线通信系统的一个典型操作,BS270接收来自各种移动终端100的反向链路信号。移动终端100通常参与通话、消息收发和其它类型的通信。特定基站270接收的每个反向链路信号被在特定BS270内进行处理。获得的数据被转发给相关的BSC275。BSC提供通话资源分配和包括BS270之间的软切换过程的协调的移动管理功能。BSC275还将接收到的数据路由到MSC280,其提供用于与PSTN290形成接口的额外的路由服务。类似地,PSTN290与MSC280形成接口,MSC与BSC275形成接口,并且BSC275相应地控制BS270以将正向链路信号发送到移动终端100。
基于上述移动终端硬件结构以及通信系统,提出本发明方法各个实施例。
图4为本发明实施例处理图像的移动终端的结构示意图,该移动终端包括双目摄像头,如图4所示,该移动终端包括:获取模块40、检测模块41、分离模块42、处理模块43和合成模块44。其中,
获取模块40,设置为获取移动终端中双目摄像头捕获的,并且需要预览取景的原始图像。
检测模块41,设置为对原始图像进行检测,当检测到所述原始图像中有人物时,向分离模块42发送分离通知。
可选地,检测模块41,还设置为当监测到用户按压拍摄按钮时,向处理模块43发送存储通知。
分离模块42,设置为接收到分离通知后,从上述原始图像中分离出原始图像中的人物。
其中,分离模块42,是设置为:
确定原始图像中的人物的人体边缘轮廓;
截取确定出的人体边缘轮廓以及人体边缘轮廓内的人物;
其中,截取的人物为分离出的需要预览取景的原始图像中的人物。
其中,所述分离模块是设置为通过如下方式实现确定所述原始图像中的人物的人体边缘轮廓:
通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓。
其中,所述分离模块是设置为通过如下方式实现通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓:
通过人脸识别获取人脸的深度信息;获取与获得的人脸的深度信息间隔在预设阈值范围内的每个像素点;确定获得的每个像素点中与人脸处于同一个连通区域的像素点;提取确定出的像素点与所述人脸位置中的像素点,以实现人物的分离。
处理模块43,设置为对分离出的原始图像中的人物进行特效处理。
可选地,处理模块43,还设置为接收到存储通知后,存储添加有特效处理后的人物的原始图像。
合成模块44,设置为将进行特效处理后的人物合成至原始图像中。
其中,合成模块44包括:获取单元441、记录单元442、确定单元443和处理单元444;如图5所示,其中,
获取单元441,设置为获取需要预览取景的原始图像的深度信息;
记录单元442,设置为记录添加在已分离出人物的原始图像中特效处理后的人物的添加位置;
确定单元443,设置为根据获得的需要预览取景的原始图像的深度信息以及记录的特效处理后的人物的添加位置确定特效处理后的人物在已分离出人物的原始图像中的大小;
处理单元444,设置为按照确定出的特效处理后的人物的大小调整特效处理后的人物。
可选地,该移动终端还包括提示模块45,设置为提示用户选择特效处理效果,向获取模块40发送获取通知。
可选地，获取模块40，还设置为接收到获取通知后，获取用户选择的特效处理效果。
其中,特效处理效果包括:蜡像效果,和/或蜡笔效果,和/或色调效果,和/或美颜效果,和/或格调效果,和/或艺术效果。
可选地,该移动终端还包括显示模块46,设置为实时显示添加有特效处理后的人物的原始图像。
图6为本发明实施例处理图像的方法的流程图,应用于内置有双目摄像头的移动终端中,当启动双目摄像头进行拍摄时,如图6所示,包括:
步骤501:获取双目摄像头捕获的,并且需要预览取景的原始图像。
其中,可以通过移动终端中内置的双目摄像头获取需要预览取景的原始图像。其中,原始图像为一副包含深度信息的图像。
步骤502:对原始图像进行检测,当检测到所述原始图像中有人物时,从原始图像中分离出原始图像中的人物。
需要说明的是,关于如何检测获得的需要预览取景的原始图像中是否有人物,属于本领域技术人员所熟知的惯用技术手段,在此不再赘述。比如,可以通过人脸识别检测获得的需要预览取景的原始图像中是否有人物。
如图7(a)所示,S1即为分离出的原始图像中的人物,除S1外的其它区域即为分离后的背景区域(或者称之为已分离出人物的原始图像)。
其中,从原始图像中分离出原始图像中的人物,包括:
确定原始图像中的人物的人体边缘轮廓;
截取确定出的人体边缘轮廓以及人体边缘轮廓内的人物;
其中,截取的人物为分离出的需要预览取景的原始图像中的人物。
其中,可以通过经典的形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓。比如,通过人脸识别获取人脸的深度信息;获取与获得的人脸的深度信息间隔在预设阈值范围内的每个像素点;确定获得的每个像素点中与人脸处于同一个连通区域的像素点;提取确定出的像素点与所述人脸位置中的像素点,以实现人物的分离。
需要说明的是，关于如何通过双目拍摄模式获取需要预览取景的原始图像的每个像素点的深度信息属于本领域技术人员所熟知的惯用技术手段，在此不再赘述，并不用来限制本发明。
步骤503:对分离出的原始图像中的人物进行特效处理。
步骤504:将进行特效处理后的人物合成至原始图像中。
其中,本步骤包括:
获取需要预览取景的原始图像的深度信息;
记录添加在已分离出人物的原始图像中特效处理后的人物的添加位置;
根据获得的需要预览取景的原始图像的深度信息以及记录的特效处理后的人物的添加位置确定特效处理后的人物在已分离出人物的原始图像中的大小;
按照确定出的特效处理后的人物的大小调整特效处理后的人物。
如图7(b)所示,S1为分离出的原始图像中的人物,S1’为经过特效处理后的S1。
可选地,在从原始图像中分离出原始图像中的人物之后,在对分离出的原始图像中的人物进行特效处理之前,该方法还包括:
提示用户选择特效处理效果;
获取用户选择的特效处理效果。其中,特效处理效果包括:蜡像效果,或者蜡笔效果,和/或色调效果,和/或美颜效果,和/或格调效果,和/或艺术效果。
可选地,将进行特效处理后的人物合成至原始图像中之后还包括:
实时显示添加有特效处理后的人物的原始图像。
可选的,所述方法还包括:
对分离出的原始图像中的人物进行特效处理后,
当监测到用户按压拍摄按钮时,存储添加有特效处理后的人物的原始图像。
本发明实施例中，通过对分离出的图片中的人物进行特效处理并将进行特效处理后的人物添加至已分离出人物的原始图像中，实现了自动、快速地合成经过特效处理的图像，增强了用户的体验感。
需要说明的是,在本文中,术语“包括”、“包含”或者其任何其他变体意在涵盖非排他性的包含,从而使得包括一系列要素的过程、方法、物品或者装置不仅包括那些要素,而且还包括没有明确列出的其他要素,或者是还包括为这种过程、方法、物品或者装置所固有的要素。在没有更多限制的情况下,由语句“包括一个……”限定的要素,并不排除在包括该要素的过程、方法、物品或者装置中还存在另外的相同要素。
上述本发明实施例序号仅仅为了描述,不代表实施例的优劣。
通过以上的实施方式的描述,本领域的技术人员可以清楚地了解到上述实施例方法可借助软件加必需的通用硬件平台的方式来实现,当然也可以通过硬件,但很多情况下前者是更佳的实施方式。基于这样的理解,本发明的技术方案本质上或者说对现有技术做出贡献的部分可以以软件产品的形式体现出来,该计算机软件产品存储在一个存储介质(如ROM/RAM、磁碟、光盘)中,包括若干指令用以使得一台终端设备(可以是手机,计算机,服务器,空调器,或者网络设备等)执行本发明每个实施例所述的方法。
以上仅为本发明的可选实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。
工业实用性
上述技术方案实现了自动、快速地合成经过特效处理的图像,增强了用户的体验感。

Claims (20)

  1. 一种处理图像的移动终端,包括:获取模块、检测模块、分离模块、处理模块和合成模块;其中,
    获取模块,设置为获取移动终端中双目摄像头捕获的,并且需要预览取景的原始图像;
    检测模块,设置为对所述原始图像进行检测,当检测到所述原始图像中有人物时,向分离模块发送分离通知;
    分离模块,设置为接收到分离通知后,从所述原始图像中分离出所述原始图像中的人物;
    处理模块,设置为对分离出的所述原始图像中的人物进行特效处理;
    合成模块,设置为将进行特效处理后的人物合成至所述原始图像中。
  2. 根据权利要求1所述的移动终端,其中,所述分离模块,是设置为通过如下方式实现从所述原始图像中分离出所述原始图像中的人物:
    确定所述原始图像中的人物的人体边缘轮廓;
    截取确定出的人体边缘轮廓以及人体边缘轮廓内的人物;
    其中,截取的人物为分离出的需要预览取景的原始图像中的人物。
  3. 根据权利要求1或2所述的移动终端,其中,所述合成模块包括:获取单元、记录单元、确定单元和处理单元;其中,
    获取单元,设置为获取所述需要预览取景的原始图像的深度信息;
    记录单元,设置为记录添加在所述已分离出人物的原始图像中特效处理后的人物的添加位置;
    确定单元,设置为根据获得的所述需要预览取景的原始图像的深度信息以及记录的特效处理后的人物的所述添加位置确定特效处理后的人物在所述已分离出人物的原始图像中的大小;
    处理单元,设置为按照确定出的特效处理后的人物的大小调整特效处理后的人物。
  4. 根据权利要求1所述的移动终端,该移动终端还包括:
    提示模块,设置为提示用户选择特效处理效果,向所述获取模块发送获取通知;
    所述获取模块,还设置为接收到获取通知后,获取用户选择的特效处理效果。
  5. 根据权利要求4所述的移动终端,其中,所述特效处理效果包括:蜡像效果,和/或蜡笔效果,和/或色调效果,和/或美颜效果,和/或格调效果,和/或艺术效果。
  6. 根据权利要求1所述的移动终端,
    所述检测模块,还设置为当监测到用户按压拍摄按钮时,向处理模块发送存储通知;
    所述处理模块,还设置为接收到存储通知后,存储添加有特效处理后的人物的原始图像。
  7. 根据权利要求1所述的移动终端,还包括:
    显示模块,设置为实时显示添加有特效处理后的人物的原始图像。
  8. 根据权利要求2所述的移动终端,其中,所述分离模块是设置为通过如下方式实现确定所述原始图像中的人物的人体边缘轮廓:
    通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓。
  9. 根据权利要求8所述的移动终端,其中,所述分离模块是设置为通过如下方式实现通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓:
    通过人脸识别获取人脸的深度信息;获取与获得的人脸的深度信息间隔在预设阈值范围内的每个像素点;确定获得的每个像素点中与人脸处于同一个连通区域的像素点;提取确定出的像素点与所述人脸位置中的像素点,以实现人物的分离。
  10. 根据权利要求1所述的移动终端，其中，
    所述原始图像为一副包含深度信息的图像。
  11. 一种处理图像的方法,应用于内置有双目摄像头的移动终端中,所述方法包括:
    当启动所述双目摄像头进行拍摄时,获取所述双目摄像头捕获的,并且需要预览取景的原始图像;
    对所述原始图像进行检测,当检测到所述原始图像中有人物时,从所述原始图像中分离出所述原始图像中的人物;
    对分离出的所述原始图像中的人物进行特效处理;
    将进行特效处理后的人物合成至所述原始图像中。
  12. 根据权利要求11所述的方法,其中,所述从所述原始图像中分离出所述原始图像中的人物,包括:
    确定所述原始图像中的人物的人体边缘轮廓;
    截取确定出的人体边缘轮廓以及人体边缘轮廓内的人物;
    其中,截取的人物为分离出的需要预览取景的原始图像中的人物。
  13. 根据权利要求11或12所述的方法,其中,所述将进行特效处理后的人物合成至所述原始图像中,包括:
    获取所述需要预览取景的原始图像的深度信息;
    记录添加在所述已分离出人物的原始图像中特效处理后的人物的添加位置;
    根据获得的所述需要预览取景的原始图像的深度信息以及记录的特效处理后的人物的所述添加位置确定特效处理后的人物在所述已分离出人物的原始图像中的大小;
    按照确定出的特效处理后的人物的大小调整特效处理后的人物。
  14. 根据权利要求11所述的方法,该方法还包括:
    在所述从所述原始图像中分离出所述原始图像中的人物之后,在所述对分离出的所述原始图像中的人物进行特效处理之前,
    提示用户选择特效处理效果;
    获取用户选择的特效处理效果。
  15. 根据权利要求14所述的方法,其中,所述特效处理效果包括:蜡像效果,和/或蜡笔效果,和/或色调效果,和/或美颜效果,和/或格调效果,和/或艺术效果。
  16. 根据权利要求11所述的方法,所述方法还包括:
    当监测到用户按压拍摄按钮时,存储添加有特效处理后的人物的原始图像。
  17. 根据权利要求11所述的方法,所述方法还包括:
    实时显示添加有特效处理后的人物的原始图像。
  18. 根据权利要求12所述的方法,其中,确定所述原始图像中的人物的人体边缘轮廓,包括:
    通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓。
  19. 根据权利要求18所述的方法,其中,通过形态学方法结合阈值分割来确定原始图像中的人物的人体边缘轮廓,包括:
    通过人脸识别获取人脸的深度信息;获取与获得的人脸的深度信息间隔在预设阈值范围内的每个像素点;确定获得的每个像素点中与人脸处于同一个连通区域的像素点;提取确定出的像素点与所述人脸位置中的像素点,以实现人物的分离。
  20. 根据权利要求11所述的方法,其中,
    所述原始图像为一副包含深度信息的图像。
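Claims 9 and 19 describe the separation step only at the level of operations: take the face depth as a reference, keep pixels whose depth falls within a preset threshold range of it, and retain the pixels in the same connected region as the face. A minimal sketch of that idea, not the patented implementation: the aligned depth map, the face box from a face detector, the tolerance value and the name `segment_person` are all assumptions for illustration, and a 4-connectivity flood fill stands in for the unspecified connected-region step.

```python
from collections import deque

import numpy as np


def segment_person(depth_map, face_box, depth_tol=0.3):
    """Return a boolean mask of the person, per the claimed idea.

    depth_map: 2-D array of per-pixel depth (e.g. metres), aligned
               with the RGB preview image (assumed available from the
               binocular camera).
    face_box:  (row0, row1, col0, col1) from a face detector (assumed).
    depth_tol: the "preset threshold range" around the face depth.
    """
    r0, r1, c0, c1 = face_box
    face_depth = float(np.median(depth_map[r0:r1, c0:c1]))  # reference depth

    # Pixels whose depth lies within the preset range of the face depth.
    candidate = np.abs(depth_map - face_depth) <= depth_tol

    # Keep only the connected region containing the face: flood fill
    # (4-connectivity) from the face-box centre.
    h, w = candidate.shape
    seed = ((r0 + r1) // 2, (c0 + c1) // 2)
    mask = np.zeros((h, w), dtype=bool)
    if not candidate[seed]:
        return mask
    queue = deque([seed])
    mask[seed] = True
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w and candidate[nr, nc] and not mask[nr, nc]:
                mask[nr, nc] = True
                queue.append((nr, nc))
    return mask
```

Under this reading, an unrelated object standing at the same depth as the person is correctly excluded, because it is not connected to the face region even though it passes the depth threshold.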
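Claims 3 and 13 determine the size of the effect-processed person from the scene depth at the recorded add position, without fixing a formula. One plausible reading, under a pinhole-camera assumption (apparent size inversely proportional to depth); `scaled_size` and its parameters are illustrative, not taken from the patent:

```python
import numpy as np


def scaled_size(orig_size, orig_depth, depth_map, add_pos):
    """Pick the pixel size of the cut-out person at its add position.

    orig_size:  (height, width) of the cut-out person in pixels
    orig_depth: depth at which the person was originally captured
    depth_map:  per-pixel depth of the background original image
    add_pos:    (row, col) recorded as the add position
    """
    depth_at_target = float(depth_map[add_pos])
    # Pinhole model: apparent size is inversely proportional to depth.
    scale = orig_depth / depth_at_target
    h, w = orig_size
    return max(1, round(h * scale)), max(1, round(w * scale))
```

For example, re-inserting the person at twice their original depth halves the rendered height and width, which keeps the composite consistent with the scene's perspective.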
PCT/CN2016/099235 2015-09-17 2016-09-18 Mobile terminal and method for processing images WO2017045647A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510595714.6A CN105187724B (zh) 2015-09-17 2015-09-17 Mobile terminal and method for processing images
CN201510595714.6 2015-09-17

Publications (1)

Publication Number Publication Date
WO2017045647A1 (zh)

Family

ID=54909550

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/099235 WO2017045647A1 (zh) 2015-09-17 2016-09-18 Mobile terminal and method for processing images

Country Status (2)

Country Link
CN (1) CN105187724B (zh)
WO (1) WO2017045647A1 (zh)


Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105187724B (zh) 2015-09-17 2019-07-19 努比亚技术有限公司 Mobile terminal and method for processing images
CN106937043A (zh) * 2017-02-16 2017-07-07 奇酷互联网络科技(深圳)有限公司 Mobile terminal and image processing method and apparatus thereof
CN107770605A (zh) * 2017-09-25 2018-03-06 广东九联科技股份有限公司 Method and system for implementing portrait image special effects
CN107767333B (zh) * 2017-10-27 2021-08-10 努比亚技术有限公司 Beautification photographing method, device and computer storage medium
US10681310B2 (en) 2018-05-07 2020-06-09 Apple Inc. Modifying video streams with supplemental content for video conferencing
US11012389B2 (en) * 2018-05-07 2021-05-18 Apple Inc. Modifying images with supplemental content for messaging
CN110336940A (zh) * 2019-06-21 2019-10-15 深圳市茄子咔咔娱乐影像科技有限公司 Method and system for synthesizing special effects based on dual-camera shooting

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20050134939A1 (en) * 2003-12-15 2005-06-23 Canon Kabushiki Kaisha Digital camera, image output method, and program
CN102638651A * 2012-04-09 2012-08-15 张文锋 Virtual-viewfinder photographing apparatus for a camera
CN102932541A * 2012-10-25 2013-02-13 广东欧珀移动通信有限公司 Mobile phone photographing method and system
CN103413270A * 2013-08-15 2013-11-27 北京小米科技有限责任公司 Image processing method, apparatus and terminal device
CN103581561A * 2013-10-30 2014-02-12 广东欧珀移动通信有限公司 Method and system for synthesizing person-and-scene images shot with a rotating camera
CN104394320A * 2014-11-26 2015-03-04 三星电子(中国)研发中心 Method, apparatus and electronic device for processing images
CN105187724A * 2015-09-17 2015-12-23 努比亚技术有限公司 Mobile terminal and method for processing images

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1885307A (zh) * 2005-06-20 2006-12-27 英华达(上海)电子有限公司 Method combining recognition and processing of face regions in digital photographic images
CN101662694B (zh) * 2008-08-29 2013-01-30 华为终端有限公司 Video presentation method, sending and receiving methods and apparatuses, and communication system
CN101610421B (zh) * 2008-06-17 2011-12-21 华为终端有限公司 Video communication method, apparatus and system
KR101792641B1 (ko) * 2011-10-07 2017-11-02 엘지전자 주식회사 Mobile terminal and method for generating an out-focused image thereof
CN103379256A (zh) * 2012-04-25 2013-10-30 华为终端有限公司 Image processing method and apparatus
CN104917965A (zh) * 2015-05-28 2015-09-16 努比亚技术有限公司 Shooting method and apparatus


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111612688A (zh) * 2020-05-27 2020-09-01 努比亚技术有限公司 Image processing method, device and computer-readable storage medium
CN111612688B (zh) * 2020-05-27 2024-04-23 努比亚技术有限公司 Image processing method, device and computer-readable storage medium

Also Published As

Publication number Publication date
CN105187724A (zh) 2015-12-23
CN105187724B (zh) 2019-07-19

Similar Documents

Publication Publication Date Title
WO2017071559A1 (zh) Image processing apparatus and method
WO2017045647A1 (zh) Mobile terminal and method for processing images
CN106454121B (zh) Dual-camera photographing method and apparatus
CN106453924B (zh) Image shooting method and apparatus
WO2017050115A1 (zh) Image synthesis method and apparatus
WO2017067520A1 (zh) Mobile terminal with binocular camera and photographing method thereof
WO2018076938A1 (zh) Image processing apparatus and method, and computer storage medium
WO2018019124A1 (zh) Image processing method, electronic device and storage medium
WO2017107629A1 (zh) Mobile terminal, data transmission system and mobile terminal shooting method
WO2017054704A1 (zh) Method and apparatus for generating video pictures
CN104954689A (zh) Method for obtaining photos with dual cameras, and shooting apparatus
WO2018059206A1 (zh) Terminal, method for acquiring video, and storage medium
CN104767941A (zh) Photographing method and apparatus
WO2017071475A1 (zh) Image processing method, terminal and storage medium
WO2017071476A1 (zh) Image synthesis method and apparatus, and storage medium
CN106534619A (zh) Method, apparatus and terminal for adjusting focus area
WO2017088662A1 (zh) Focusing method and apparatus
WO2017071542A1 (zh) Image processing method and apparatus
CN105159594A (zh) Pressure-sensor-based touch photographing apparatus and method, and mobile terminal
WO2017067523A1 (zh) Image processing method, apparatus and mobile terminal
CN104660903A (zh) Shooting method and shooting apparatus
CN106851113A (zh) Dual-camera-based photographing method and mobile terminal
WO2017054677A1 (zh) Mobile terminal shooting system and mobile terminal shooting method
CN106303229A (zh) Photographing method and apparatus
WO2017185778A1 (zh) Mobile terminal, exposure method thereof, and computer storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16845761

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 16845761

Country of ref document: EP

Kind code of ref document: A1