CN106651762A - Photo processing method, device and terminal - Google Patents
- Publication number
- CN106651762A (application CN201611225632.3A)
- Authority
- CN
- China
- Prior art keywords
- camera
- depth
- target person
- information
- photo
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/04—Context-preserving transformations, e.g. by using an importance map
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Telephone Function (AREA)
Abstract
The invention discloses a photo processing method. The method comprises: acquiring depth-of-field information for a target person in a photo shot with a first camera and a second camera, where the first camera measures the depth of field of the photographed scene, the second camera captures the image, and the photo is synthesized from the information captured by the two cameras; and, after the recognized target person is removed, filling the vacated region with background content according to the depth-of-field information. The invention further discloses a photo processing device and a terminal. This addresses the problem in the related art that the region left blank after a target person is removed is filled in a way that differs noticeably from the surrounding background; filling the blank region with the help of depth-of-field information makes the edited photo look more realistic and improves the user experience.
Description
Technical field
The present invention relates to the field of terminal technology, and more particularly to a photo processing method, device and terminal.
Background technology
With the development of the mobile Internet and the popularization of intelligent mobile terminals, the user base of intelligent mobile terminals keeps growing, and users place ever higher demands on the intelligence and user-friendliness of software.
An intelligent mobile terminal today may serve its user not only as a game console or television, but also as a learning machine or a child's playground, bringing more enjoyment to our lives.
As users grow more dependent on their mobile terminals, the applications on the terminal also multiply. Current mobile terminals can take photos with dual lenses: one camera measures the depth of field of the photographed scene, the other camera captures the image, and the information captured by the two cameras is then synthesized into a single photo.
For photos shot with such a dual camera, when an unwanted person needs to be removed from the photo, the vacated region is currently filled with the colors surrounding the removed object. Because the photo carries no positional information, the fill cannot use the true colors behind the object, so the blank region left after the target person is removed is filled poorly and differs noticeably from the background.
No solution has yet been proposed in the related art for this problem of the filled blank region differing noticeably from the background after a target person is removed.
Summary of the invention
The present invention mainly aims to propose a photo processing method, device and terminal, so as to solve the problem in the related art that the blank region left after a target person is removed is filled in a way that differs noticeably from the background.
To achieve the above object, the invention provides a photo processing method, including:
acquiring depth-of-field information for a target person in a photo shot by a first camera and a second camera, where the first camera measures the depth of field of the photographed scene, the second camera captures the image, and the photo is synthesized from the information captured by the two cameras;
after the recognized target person is removed, filling the vacated region with background content according to the depth-of-field information.
Further, before the depth-of-field information of the target person in the photo shot by the first camera and the second camera is acquired, the method also includes: recognizing the target person in the photo shot by the first camera and the second camera.
Further, recognizing the target person in the photo shot by the first camera and the second camera includes: performing image segmentation on the original image combined with the depth image, so as to separate the target person from the background.
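As one illustration of the idea, depth-assisted separation of a figure from its background can be reduced to thresholding the depth map, since a person in the foreground occupies a distinct depth band. The sketch below is a minimal toy example and not the patent's actual segmentation; the function name and depth values are invented for illustration:

```python
import numpy as np

def segment_by_depth(depth_map, near, far):
    """Return a boolean mask of pixels whose depth lies in [near, far].

    A foreground person sits in a narrower depth band than the scene
    behind them, so a depth threshold separates figure from background
    more robustly than color alone.
    """
    return (depth_map >= near) & (depth_map <= far)

# Toy 4x4 depth map (meters): a "person" at ~1.5 m, background at ~5 m.
depth = np.array([
    [5.0, 5.0, 5.0, 5.0],
    [5.0, 1.5, 1.4, 5.0],
    [5.0, 1.6, 1.5, 5.0],
    [5.0, 5.0, 5.0, 5.0],
])
person_mask = segment_by_depth(depth, near=1.0, far=2.0)  # 4 foreground pixels
```

A production system would combine such a depth mask with edge and color cues from the original image, as the segmentation step above describes.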
Further, acquiring the depth-of-field information of the target person in the photo shot by the first camera and the second camera includes: acquiring the depth information of the scene by means of a binocular ranging platform or a depth sensor.
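For binocular ranging, depth follows from stereo disparity through the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the two cameras, and d the disparity of a matched point. The helper below is a minimal sketch of that relation; the numbers are purely illustrative:

```python
def depth_from_disparity(focal_px, baseline_m, disparity_px):
    """Classic pinhole stereo relation: Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# e.g. f = 700 px, baseline = 0.12 m, disparity = 42 px  ->  Z = 2.0 m
z = depth_from_disparity(700.0, 0.12, 42.0)
```

Note that depth is inversely proportional to disparity, which is why a wider baseline between the two cameras improves range resolution at a distance.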
Further, filling the vacated region with background content according to the depth-of-field information includes: determining the positional information of the target person according to the depth-of-field information; and filling the region vacated by the target person with background content according to the positional information of the target person.
Further, filling the vacated region with background content according to the depth-of-field information includes: filling the region vacated by the target person with background content whose positional information, as shot by the first camera, is the same as the positional information of the target person shot by the second camera.
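A crude stand-in for this background filling step, copying for each vacated pixel the nearest remaining background pixel on the same row, can be sketched as follows. This is only a simplified illustration under invented names and toy data; the patent's method instead selects background content at the matching position using the depth information:

```python
import numpy as np

def fill_removed_region(image, removed_mask):
    """Fill removed pixels row by row with the nearest non-removed pixel
    to the left or right on the same row."""
    out = image.copy()
    h, w = removed_mask.shape
    for y in range(h):
        for x in range(w):
            if removed_mask[y, x]:
                # Scan outward for the nearest surviving pixel in this row.
                for r in range(1, w):
                    if x - r >= 0 and not removed_mask[y, x - r]:
                        out[y, x] = image[y, x - r]
                        break
                    if x + r < w and not removed_mask[y, x + r]:
                        out[y, x] = image[y, x + r]
                        break
    return out

# Toy grayscale image: uniform background (200) with a "person" (30).
img = np.full((4, 4), 200, dtype=np.uint8)
mask = np.zeros((4, 4), dtype=bool)
img[1:3, 1:3] = 30
mask[1:3, 1:3] = True
filled = fill_removed_region(img, mask)
```

After filling, the vacated pixels take on the surrounding background value, which is exactly the artifact-free result the depth-guided method aims for on real, non-uniform backgrounds.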
According to a further aspect of the invention, a photo processing device is provided, including:
an acquisition module, configured to acquire depth-of-field information of a target person in a photo shot by a first camera and a second camera, where the first camera measures the depth of field of the photographed scene, the second camera captures the image, and the photo is synthesized from the information captured by the two cameras;
a background filling module, configured to fill, after the recognized target person is removed, the vacated region with background content according to the depth-of-field information.
Further, the device also includes: a recognition module, configured to recognize the target person in the photo shot by the first camera and the second camera before the depth-of-field information of the target person is acquired.
Further, the recognition module is also configured to perform image segmentation on the original image combined with the depth image, so as to separate the target person from the background.
Further, the acquisition module is also configured to acquire the depth information of the scene by means of a binocular ranging platform or a depth sensor.
Further, the background filling module includes:
a determining unit, configured to determine the positional information of the target person according to the depth-of-field information;
a background filling unit, configured to fill the region vacated by the target person with background content according to the positional information of the target person.
According to a further aspect of the invention, a terminal is also provided, including the above device.
Through the present invention, depth-of-field information of the target person in the photo shot by the first camera and the second camera is acquired, and after the recognized target person is removed, the vacated region is filled with background content according to the depth-of-field information. This solves the problem in the related art that the region left blank after a target person is removed is filled in a way that differs noticeably from the background: filling the blank region with the help of depth-of-field information makes the filled photo more realistic and improves the user experience.
Description of the drawings
Fig. 1 is a hardware structure diagram of a mobile terminal implementing embodiments of the invention;
Fig. 2 is a schematic diagram of the wireless communication system of the mobile terminal shown in Fig. 1;
Fig. 3 is a flow chart of the photo processing method according to an embodiment of the invention;
Fig. 4 is a schematic diagram of background filling after a person is removed, according to an embodiment of the invention;
Fig. 5 is a schematic diagram of a stereoscopic imaging apparatus according to an embodiment of the invention;
Fig. 6 is a first schematic diagram of the basic principle of binocular ranging according to an embodiment of the invention;
Fig. 7 is a second schematic diagram of the basic principle of binocular ranging according to an embodiment of the invention;
Fig. 8 is a third schematic diagram of the basic principle of binocular ranging according to an embodiment of the invention;
Fig. 9 is a block diagram of the photo processing device according to an embodiment of the invention;
Fig. 10 is a block diagram of the photo processing device according to a preferred embodiment of the invention.
The realization of the objects, functional characteristics and advantages of the invention will be further described with reference to the drawings in conjunction with the embodiments.
Specific embodiments
It should be appreciated that the specific embodiments described herein only explain the present invention and are not intended to limit it.
The mobile terminal of each embodiment of the invention is now described with reference to the drawings. In the following description, suffixes such as "module", "part" or "unit" used to denote elements are adopted only to aid the explanation of the invention and carry no specific meaning by themselves; "module" and "part" may therefore be used interchangeably.
A mobile terminal may be implemented in a variety of forms. For example, the terminals described in the present invention may include mobile terminals such as mobile phones, smart phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable media players) and navigation devices, as well as fixed terminals such as digital TVs and desktop computers. In the following it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements used specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a hardware structure diagram of a mobile terminal implementing embodiments of the invention.
The mobile terminal 100 may include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power supply unit 190, and so on.
Fig. 1 shows the mobile terminal 100 with various components, but it should be understood that not all of the illustrated components are required; more or fewer components may alternatively be implemented. The elements of the mobile terminal 100 are discussed in more detail below.
The wireless communication unit 110 typically includes one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit 110 may include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel may include a satellite channel and/or a terrestrial channel. The broadcast management server may be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to the terminal. The broadcast signal may include a TV broadcast signal, a radio broadcast signal, a data broadcast signal, and the like, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information may also be provided via a mobile communication network, in which case it is received by the mobile communication module 112. The broadcast signal may exist in various forms; for example, it may exist as an electronic program guide (EPG) of digital multimedia broadcasting (DMB) or an electronic service guide (ESG) of digital video broadcast-handheld (DVB-H). The broadcast receiving module 111 can receive signals broadcast by various types of broadcast systems; in particular, it can receive digital broadcasts from digital broadcasting systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcast-handheld (DVB-H), the forward link media radio data system (MediaFLO) and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to suit the various broadcast systems providing broadcast signals as well as the above-mentioned digital broadcasting systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 can be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (for example, an access point, a node B, and so on), an external terminal and a server. Such radio signals may include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal and can be internally or externally coupled to the terminal. The wireless Internet access technologies involved may include WLAN (wireless LAN, Wi-Fi), Wibro (wireless broadband), Wimax (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 114 supports short-range communication. Some examples of short-range communication technology include Bluetooth™, radio-frequency identification (RFID), the Infrared Data Association (IrDA) standard, ultra-wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or acquiring the positional information of the mobile terminal. A typical example of the location information module 115 is GPS (global positioning system). With current technology, GPS calculates distance information from three or more satellites together with accurate time information, applies triangulation to the calculated information, and thereby accurately computes three-dimensional position information in terms of longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information with one additional satellite. GPS can also compute speed information by continuously calculating the current position in real time.
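The triangulation step described above can be illustrated, in simplified two-dimensional form, by solving for a position from distances to three known anchor points: subtracting the circle equations pairwise yields a linear system. This is only an illustrative sketch (real GPS works in three dimensions and additionally solves for the receiver clock error; the coordinates below are invented):

```python
import numpy as np

def trilaterate_2d(anchors, dists):
    """Solve for (x, y) from distances to three known anchors by
    subtracting circle equations to obtain a 2x2 linear system."""
    (x1, y1), (x2, y2), (x3, y3) = anchors
    r1, r2, r3 = dists
    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Receiver actually at (3, 4); three anchors with exact range measurements.
anchors = [(0, 0), (10, 0), (0, 10)]
dists = [np.hypot(3, 4), np.hypot(3 - 10, 4), np.hypot(3, 4 - 10)]
pos = trilaterate_2d(anchors, dists)
```

With noisy ranges the system becomes overdetermined and is solved by least squares, which is where the fourth satellite's error correction mentioned above comes in.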
The A/V input unit 120 is used to receive audio or video signals and may include a camera 121 and a microphone 122. The camera 121 processes the image data of still pictures or video obtained by an image capture apparatus in a video capture mode or an image capture mode, and the processed image frames may be displayed on a display unit 151. The image frames processed by the camera 121 can be stored in the memory 160 (or another storage medium) or sent via the wireless communication unit 110; two or more cameras 121 may be provided depending on the construction of the mobile terminal 100. The microphone 122 can receive sound (audio data) via the microphone 122 in an operational mode such as a telephone call mode, recording mode or speech recognition mode, and can process such sound into audio data. In the telephone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) noise or interference produced while receiving and sending audio signals.
The user input unit 130 can generate key input data according to commands input by the user, to control various operations of the mobile terminal 100. The user input unit 130 allows the user to input various types of information and may include a keyboard, a dome switch, a touch pad (for example, a touch-sensitive component that detects changes of resistance, pressure, capacitance and the like caused by being touched), a jog wheel, a jog switch, and so on. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, the open or closed state of the mobile terminal 100), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (that is, touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power supply unit 190 provides power or whether the interface unit 170 is coupled with an external device. The sensing unit 140 may include a proximity sensor 141.
The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module may store various information for verifying a user of the mobile terminal 100 and may include a user identification module (UIM), a subscriber identification module (SIM), a universal subscriber identification module (USIM), and so on. In addition, a device having an identification module (hereinafter referred to as an "identification device") may take the form of a smart card, so the identification device can be connected with the mobile terminal 100 via a port or other connecting means. The interface unit 170 can be used to receive input (for example, data, information, power, and so on) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or to transfer data between the mobile terminal 100 and an external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal 100. Various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal 100 is accurately seated in the cradle. The output unit 150 is configured to provide output signals in a visual, audio and/or tactile manner (for example, audio signals, video signals, alarm signals, vibration signals, and so on). The output unit 150 may include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 may display information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in a telephone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or other communication (such as text messaging or multimedia file downloading). When the mobile terminal 100 is in a video call mode or an image capture mode, the display unit 151 can display captured and/or received images, a UI or GUI showing the video or image and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of a layer to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 may include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and the like. Some of these displays may be constructed to be transparent to allow the user to see through them from the outside; these may be called transparent displays, a typical example being a TOLED (transparent organic light-emitting diode) display. Depending on the particular embodiment desired, the mobile terminal 100 may include two or more display units (or other display devices); for example, the mobile terminal 100 may include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 can, when the mobile terminal 100 is in a call signal reception mode, call mode, recording mode, speech recognition mode, broadcast reception mode or similar mode, transduce the audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide audio output related to the specific function performed by the mobile terminal 100 (for example, a call signal reception sound or a message reception sound). The audio output module 152 may include a speaker, a buzzer, and so on.
The alarm unit 153 can provide output to notify the occurrence of an event of the mobile terminal 100. Typical events include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 can provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration: when a call, a message or some other incoming communication is received, the alarm unit 153 can provide a tactile output (that is, vibration) to notify the user. Through such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations performed by the controller 180, and can temporarily store data that has been or will be output (for example, a phone book, messages, still images, videos, and so on). Moreover, the memory 160 can store data on the vibration and audio signals of various modes output when a touch is applied to the touch screen.
The memory 160 may include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disk, and so on. Also, the mobile terminal 100 may cooperate with a network storage device that performs the storage function of the memory 160 through a network connection.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs control and processing related to voice calls, data communication, video calls, and so on. In addition, the controller 180 may include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 may be constructed within the controller 180 or constructed separately from the controller 180. The controller 180 can also perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power supply unit 190 receives external or internal power under the control of the controller 180 and provides the appropriate power required for operating the various elements and components.
The various embodiments described herein can be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the embodiments described herein can be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an embodiment can be implemented in the controller 180. For a software implementation, an embodiment such as a procedure or function can be implemented with a separate software module that allows at least one function or operation to be performed. Software code can be implemented by a software application (or program) written in any appropriate programming language; the software code can be stored in the memory 160 and executed by the controller 180.
So far, the mobile terminal 100 has been described in terms of its functions. In addition, the mobile terminal 100 in the embodiments of the present invention may be a mobile terminal of various types, such as a folding type, bar type, swing type or slide type, without specific limitation here.
The mobile terminal 100 as shown in Fig. 1 may be constructed to operate with communication systems that transmit data via frames or packets, including wired and wireless communication systems as well as satellite-based communication systems.
The communication systems in which the mobile terminal according to the present invention is operable are now described with reference to Fig. 2.
Such communication systems may use different air interfaces and/or physical layers. For example, air interfaces used by communication systems include frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, Long Term Evolution (LTE)), the global system for mobile communications (GSM), and so on. As a non-limiting example, the following description concerns a CDMA communication system, but such teachings apply equally to other types of systems.
Referring to Fig. 2, a CDMA wireless communication system may include a plurality of mobile terminals 100, a plurality of base stations (BSs) 270, base station controllers (BSCs) 275, and a mobile switching center (MSC) 280. The MSC 280 is configured to interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to interface with the BSCs 275, which may be coupled to the base stations 270 via backhaul links. The backhaul links may be constructed according to any of several known interfaces, including, for example, European-standard/US-standard high-capacity digital circuits (E1/T1), asynchronous transfer mode (ATM), the Internet Protocol (IP), the Point-to-Point Protocol (PPP), Frame Relay, high-bit-rate digital subscriber line (HDSL), asymmetric digital subscriber line (ADSL), or various types of digital subscriber line (xDSL). It will be understood that the system as shown in Fig. 2 may include a plurality of BSCs 275.
Each BS 270 may serve one or more sectors (or regions), each sector covered by an omnidirectional antenna or by an antenna pointed in a particular direction radially away from the BS 270. Alternatively, each sector may be covered by two or more antennas for diversity reception. Each BS 270 may be configured to support a plurality of frequency assignments, each frequency assignment having a particular spectrum (e.g., 1.25 MHz, 5 MHz, etc.).
The intersection of a sector and a frequency assignment may be referred to as a CDMA channel. A BS 270 may also be referred to as a base transceiver subsystem (BTS) or by other equivalent terms. In such a case, the term "base station" may be used to refer collectively to a single BSC 275 and at least one BS 270. A base station may also be referred to as a "cell site". Alternatively, the individual sectors of a particular BS 270 may each be referred to as cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 transmits a broadcast signal to the mobile terminals 100 operating within the system. The broadcast receiving module 111 shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signal transmitted by the BT 295. Fig. 2 also shows several Global Positioning System (GPS) satellites 300, which help locate at least one of the plurality of mobile terminals 100.
Although several satellites 300 are depicted in Fig. 2, it is understood that useful position information may be obtained with any number of satellites. The location information module 115 shown in Fig. 1 (e.g., a GPS module) is typically configured to cooperate with the satellites 300 to obtain the desired position information. Instead of, or in addition to, GPS tracking technology, other technologies capable of tracking the position of the mobile terminal may be used. In addition, at least one GPS satellite 300 may selectively or additionally handle satellite DMB transmission.
As a typical operation of the wireless communication system, the BSs 270 receive reverse-link signals from various mobile terminals 100, which typically engage in calls, messaging, and other types of communication. Each reverse-link signal received by a particular base station is processed within that BS 270, and the resulting data is forwarded to the associated BSC 275. The BSC provides call resource allocation and mobility-management functions, including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for interfacing with the PSTN 290. Similarly, the PSTN 290 interfaces with the MSC 280, the MSC interfaces with the BSCs 275, and the BSCs 275 in turn control the BSs 270 to transmit forward-link signals to the mobile terminals 100.
Based on the above mobile terminal, an embodiment of the present invention provides a photo processing method. Fig. 3 is a flowchart of the photo processing method according to an embodiment of the present invention; as shown in Fig. 3, the method includes the following steps:
Step S302: obtain depth-of-field information of a target person in a photo shot by a first camera and a second camera, where the first camera is used to measure the depth of field of the shot object, the second camera is used to shoot the object, and the photo is synthesized from the information captured by the first camera and the second camera.
Step S304: after the identified target person is removed, perform background filling on the position from which the target person was removed, according to the depth-of-field information.
Through the above steps, the depth-of-field information of the target person in the photo shot by the first camera and the second camera is obtained, where the first camera measures the depth of field of the shot object, the second camera shoots the object, and the photo is synthesized from the information captured by the two cameras; after the identified target person is removed, the position from which the target person was removed is background-filled according to the depth-of-field information. This solves the problem in the related art that the region filled into the blank left by a removed target person differs noticeably from the surrounding background: because the blank is filled according to depth-of-field information, the filled result looks more realistic, improving the user experience.
Further, before the depth-of-field information of the target person in the photo shot by the first camera and the second camera is obtained, the method also includes: identifying the target person in the photo shot by the first camera and the second camera.
Fig. 4 is a schematic diagram of background filling after a person is removed, according to an embodiment of the present invention. As shown in Fig. 4, after a photo is taken, when an unrelated person needs to be removed from it, the region left behind is filled with the colors surrounding that object in the photo. The position of the object is located through the depth-of-field information; after the object is removed, only the surrounding colors at the same position as the object (i.e., with the same depth of field) are used for the fill. In other words, the position information of the removed object is determined from the depth-of-field information, and the fill then uses the colors found at that same position. Performing background filling on the position from which the target person was removed, according to the depth-of-field information, includes: determining the position information of the target person according to the depth-of-field information, and performing background filling on the vacated position according to that position information. Further, it includes: performing background filling on the vacated position using the background whose position information, as shot by the first camera, is identical to the position information of the target person shot by the second camera.
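The same-depth fill described above can be sketched in a few lines. The following pure-Python example is illustrative only, not the patented implementation: the image is a 2D grid of color values, the depth map plays the role of the first camera's measurement, and the hole left by the removed person is filled with the color of the deepest (i.e., background) neighboring pixel. The function name `fill_removed_region` and the toy data are hypothetical.

```python
def fill_removed_region(image, depth, person_mask):
    """image: 2D grid of colour values; depth: 2D grid of depths (metres);
    person_mask: 2D grid of booleans, True where the person was removed.
    Each removed pixel is filled with the colour of its deepest
    (i.e. background) non-removed neighbour."""
    h, w = len(image), len(image[0])
    out = [row[:] for row in image]
    for y in range(h):
        for x in range(w):
            if not person_mask[y][x]:
                continue
            # Collect (depth, colour) of neighbours outside the hole.
            candidates = [(depth[ny][nx], image[ny][nx])
                          for ny in range(max(0, y - 1), min(h, y + 2))
                          for nx in range(max(0, x - 1), min(w, x + 2))
                          if not person_mask[ny][nx]]
            if candidates:
                # The deepest neighbour stands in for the patent's
                # "same depth of field as the background" criterion.
                out[y][x] = max(candidates)[1]
    return out
```

A real implementation would iterate from the hole boundary inward so that multi-pixel holes are filled completely, and would blend several same-depth neighbors rather than copying a single one.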
Fig. 5 is a schematic diagram of a stereoscopic imaging apparatus according to an embodiment of the present invention. As shown in Fig. 5, the stereoscopic imaging apparatus consists of two or more digital cameras whose relative positions are fixed, so that images can be captured from different viewpoints at the same instant. 11 and 12 are the two digital cameras, and 13 is the connecting member on which 11 and 12 are fixed. This imaging system obtains two photos at the same instant; the two photos are handed to subsequent modules for processing and can be used for stereo rectification, stereo matching, and depth measurement. Obtaining the depth-of-field information of all target persons in a group photo through the first camera and the second camera includes: obtaining the depth information of the scene through a binocular ranging platform or a depth sensor. The depth measurement module takes the photos captured by the stereoscopic imaging apparatus from different viewpoints and generates a depth map for the foreground regions of the two photos using a stereoscopic measurement method. A specific embodiment is given below.
Fig. 6 is the first schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in Fig. 6, binocular vision simulates the principle of human vision: it is a method by which a computer passively perceives distance. An object is observed from two or more viewpoints, yielding images from different angles.
P is a point in physical space, c1 and c2 are two cameras observing it from different positions, and m and m' are the image positions of P in the two cameras.
From the matching relationship between pixels in the two images, the offset between corresponding pixels is computed, and the three-dimensional information of the object is obtained by the principle of triangulation. Fig. 7 is the second schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in Fig. 7, P is a point in space, Ol and Or are the centers of the left and right cameras respectively, and xl and xr are the imaging points on the left and right.
The disparity of point P between the left and right images is d = xl - xr, and the distance Z of point P is calculated with:

Z = fT / d = fT / (xl - xr)

where f is the focal length of the two digital cameras in the stereoscopic imaging apparatus (the two focal lengths are assumed here to be identical) and T is the spacing between the two digital cameras.
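The triangulation formula can be checked numerically. Below is a minimal sketch under the stated assumption of equal focal lengths; the function name `depth_from_disparity` and the example values are hypothetical.

```python
def depth_from_disparity(xl, xr, focal_px, baseline_m):
    """Binocular triangulation: Z = f * T / d, with disparity d = xl - xr.
    xl, xr: horizontal image coordinates (in pixels) of the same scene
    point in the left and right images; focal_px: focal length f in pixels
    (assumed identical for both cameras, as in the text); baseline_m: T,
    the spacing between the two cameras, in metres."""
    d = xl - xr
    if d <= 0:
        raise ValueError("disparity must be positive for a point in front of the cameras")
    return focal_px * baseline_m / d
```

For example, with f = 700 px, T = 0.1 m, and a disparity of 10 px, Z = 700 × 0.1 / 10 = 7 m; a nearer point produces a larger disparity and hence a smaller Z.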
The stereo matching algorithm is mainly responsible for putting xl and xr into correspondence. Fig. 8 is the third schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in Fig. 8, for a point p in the reference image, the other image is scanned to find the pixel q most similar to p. The similarity criterion for a match is that the difference between the local gray-level windows of the two pixels is minimal.
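The window-matching criterion just described (minimum local gray-level window difference) is, in essence, block matching with a sum-of-absolute-differences cost. The following simplified pure-Python sketch is not the patented algorithm; `match_disparity` and its parameters are hypothetical, and real systems add sub-pixel refinement and left-right consistency checks.

```python
def match_disparity(left, right, y, xl, half=1, max_disp=5):
    """Find the disparity of pixel (y, xl) of the left image by scanning
    the same row of the right image for the window most similar to the
    reference window (minimum sum of absolute differences)."""
    def window(img, cy, cx):
        # Flattened (2*half+1) x (2*half+1) grey-level window around (cy, cx).
        return [img[cy + dy][cx + dx]
                for dy in range(-half, half + 1)
                for dx in range(-half, half + 1)]

    ref = window(left, y, xl)
    best_d, best_cost = 0, float("inf")
    for d in range(max_disp + 1):      # candidate disparities d = xl - xr
        xr = xl - d
        if xr - half < 0:              # window would leave the image
            break
        cost = sum(abs(a - b) for a, b in zip(ref, window(right, y, xr)))
        if cost < best_cost:
            best_cost, best_d = cost, d
    return best_d
```

Feeding the returned disparity into the triangulation formula Z = fT/d above then yields the depth of the matched point.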
Once the depth-of-field information of an object is obtained, the actual distance between the object and the camera, the three-dimensional size of the object, and the actual distance between two points can all be computed. A depth sensor, by contrast, obtains the range information of the scene by actively emitting infrared light and measuring its reflection in the scene. Identifying the target person in the photo shot by the first camera and the second camera may include: performing image segmentation by combining the depth image with the original image, and separating the target person from the background.
Because the subject of a scene and the background regions lie at different distances from the camera, their depth values also differ. This provides a spatial-distance feature that benefits the accuracy of the image segmentation algorithm used below to separate the subject from the background.
Traditional image segmentation algorithms operate in the 2D plane and lack this important spatial-distance information of the scene, so they generally have difficulty precisely separating the background from the subject. Here, the scene depth information is used in combination with traditional popular algorithms, such as graph-cut or mean-shift, to segment the subject and the background.
After the image segmentation algorithm has produced the different image regions, the contours of the image still need to be extracted through morphological operations, and the holes inside each region filled, to ensure the integrity of the segmented regions.
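As a toy illustration of the two ideas above — depth thresholding to separate subject from background, then a morphological closing (dilation followed by erosion) to fill interior holes — the sketch below uses pure Python on a small depth grid. It is a stand-in for the graph-cut or mean-shift segmentation named in the text, all function names are hypothetical, and a production system would use library morphology routines instead.

```python
def segment_subject(depth, threshold):
    """True where the pixel is closer than `threshold` (the subject)."""
    return [[d < threshold for d in row] for row in depth]

def _any_in_neighbourhood(mask, y, x, value):
    # Does any pixel in the 3x3 neighbourhood of (y, x) equal `value`?
    h, w = len(mask), len(mask[0])
    return any(mask[ny][nx] == value
               for ny in range(max(0, y - 1), min(h, y + 2))
               for nx in range(max(0, x - 1), min(w, x + 2)))

def dilate(mask):
    return [[_any_in_neighbourhood(mask, y, x, True)
             for x in range(len(mask[0]))] for y in range(len(mask))]

def erode(mask):
    return [[not _any_in_neighbourhood(mask, y, x, False)
             for x in range(len(mask[0]))] for y in range(len(mask))]

def close_holes(mask):
    # Morphological closing: dilation then erosion fills small interior
    # holes while (approximately) preserving the region outline.
    return erode(dilate(mask))
```

On a subject region whose depth map contains a noisy interior reading, the closing fills the resulting hole so the segmented region is contiguous, as the text requires.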
An embodiment of the present invention provides a photo processing apparatus. Fig. 9 is a block diagram of the photo processing apparatus according to an embodiment of the present invention. As shown in Fig. 9, it includes:
an acquisition module 92, configured to obtain the depth-of-field information of the target person in the photo shot by the first camera and the second camera, where the first camera is used to measure the depth of field of the shot object, the second camera is used to shoot the object, and the photo is synthesized from the information captured by the first camera and the second camera;
a background filling module 94, configured to perform, after the identified target person is removed, background filling on the position from which the target person was removed, according to the depth-of-field information.
Further, the apparatus also includes an identification module, configured to identify the target person in the photo shot by the first camera and the second camera, before the depth-of-field information of the target person in that photo is obtained.
Further, the identification module is also configured to perform image segmentation by combining the depth image and the original image, separating the target person from the background.
Further, the acquisition module 92 is also configured to obtain the depth information of the scene through a binocular ranging platform or a depth sensor.
Fig. 10 is a block diagram of the photo processing apparatus according to a preferred embodiment of the present invention. As shown in Fig. 10, the background filling module 96 includes:
a determining unit 102, configured to determine the position information of the target person according to the depth-of-field information;
a background filling unit 104, configured to perform background filling on the position from which the target person was removed, according to the position information of the target person.
An embodiment of the present invention further provides a terminal that includes the above apparatus.
According to the embodiments of the present invention, the target person in the photo shot by the dual cameras is identified; the depth-of-field information of the target person in the photo is obtained through the dual cameras; and after the identified target person is removed, background filling is performed on the vacated position according to the depth-of-field information. This solves the problem in the related art that the fill placed into the blank left by a removed target person differs noticeably from the background: the blank is filled according to the depth-of-field information, so the filled result looks more realistic, improving the user experience.
It should be noted that, as used herein, the terms "comprise" and "include", and any variants thereof, are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus comprising a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or apparatus. Absent further limitation, an element defined by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or apparatus that includes it.
The numbering of the embodiments of the present invention is for description only and does not indicate the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments may be implemented by software together with a necessary general-purpose hardware platform, or of course by hardware, although in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part that contributes beyond the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions that cause a terminal device (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the method of each embodiment of the present invention.
Obviously, those skilled in the art should understand that the modules and steps of the present invention described above may be implemented with a general-purpose computing device; they may be concentrated on a single computing device or distributed across a network of multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they can be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that given herein; alternatively, they may each be fabricated as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not restricted to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of its claims. Any equivalent structural or equivalent process transformation made using the contents of the specification and drawings of the present invention, whether applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.
Claims (10)
1. A photo processing method, characterized by comprising:
obtaining depth-of-field information of a target person in a photo shot by a first camera and a second camera, wherein the first camera is used to measure a depth of field of a shot object, the second camera is used to shoot the object, and the photo is synthesized from information shot by the first camera and the second camera;
after the identified target person is removed, performing background filling on a position from which the target person was removed, according to the depth-of-field information.
2. The method according to claim 1, characterized in that, before obtaining the depth-of-field information of the target person in the photo shot by the first camera and the second camera, the method further comprises:
identifying the target person in the photo shot by the first camera and the second camera.
3. The method according to claim 2, characterized in that identifying the target person in the photo shot by the first camera and the second camera comprises:
performing image segmentation by combining a depth image and an original image, and separating the target person from a background.
4. The method according to claim 3, characterized in that obtaining the depth-of-field information of the target person in the photo shot by the first camera and the second camera comprises:
obtaining depth information of a scene through a binocular ranging platform or a depth sensor.
5. The method according to claim 4, characterized in that performing background filling on the position from which the target person was removed according to the depth-of-field information comprises:
determining position information of the target person according to the depth-of-field information;
performing background filling on the position from which the target person was removed, according to the position information of the target person.
6. The method according to claim 5, characterized in that performing background filling on the position from which the target person was removed according to the depth-of-field information comprises:
performing background filling on the position from which the target person was removed, using a background whose position information shot by the first camera is identical to the position information of the target person shot by the second camera.
7. A photo processing apparatus, characterized by comprising:
an acquisition module, configured to obtain depth-of-field information of a target person in a photo shot by a first camera and a second camera, wherein the first camera is used to measure a depth of field of a shot object, the second camera is used to shoot the object, and the photo is synthesized from information shot by the first camera and the second camera;
a background filling module, configured to perform, after the identified target person is removed, background filling on a position from which the target person was removed, according to the depth-of-field information.
8. The apparatus according to claim 7, characterized in that the identification module is further configured to perform image segmentation by combining a depth image and an original image, and to separate the target person from a background.
9. The apparatus according to claim 8, characterized in that the acquisition module is further configured to obtain depth information of a scene through a binocular ranging platform or a depth sensor.
10. A terminal, characterized by comprising the apparatus according to any one of claims 7 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611225632.3A CN106651762A (en) | 2016-12-27 | 2016-12-27 | Photo processing method, device and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106651762A true CN106651762A (en) | 2017-05-10 |
Family
ID=58831850
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611225632.3A Pending CN106651762A (en) | 2016-12-27 | 2016-12-27 | Photo processing method, device and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106651762A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104581111A (en) * | 2013-10-09 | 2015-04-29 | 奥多比公司 | Target region fill utilizing transformations |
CN105657394A (en) * | 2014-11-14 | 2016-06-08 | 东莞宇龙通信科技有限公司 | Photographing method based on double cameras, photographing device and mobile terminal |
CN105763812A (en) * | 2016-03-31 | 2016-07-13 | 北京小米移动软件有限公司 | Intelligent photographing method and device |
CN106791119A (en) * | 2016-12-27 | 2017-05-31 | 努比亚技术有限公司 | A kind of photo processing method, device and terminal |
2016-12-27 — CN CN201611225632.3A patent/CN106651762A/en active Pending
Non-Patent Citations (2)
Title |
---|
WU, Li: "Research on Background Replacement Methods for Portrait Photos", China Masters' Theses Full-text Database (Information Science and Technology) *
LI, Hongliang et al.: "Video Segmentation and Its Applications", National Defense Industry Press, 30 April 2014 *
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108037872A (en) * | 2017-11-29 | 2018-05-15 | 上海爱优威软件开发有限公司 | A kind of photo editing method and terminal device |
CN111247790A (en) * | 2019-02-21 | 2020-06-05 | 深圳市大疆创新科技有限公司 | Image processing method and device, image shooting and processing system and carrier |
WO2020168515A1 (en) * | 2019-02-21 | 2020-08-27 | 深圳市大疆创新科技有限公司 | Image processing method and apparatus, image capture processing system, and carrier |
CN112083864A (en) * | 2020-09-18 | 2020-12-15 | 深圳铂睿智恒科技有限公司 | Method, device and equipment for processing object to be deleted |
CN112083864B (en) * | 2020-09-18 | 2024-08-13 | 酷赛通信科技股份有限公司 | Method, device and equipment for processing object to be deleted |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN104835165B (en) | Image processing method and image processing device | |
CN106878588A (en) | A kind of video background blurs terminal and method | |
CN106534590B (en) | A kind of photo processing method, device and terminal | |
CN105227837A (en) | A kind of image combining method and device | |
CN106651867A (en) | Interactive image segmentation method and apparatus, and terminal | |
CN106331499A (en) | Focusing method and shooting equipment | |
CN106878949A (en) | A kind of positioning terminal based on dual camera, system and method | |
CN106846345A (en) | A kind of method for realizing interactive image segmentation, device and terminal | |
CN106791111A (en) | A kind of images share method, device and terminal | |
CN106791119A (en) | A kind of photo processing method, device and terminal | |
CN106569678A (en) | Display adjusting method and device of suspending operation board and terminal | |
CN106850941A (en) | Method, photo taking and device | |
CN106851125A (en) | A kind of mobile terminal and multiple-exposure image pickup method | |
CN106898003A (en) | A kind of method for realizing interactive image segmentation, device and terminal | |
CN106886999A (en) | A kind of method for realizing interactive image segmentation, device and terminal | |
CN106651762A (en) | Photo processing method, device and terminal | |
CN106846323A (en) | A kind of method for realizing interactive image segmentation, device and terminal | |
CN104935822A (en) | Method and device for processing images | |
CN106651773A (en) | Picture processing method and device | |
CN106780516A (en) | A kind of method for realizing interactive image segmentation, device and terminal | |
CN107018326A (en) | A kind of image pickup method and device | |
CN106898005A (en) | A kind of method for realizing interactive image segmentation, device and terminal | |
CN106875399A (en) | A kind of method for realizing interactive image segmentation, device and terminal | |
CN106875347A (en) | A kind of picture processing device and method | |
CN106646442A (en) | Distance measurement method and terminal |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170510 |