CN106791119A - Photo processing method, device and terminal - Google Patents
Photo processing method, device and terminal
- Publication number
- CN106791119A CN106791119A CN201611225193.6A CN201611225193A CN106791119A CN 106791119 A CN106791119 A CN 106791119A CN 201611225193 A CN201611225193 A CN 201611225193A CN 106791119 A CN106791119 A CN 106791119A
- Authority
- CN
- China
- Prior art keywords
- camera
- photo
- information
- target persons
- collective
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04M—TELEPHONIC COMMUNICATION
- H04M1/00—Substation equipment, e.g. for use by subscribers
- H04M1/72—Mobile telephones; Cordless telephones, i.e. devices for establishing wireless links to base stations without route selection
- H04M1/724—User interfaces specially adapted for cordless or mobile telephones
- H04M1/72403—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality
- H04M1/7243—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages
- H04M1/72439—User interfaces specially adapted for cordless or mobile telephones with means for local support of applications that increase the functionality with interactive means for internal management of messages for image or video messaging
Landscapes
- Engineering & Computer Science (AREA)
- Human Computer Interaction (AREA)
- Computer Networks & Wireless Communication (AREA)
- Signal Processing (AREA)
- Multimedia (AREA)
- Business, Economics & Management (AREA)
- General Business, Economics & Management (AREA)
- Telephone Function (AREA)
- Mobile Radio Communication Systems (AREA)
Abstract
The invention discloses a photo processing method in which all target persons in a group photo taken by a dual camera are identified, the depth-of-field information of all target persons in the group photo is obtained through the dual camera, the positions of all target persons in the group photo are determined according to that depth-of-field information, and those positions are marked. The invention also discloses a photo processing device and a terminal. The invention solves the problem in the related art that a group photo requires the user to mark person information by hand, which is time-consuming; it marks the information of target persons automatically and improves the user experience.
Description
Technical field
The present invention relates to the field of terminal technology, and in particular to a photo processing method, device and terminal.
Background technology
With the development of the mobile Internet and the popularization of intelligent mobile terminals, the user base of intelligent mobile terminals keeps growing, and users place ever higher demands on the intelligence and user-friendliness of software.
In the existing technology, an intelligent mobile terminal is in fact more than a phone: a user may treat it as a game console or a television, as a learning machine, or even as a playground for a child, bringing more enjoyment to daily life.
As users grow more dependent on mobile terminals, the number of applications running on them keeps increasing. Current mobile terminals can take photos with twin lenses: one camera is used to measure the depth of field of the subject, the other camera is used to photograph the subject, and the information captured by the two cameras is then synthesized into a single photo.
At present, in a group photo taken by such a dual camera, the user must mark the information of each person by hand, which is time-consuming. No solution has yet been proposed in the related art for this problem, namely that a group photo requires the user to label person information manually.
Summary of the invention
The main object of the present invention is to provide a photo processing method, device and terminal, with a view to solving the problem in the related art that a group photo requires the user to mark person information by hand, which is time-consuming.
To achieve the above object, the present invention provides a photo processing method, including:
identifying all target persons in a group photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the subject, the second camera is used to photograph the subject, and the photo is synthesized from the information captured by the first camera and the second camera;
obtaining, through the first camera and the second camera, the depth-of-field information of all target persons in the group photo;
determining the positions of all target persons in the group photo according to the depth-of-field information, and marking the positions of all target persons.
Further, obtaining the depth-of-field information of all target persons in the group photo through the first camera and the second camera includes: obtaining the depth information of the scene by binocular ranging or by a depth sensor.
Further, after identifying all target persons in the group photo taken by the first camera and the second camera, the method also includes: obtaining the related information of all target persons from a background database, wherein the related information includes name and sex.
Further, marking the positions of all target persons includes: displaying the position and the related information of every target person as marks on the group photo.
Further, marking the positions of all target persons includes: indicating the position and the related information of every target person on the back of the group photo or below it.
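As an illustration of marking below the photo, the hypothetical helper here lays out one caption line per person, ordered left to right by horizontal position; the names, data and layout are invented for the example and are not taken from the patent.

```python
def caption_below(photo_marks):
    """photo_marks: list of (name, sex, (x, y)) tuples, one per target person.
    Returns a caption block sorted left to right by the x coordinate."""
    lines = []
    for name, sex, (x, y) in sorted(photo_marks, key=lambda m: m[2][0]):
        lines.append(f"{name} ({sex}) at ({x}, {y})")
    return "\n".join(lines)

marks = [("Li", "F", (320, 110)), ("Wang", "M", (40, 120))]
caption = caption_below(marks)
```

A real device would render this text into the image margin or the photo's metadata rather than return a plain string.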
According to another aspect of the present invention, a photo processing device is provided, including:
an identification module, configured to identify all target persons in a group photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the subject, the second camera is used to photograph the subject, and the photo is synthesized from the information captured by the first camera and the second camera;
a first acquisition module, configured to obtain, through the first camera and the second camera, the depth-of-field information of all target persons in the group photo;
a determining module, configured to determine the positions of all target persons in the group photo according to the depth-of-field information, and to mark the positions of all target persons.
Further, the first acquisition module is also configured to obtain the depth information of the scene by binocular ranging or by a depth sensor.
Further, the device also includes: a second acquisition module, configured to obtain the related information of all target persons from a background database, wherein the related information includes name and sex.
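The background-database lookup could be sketched as follows. The schema, table name and sample rows are invented for illustration; the patent does not specify how recognized faces are keyed to database records.

```python
import sqlite3

# Hypothetical background database mapping a recognized face ID to
# the person's related information (name and sex).
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE persons (face_id INTEGER PRIMARY KEY, name TEXT, sex TEXT)")
conn.executemany("INSERT INTO persons VALUES (?, ?, ?)",
                 [(1, "Zhang", "M"), (2, "Liu", "F")])

def related_info(face_ids):
    """Fetch (name, sex) for each recognized face ID via a parameterized query."""
    placeholders = ",".join("?" * len(face_ids))
    rows = conn.execute(
        f"SELECT face_id, name, sex FROM persons WHERE face_id IN ({placeholders})",
        face_ids).fetchall()
    return {fid: (name, sex) for fid, name, sex in rows}

info = related_info([1, 2])
```

The parameterized `IN (...)` query avoids string interpolation of user data; an empty ID list would need a separate guard in real code.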
Further, the determining module includes: a marking unit, configured to display the position and the related information of every target person as marks on the group photo.
According to another aspect of the present invention, a terminal is also provided, including the above device.
Through the present invention, all target persons in a group photo taken by a dual camera are identified, the depth-of-field information of all target persons in the group photo is obtained through the dual camera, the positions of all target persons in the group photo are determined according to that depth-of-field information, and those positions are marked. This solves the problem in the related art that a group photo requires the user to mark person information by hand, which is time-consuming; it marks the information of target persons automatically and improves the user experience.
Brief description of the drawings
Fig. 1 is a hardware architecture diagram of a mobile terminal for realizing the embodiments of the present invention;
Fig. 2 is a schematic diagram of a wireless communication system for the mobile terminal shown in Fig. 1;
Fig. 3 is a flow chart of a photo processing method according to an embodiment of the present invention;
Fig. 4 is a schematic diagram of target-person marking in a group photo according to an embodiment of the present invention;
Fig. 5 is a schematic diagram of a stereoscopic imaging apparatus according to an embodiment of the present invention;
Fig. 6 is schematic diagram one of the basic principle of binocular ranging according to an embodiment of the present invention;
Fig. 7 is schematic diagram two of the basic principle of binocular ranging according to an embodiment of the present invention;
Fig. 8 is schematic diagram three of the basic principle of binocular ranging according to an embodiment of the present invention;
Fig. 9 is a block diagram of a photo processing device according to an embodiment of the present invention;
Fig. 10 is block diagram one of a photo processing device according to a preferred embodiment of the present invention;
Fig. 11 is block diagram two of a photo processing device according to a preferred embodiment of the present invention.
The realization of the object of the present invention, its functional characteristics and its advantages will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be appreciated that the specific embodiments described herein merely illustrate the present invention and are not intended to limit it.
The mobile terminal of each embodiment of the present invention is now described with reference to the drawings. In the following description, suffixes such as "module", "part" or "unit" used for elements serve only to aid the explanation of the invention and have no specific meaning by themselves; "module" and "part" may therefore be used interchangeably.
A mobile terminal may be implemented in a variety of forms. For example, the terminal described in the present invention may include mobile terminals such as a mobile phone, smart phone, notebook computer, digital broadcast receiver, PDA (personal digital assistant), PAD (tablet computer), PMP (portable media player) or navigation device, as well as fixed terminals such as a digital TV or a desktop computer. In the following it is assumed that the terminal is a mobile terminal. However, those skilled in the art will understand that, apart from elements intended specifically for mobile purposes, the construction according to the embodiments of the present invention can also be applied to terminals of the fixed type.
Fig. 1 is a hardware architecture diagram of a mobile terminal for realizing the embodiments of the present invention.
The mobile terminal 100 can include a wireless communication unit 110, an A/V (audio/video) input unit 120, a user input unit 130, a sensing unit 140, an output unit 150, a memory 160, an interface unit 170, a controller 180, a power unit 190, and so on.
Fig. 1 shows the mobile terminal 100 with various components, but it should be understood that not all of the components shown are required to be implemented; more or fewer components may alternatively be implemented. The elements of the mobile terminal 100 are discussed in more detail below.
The wireless communication unit 110 can generally include one or more components that allow radio communication between the mobile terminal 100 and a wireless communication system or network. For example, the wireless communication unit 110 can include at least one of a broadcast receiving module 111, a mobile communication module 112, a wireless Internet module 113, a short-range communication module 114 and a location information module 115.
The broadcast receiving module 111 receives broadcast signals and/or broadcast-related information from an external broadcast management server via a broadcast channel. The broadcast channel can include a satellite channel and/or a terrestrial channel. The broadcast management server can be a server that generates and sends broadcast signals and/or broadcast-related information, or a server that receives previously generated broadcast signals and/or broadcast-related information and sends them to the terminal. The broadcast signal can include a TV broadcast signal, a radio broadcast signal, a data broadcast signal and so on, and may further include a broadcast signal combined with a TV or radio broadcast signal. The broadcast-related information can also be provided via a mobile communication network, in which case it can be received by the mobile communication module 112. The broadcast signal can exist in various forms; for example, it can exist in the form of an electronic program guide (EPG) of digital multimedia broadcasting (DMB), an electronic service guide (ESG) of digital video broadcasting-handheld (DVB-H), and so on. The broadcast receiving module 111 can receive signals and broadcasts by using various types of broadcast systems. In particular, the broadcast receiving module 111 can receive digital broadcasts by using digital broadcast systems such as digital multimedia broadcasting-terrestrial (DMB-T), digital multimedia broadcasting-satellite (DMB-S), digital video broadcasting-handheld (DVB-H), the data broadcast system of media forward link only (MediaFLO), and integrated services digital broadcasting-terrestrial (ISDB-T). The broadcast receiving module 111 may be constructed to be suitable for the various broadcast systems providing broadcast signals as well as the above digital broadcast systems. Broadcast signals and/or broadcast-related information received via the broadcast receiving module 111 can be stored in the memory 160 (or another type of storage medium).
The mobile communication module 112 sends radio signals to and/or receives radio signals from at least one of a base station (for example, an access point or a Node B), an external terminal and a server. Such radio signals can include voice call signals, video call signals, or various types of data sent and/or received according to text and/or multimedia messages.
The wireless Internet module 113 supports wireless Internet access for the mobile terminal and can be coupled to the terminal internally or externally. The wireless Internet access technologies involved in this module can include WLAN (wireless LAN, Wi-Fi), WiBro (wireless broadband), WiMAX (worldwide interoperability for microwave access), HSDPA (high-speed downlink packet access), and so on.
The short-range communication module 114 is a module for supporting short-range communication. Some examples of short-range communication technology include Bluetooth™, radio frequency identification (RFID), the Infrared Data Association (IrDA) standard, ultra-wideband (UWB), ZigBee™, and so on.
The location information module 115 is a module for checking or obtaining the position information of the mobile terminal; a typical example of the location information module 115 is GPS (the global positioning system). Under current technology, GPS calculates distance information from three or more satellites together with accurate time information, applies triangulation to the calculated information, and thereby calculates accurate three-dimensional position information in terms of longitude, latitude and altitude. Currently, the method for calculating position and time information uses three satellites and corrects the error of the calculated position and time information with one additional satellite. In addition, GPS can calculate speed information by continuously computing the current position in real time.
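The triangulation idea described above can be illustrated with a minimal two-dimensional trilateration sketch; real GPS solves in three dimensions plus a receiver clock-bias term, and the anchor positions and ranges below are invented for the example.

```python
def trilaterate(p1, r1, p2, r2, p3, r3):
    """Solve a 2-D position from three anchor points and measured ranges.
    Subtracting the circle equations pairwise yields two linear equations."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    a1, b1 = 2 * (x2 - x1), 2 * (y2 - y1)
    c1 = r1**2 - r2**2 - x1**2 + x2**2 - y1**2 + y2**2
    a2, b2 = 2 * (x3 - x2), 2 * (y3 - y2)
    c2 = r2**2 - r3**2 - x2**2 + x3**2 - y2**2 + y3**2
    det = a1 * b2 - a2 * b1
    return ((c1 * b2 - c2 * b1) / det, (a1 * c2 - a2 * c1) / det)

# A receiver at (1, 1) measures ranges sqrt(2), sqrt(10), sqrt(10)
# to anchors at (0, 0), (4, 0) and (0, 4).
x, y = trilaterate((0, 0), 2**0.5, (4, 0), 10**0.5, (0, 4), 10**0.5)
```

With noisy ranges the system becomes over-determined and would be solved by least squares instead of this exact elimination.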
The A/V input unit 120 is used to receive audio or video signals and can include a camera 121 and a microphone 122. The camera 121 processes the image data of still pictures or video obtained by an image capture apparatus in a video capture mode or an image capture mode, and the processed image frames can be displayed on a display unit 151. Image frames processed by the camera 121 can be stored in the memory 160 (or another storage medium) or sent via the wireless communication unit 110, and two or more cameras 121 can be provided according to the construction of the mobile terminal 100. The microphone 122 can receive sound (audio data) in operating modes such as a phone call mode, a recording mode or a speech recognition mode, and can process such sound into audio data. In the phone call mode, the processed audio (voice) data can be converted into a format that can be sent to a mobile communication base station via the mobile communication module 112 for output. The microphone 122 can implement various types of noise elimination (or suppression) algorithms to eliminate (or suppress) the noise or interference produced while receiving and sending audio signals.
The user input unit 130 can generate key input data according to commands input by the user to control the various operations of the mobile terminal 100. The user input unit 130 allows the user to input various types of information and can include a keyboard, a metal dome, a touch pad (for example, a touch-sensitive component that detects changes of resistance, pressure, capacitance and the like caused by touch), a scroll wheel, a joystick, and so on. In particular, when the touch pad is superimposed on the display unit 151 in the form of a layer, a touch screen can be formed.
The sensing unit 140 detects the current state of the mobile terminal 100 (for example, whether the mobile terminal 100 is open or closed), the position of the mobile terminal 100, the presence or absence of user contact with the mobile terminal 100 (that is, touch input), the orientation of the mobile terminal 100, the acceleration or deceleration movement and direction of the mobile terminal 100, and so on, and generates commands or signals for controlling the operation of the mobile terminal 100. For example, when the mobile terminal 100 is implemented as a slide-type mobile phone, the sensing unit 140 can sense whether the slide-type phone is open or closed. In addition, the sensing unit 140 can detect whether the power unit 190 supplies power and whether the interface unit 170 is coupled with an external device. The sensing unit 140 can include a proximity sensor 141.
The interface unit 170 serves as an interface through which at least one external device can connect with the mobile terminal 100. For example, the external device can include a wired or wireless headset port, an external power (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device with an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and so on. The identification module can store various information for verifying the user's use of the mobile terminal 100 and can include a user identity module (UIM), a subscriber identity module (SIM), a universal subscriber identity module (USIM), and so on. In addition, the device with the identification module (hereinafter referred to as the "identifying device") can take the form of a smart card; the identifying device can therefore be connected with the mobile terminal 100 via a port or other connecting means. The interface unit 170 can be used to receive input (for example, data, information, power and the like) from an external device and transfer the received input to one or more elements in the mobile terminal 100, or can be used to transmit data between the mobile terminal 100 and an external device.
In addition, when the mobile terminal 100 is connected with an external cradle, the interface unit 170 can serve as a path through which power is supplied from the cradle to the mobile terminal 100, or as a path through which various command signals input from the cradle are transferred to the mobile terminal 100. Various command signals or power input from the cradle can serve as signals for recognizing whether the mobile terminal 100 is accurately seated on the cradle. The output unit 150 is configured to provide output signals in a visual, audio and/or tactile manner (for example, audio signals, video signals, alarm signals, vibration signals and the like). The output unit 150 can include a display unit 151, an audio output module 152, an alarm unit 153, and so on.
The display unit 151 can display the information processed in the mobile terminal 100. For example, when the mobile terminal 100 is in the phone call mode, the display unit 151 can display a user interface (UI) or graphical user interface (GUI) related to the call or to other communication (for example, text messaging, multimedia file downloading and so on). When the mobile terminal 100 is in the video call mode or the image capture mode, the display unit 151 can display captured and/or received images, a UI or GUI showing the video or image and related functions, and so on.
Meanwhile, when the display unit 151 and the touch pad are superimposed on each other in the form of layers to form a touch screen, the display unit 151 can serve as both an input device and an output device. The display unit 151 can include at least one of a liquid crystal display (LCD), a thin-film-transistor LCD (TFT-LCD), an organic light-emitting diode (OLED) display, a flexible display, a three-dimensional (3D) display, and so on. Some of these displays may be constructed to be transparent to allow the user to watch from the outside; these can be called transparent displays, and a typical transparent display can be, for example, a TOLED (transparent organic light-emitting diode) display. Depending on the particular desired implementation, the mobile terminal 100 can include two or more display units (or other display devices); for example, the mobile terminal 100 can include an external display unit (not shown) and an internal display unit (not shown). The touch screen can be used to detect touch input pressure as well as touch input position and touch input area.
The audio output module 152 can, when the mobile terminal 100 is in a mode such as the call signal reception mode, call mode, recording mode, speech recognition mode or broadcast reception mode, convert audio data received by the wireless communication unit 110 or stored in the memory 160 into an audio signal and output it as sound. Moreover, the audio output module 152 can provide audio output related to a specific function performed by the mobile terminal 100 (for example, a call signal reception sound, a message reception sound and so on). The audio output module 152 can include a speaker, a buzzer, and so on.
The alarm unit 153 can provide output to notify the mobile terminal 100 of the occurrence of an event. Typical events can include call reception, message reception, key signal input, touch input, and so on. In addition to audio or video output, the alarm unit 153 can provide output in different ways to notify the occurrence of an event. For example, the alarm unit 153 can provide output in the form of vibration: when a call, a message or some other incoming communication is received, the alarm unit 153 can provide a tactile output (that is, vibration) to notify the user. By providing such tactile output, the user can recognize the occurrence of various events even when the user's mobile phone is in the user's pocket. The alarm unit 153 can also provide output notifying the occurrence of an event via the display unit 151 or the audio output module 152.
The memory 160 can store software programs for the processing and control operations performed by the controller 180, or can temporarily store data that has been output or will be output (for example, a phone book, messages, still images, video and so on). Moreover, the memory 160 can store data on the various modes of vibration and audio signals output when a touch is applied to the touch screen.
The memory 160 can include at least one type of storage medium, including flash memory, hard disk, multimedia card, card-type memory (for example, SD or DX memory), random access memory (RAM), static random access memory (SRAM), read-only memory (ROM), electrically erasable programmable read-only memory (EEPROM), programmable read-only memory (PROM), magnetic memory, magnetic disk, optical disc, and so on. Moreover, the mobile terminal 100 can cooperate, over a network connection, with a network storage device that performs the storage function of the memory 160.
The controller 180 generally controls the overall operation of the mobile terminal. For example, the controller 180 performs the control and processing related to voice calls, data communication, video calls and so on. In addition, the controller 180 can include a multimedia module 181 for reproducing (or playing back) multimedia data; the multimedia module 181 can be constructed inside the controller 180 or constructed separately from the controller 180. The controller 180 can also perform pattern recognition processing to recognize handwriting input or drawing input performed on the touch screen as characters or images.
The power unit 190 receives external or internal power under the control of the controller 180 and provides the appropriate power required to operate each element and component.
The various implementations described herein can be implemented in a computer-readable medium using, for example, computer software, hardware, or any combination thereof. For a hardware implementation, the implementations described herein can be implemented using at least one of an application-specific integrated circuit (ASIC), a digital signal processor (DSP), a digital signal processing device (DSPD), a programmable logic device (PLD), a field-programmable gate array (FPGA), a processor, a controller, a microcontroller, a microprocessor, or an electronic unit designed to perform the functions described herein; in some cases, such an implementation can be implemented in the controller 180. For a software implementation, an implementation such as a process or function can be implemented with a separate software module that allows at least one function or operation to be performed. The software code can be implemented by a software application (or program) written in any appropriate programming language, stored in the memory 160, and executed by the controller 180.
So far, the mobile terminal 100 has been described according to its functions. In addition, the mobile terminal 100 in the embodiments of the present invention can be a folding, bar, swing, slide or other type of mobile terminal; no specific limitation is made here.
The mobile terminal 100 shown in Fig. 1 may be constructed to operate with wired and wireless communication systems, as well as satellite-based communication systems, that transmit data via frames or packets.
The communication system in which the mobile terminal of the present invention can operate is now described with reference to Fig. 2.
Such communication systems can use different air interfaces and/or physical layers. For example, the air interfaces used by communication systems include, for example, frequency division multiple access (FDMA), time division multiple access (TDMA), code division multiple access (CDMA), the universal mobile telecommunications system (UMTS) (in particular, Long Term Evolution (LTE)), the global system for mobile communications (GSM), and so on. As a non-limiting example, the following description relates to a CDMA communication system, but such teaching applies equally to other types of system.
Referring to Fig. 2, a CDMA wireless communication system can include a plurality of mobile terminals 100, a plurality of base stations (BS) 270, base station controllers (BSC) 275 and a mobile switching center (MSC) 280. The MSC 280 is configured to form an interface with a public switched telephone network (PSTN) 290. The MSC 280 is also configured to form an interface with the BSCs 275, which can be coupled to the base stations 270 via backhaul links. The backhaul links can be constructed according to any of several known interfaces, which can include, for example, European-standard/American-standard high-capacity digital circuits (E1/T1), asynchronous transfer mode (ATM), the Internet Protocol (IP), the point-to-point protocol (PPP), frame relay, high-bit-rate digital subscriber line (HDSL), asymmetric digital subscriber line (ADSL) or various types of digital subscriber line (xDSL). It will be appreciated that the system can include a plurality of BSCs 275, as shown in Fig. 2.
Each BS 270 can serve one or more sectors (or regions), with each sector covered by an omnidirectional antenna or by an antenna pointing in a specific direction radially away from the BS 270. Alternatively, each sector can be covered by two or more antennas for diversity reception. Each BS 270 can be constructed to support a plurality of frequency assignments, with each frequency assignment having a specific spectrum (for example, 1.25 MHz, 5 MHz and so on).
The intersection of a sector and a frequency assignment can be referred to as a CDMA channel. The BS 270 can also be referred to as a base transceiver subsystem (BTS) or another equivalent term. In such a case, the term "base station" can be used to refer broadly to a single BSC 275 and at least one BS 270. A base station can also be referred to as a "cell site". Alternatively, each sector of a particular BS 270 can be referred to as a plurality of cell sites.
As shown in Fig. 2, a broadcast transmitter (BT) 295 sends a broadcast signal to the mobile terminals 100 operating in the system. The broadcast receiving module 111 shown in Fig. 1 is provided at the mobile terminal 100 to receive the broadcast signal sent by the BT 295. Fig. 2 also shows several global positioning system (GPS) satellites 300; the satellites 300 help locate at least one of the plurality of mobile terminals 100.
Fig. 2 depicts a plurality of satellites 300, but it should be understood that useful position information can be obtained with any number of satellites. The location information module 115 shown in Fig. 1 (for example, GPS) is generally configured to cooperate with the satellites 300 to obtain the desired position information. Instead of GPS tracking technology, or in addition to it, other technologies that can track the position of a mobile terminal can be used. In addition, at least one GPS satellite 300 can selectively or additionally handle satellite DMB transmission.
As one typical operation of the wireless communication system, the BS 270 receives reverse-link signals from various mobile terminals 100. The mobile terminals 100 typically participate in calls, messaging and other types of communication. Each reverse-link signal received by a particular base station is processed in that BS 270, and the resulting data is forwarded to the related BSC 275. The BSC provides call resource allocation and mobility management functions, including the coordination of soft handoff procedures between BSs 270. The BSC 275 also routes the received data to the MSC 280, which provides additional routing services for forming an interface with the PSTN 290. Similarly, the PSTN 290 forms an interface with the MSC 280, the MSC forms an interface with the BSCs 275, and the BSCs 275 correspondingly control the BSs 270 to send forward-link signals to the mobile terminals 100.
Based on the mobile terminal described above, an embodiment of the present invention provides a photo processing method. FIG. 3 is a flowchart of the photo processing method according to an embodiment of the present invention. As shown in FIG. 3, the method comprises the following steps:
Step S302: identifying all target persons in a collective photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the photographed object, the second camera is used to photograph the object, and the photo is synthesized from the information captured by the first camera and the second camera;
Step S304: obtaining, by means of the first camera and the second camera, the depth-of-field information of all target persons in the collective photo;
Step S306: determining the position information of all target persons in the collective photo according to the depth-of-field information, and marking the position information of all target persons.
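The three steps can be sketched as a minimal pipeline. The function names, the input format, and the stand-in callbacks below are illustrative assumptions, not the embodiment's actual implementation:

```python
# Hypothetical end-to-end sketch of steps S302-S306. The recognizer,
# depth source, and locator are stand-ins for the dual-camera hardware
# and the background database described in the text.

def process_collective_photo(photo, recognize, measure_depth, locate):
    """recognize(photo)  -> list of person identifiers          (S302)
       measure_depth(p)  -> depth-of-field value for person p   (S304)
       locate(p, depth)  -> position label for person p         (S306)"""
    persons = recognize(photo)
    depths = {p: measure_depth(p) for p in persons}
    return {p: locate(p, depths[p]) for p in persons}
```

Any concrete recognizer and depth source can be plugged in; the pipeline itself only fixes the order of the three steps.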
Through the above steps, all target persons in a collective photo taken with dual cameras are identified, the depth-of-field information of all target persons in the photo is obtained via the dual cameras, the position information of all target persons is determined from that depth-of-field information, and the positions are marked. This solves the time-consuming problem in the related art of requiring the user to manually label person information in a collective photo, achieves automatic labeling of target-person information, and improves the user experience.
FIG. 4 is a schematic diagram of target-person labeling in a collective photo according to an embodiment of the present invention. As shown in FIG. 4, after a collective photo is taken with the dual cameras, the mobile terminal performs image recognition on the photographed persons and, after recognition, retrieves each person's relevant information, such as name and gender, from a background database. The depth-of-field information obtained by the two lenses is then used to determine each person's position at the shooting location, for example, "first row, second from the left". The mobile terminal labels all the position information on the back or at the bottom of the photo, similar to the name list under a graduation photo. The user can manually modify the automatically generated person information and position information and then save them. In short, the position of a person or object is determined from the depth-of-field information, and that position is then marked on the photo.
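A minimal sketch of this position-labeling step, assuming detected faces arrive as (name, image x-coordinate, depth in metres) tuples and that people at similar depths form one row; the input format and the row-gap threshold are illustrative assumptions:

```python
# Hypothetical sketch: derive "row N, position M" labels (as in
# "first row, second from the left") from image x-coordinate and
# measured depth. All names and thresholds are assumptions.

def label_positions(faces, row_gap=0.5):
    """faces: list of (name, x_pixel, depth_m) tuples.
    Returns {name: (row, column)} with row 1 nearest the camera."""
    # Sort by depth so people at similar distances fall into one row.
    ordered = sorted(faces, key=lambda f: f[2])
    rows, current = [], [ordered[0]]
    for face in ordered[1:]:
        if face[2] - current[-1][2] <= row_gap:
            current.append(face)
        else:
            rows.append(current)
            current = [face]
    rows.append(current)
    labels = {}
    for r, row in enumerate(rows, start=1):
        # Within a row, order left to right by x-coordinate.
        for c, face in enumerate(sorted(row, key=lambda f: f[1]), start=1):
            labels[face[0]] = (r, c)
    return labels
```

The resulting (row, column) pairs are what would be written under the photo, and what the user could still correct by hand.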
Further, after all target persons in the collective photo taken by the first camera and the second camera have been identified, the relevant information of all target persons is obtained from the background database, wherein the relevant information includes name and gender.
Further, after the position information of all target persons in the collective photo has been determined according to the depth-of-field information, the positions and the relevant information of all target persons are displayed as labels on the collective photo. The positions and relevant information may be indicated on the back or at the bottom of the collective photo.
FIG. 5 is a schematic diagram of a stereoscopic imaging apparatus according to an embodiment of the present invention. As shown in FIG. 5, the stereoscopic imaging apparatus consists of two or more digital cameras whose relative positions are fixed, so that images can be captured from different viewpoints at the same instant. Reference numerals 11 and 12 denote the two digital cameras, and 13 denotes the connecting member on which cameras 11 and 12 are fixed. This imaging system can obtain two photos at the same instant; the two photos are passed to subsequent modules for processing and can be used for subsequent stereo rectification, stereo matching, and depth measurement. Obtaining the depth-of-field information of all target persons in the collective photo by means of the first camera and the second camera may include: obtaining the depth information of the scene by a binocular ranging platform or a depth sensor.
The depth measurement module takes the photos of different viewpoints captured by the stereoscopic imaging apparatus and generates a depth map for the foreground regions of the two photos using stereoscopic measurement. A specific embodiment is given below.
FIG. 6 is a first schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in FIG. 6, binocular vision simulates the principle of human vision: it is a method by which a computer passively perceives distance. An object is observed from two or more viewpoints, yielding images from different perspectives.
P is a point in physical space, c1 and c2 are two cameras viewing it from different positions, and m and m' are the image positions of P in the two cameras.
Based on the matching relationship between pixels in the two images, the offset between corresponding pixels is computed and the three-dimensional information of the object is obtained by the principle of triangulation. FIG. 7 is a second schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in FIG. 7, P is a point in space, Ol and Or are the optical centers of the left and right cameras respectively, and xl and xr are the imaging points on the left and right. The disparity of the imaging points of P in the left and right images is d = xl - xr, and the distance Z of point P is calculated with the following equation:
Z = f·T / d = f·T / (xl - xr)
where f is the focal length of the two digital cameras in the stereoscopic imaging apparatus (the two cameras are assumed here to have the same focal length), and T is the spacing (baseline) between the two digital cameras.
The stereo matching algorithm is mainly concerned with finding the correspondence between xl and xr. FIG. 8 is a third schematic diagram of the basic principle of binocular ranging according to an embodiment of the present invention. As shown in FIG. 8, for a point p in the reference image, the other image is scanned to find the pixel q most similar to p. The similarity criterion for a match is that the difference between the local grey-level windows around the two pixels is minimal.
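A minimal sketch of this window-matching criterion, using the sum of absolute grey-level differences as the window difference (one common choice; the text does not fix a specific metric):

```python
import numpy as np

def match_along_row(left, right, y, x, half=2, max_disp=16):
    """Find the disparity for pixel (y, x) of the left image by scanning
    the same row of the right image and minimising the sum of absolute
    grey-level differences between local windows."""
    patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
    best_d, best_cost = 0, None
    for d in range(max_disp):
        if x - d - half < 0:  # candidate window would leave the image
            break
        cand = right[y - half:y + half + 1,
                     x - d - half:x - d + half + 1].astype(np.int32)
        cost = np.abs(patch - cand).sum()
        if best_cost is None or cost < best_cost:
            best_d, best_cost = d, cost
    return best_d
```

For rectified images the scan can stay on a single row, which is what keeps this search cheap; real systems add sub-pixel refinement and consistency checks on top.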
Once the depth-of-field information of an object is obtained, the actual distance between the object and the camera, the three-dimensional size of the object, and the actual distance between two points can all be calculated. A depth sensor, by contrast, obtains the range information of the scene by actively emitting infrared light and measuring its reflection.
Because the subject of a scene and the background region lie at different distances from the camera, their depth values also differ. This provides a spatial reference feature that benefits the subsequent separation of subject and background and improves the accuracy of the image segmentation algorithm.
Traditional image segmentation algorithms operate in the 2D plane and lack the important spatial-distance information of the scene, so they usually struggle to precisely separate the background and the subject. By using the scene depth information in combination with classical algorithms such as graph cut or mean-shift, subject and background segmentation can be performed.
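A coarse sketch of depth-assisted foreground/background separation. The midpoint threshold is an illustrative assumption; a real pipeline would refine this mask with graph cut or mean-shift as described above:

```python
import numpy as np

def split_by_depth(depth_map, threshold=None):
    """Coarse subject/background mask from a depth map: the subject is
    assumed nearer than the background, so pixels closer than a
    threshold (here the midpoint of the depth range, an illustrative
    default) count as foreground."""
    if threshold is None:
        threshold = (depth_map.min() + depth_map.max()) / 2.0
    return depth_map < threshold
```

The point of the depth cue is that this one-line threshold already gives a rough mask that 2D-only segmentation cannot, leaving the classical algorithms only the boundary refinement.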
After the image segmentation algorithm has produced the different image regions, the contours of the regions still need to be extracted and the internal holes filled by morphological operations, to ensure the integrity of the segmented regions.
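The hole-filling part of this morphological cleanup can be sketched as a border flood fill; this is one standard way to implement it, not necessarily the embodiment's:

```python
import numpy as np

def fill_holes(mask):
    """Fill interior holes in a binary mask: flood-fill the background
    from the image border; any background pixel not reached is a hole
    and becomes foreground."""
    h, w = mask.shape
    outside = np.zeros_like(mask, dtype=bool)
    # Seed the flood fill with every background pixel on the border.
    stack = [(y, x) for y in range(h) for x in (0, w - 1) if not mask[y, x]]
    stack += [(y, x) for x in range(w) for y in (0, h - 1) if not mask[y, x]]
    while stack:
        y, x = stack.pop()
        if outside[y, x] or mask[y, x]:
            continue
        outside[y, x] = True
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and not outside[ny, nx] and not mask[ny, nx]:
                stack.append((ny, nx))
    # Foreground = original mask plus everything unreachable from outside.
    return mask | ~outside
```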
An embodiment of the present invention also provides a photo processing device. FIG. 9 is a block diagram of the photo processing device according to an embodiment of the present invention. As shown in FIG. 9, the device includes:
an identification module 92, configured to identify all target persons in a collective photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of the photographed object, the second camera is used to photograph the object, and the photo is synthesized from the information captured by the first camera and the second camera;
a first acquisition module 94, configured to obtain, by means of the first camera and the second camera, the depth-of-field information of all target persons in the collective photo;
a determining module 96, configured to determine the position information of all target persons in the collective photo according to the depth-of-field information, and to mark the position information of all target persons.
Further, the first acquisition module 94 is also configured to obtain the depth information of the scene by a binocular ranging platform or a depth sensor.
FIG. 10 is a first block diagram of the photo processing device according to a preferred embodiment of the present invention. As shown in FIG. 10, the device further includes:
a second acquisition module 102, configured to obtain the relevant information of all target persons from a background database, wherein the relevant information includes name and gender.
FIG. 11 is a second block diagram of the photo processing device according to a preferred embodiment of the present invention. As shown in FIG. 11, the determining module 96 includes:
a labeling unit 112, configured to display the positions and the relevant information of all target persons as labels on the collective photo.
An embodiment of the present invention also provides a terminal including the above device.
According to the embodiments of the present invention, all target persons in a collective photo taken with dual cameras are identified, the depth-of-field information of all target persons in the photo is obtained via the dual cameras, the position information of all target persons in the collective photo is determined from that depth-of-field information, and the positions are marked. This solves the time-consuming problem in the related art of requiring the user to manually label person information in a collective photo, achieves automatic labeling of target-person information, and improves the user experience.
It should be noted that, in this document, the terms "comprise", "include", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not expressly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element qualified by the phrase "including a ..." does not exclude the presence of other identical elements in the process, method, article, or device that includes that element.
The above embodiment numbers of the present invention are for description only and do not represent the merits of the embodiments.
Through the description of the above embodiments, those skilled in the art will clearly understand that the methods of the above embodiments may be implemented by software plus a necessary general-purpose hardware platform, or, of course, by hardware, but in many cases the former is the preferred implementation. Based on this understanding, the technical solution of the present invention, or the part that contributes to the prior art, may be embodied in the form of a software product stored in a storage medium (such as ROM/RAM, magnetic disk, or optical disc) and including several instructions that cause a terminal device (which may be a mobile phone, computer, server, air conditioner, network device, or the like) to perform the method described in each embodiment of the present invention.
Obviously, those skilled in the art should understand that each of the above modules or steps of the present invention may be implemented by a general-purpose computing device; they may be concentrated on a single computing device or distributed over a network formed by multiple computing devices. Optionally, they may be implemented with program code executable by a computing device, so that they may be stored in a storage device and executed by the computing device; in some cases, the steps shown or described may be performed in an order different from that given here, or they may be fabricated separately as individual integrated circuit modules, or multiple modules or steps among them may be fabricated as a single integrated circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
The above are only preferred embodiments of the present invention and do not thereby limit the scope of the claims. Any equivalent structural or flow transformation made using the contents of the description and drawings of the present invention, or any direct or indirect application in other related technical fields, is likewise included within the scope of protection of the present invention.
Claims (10)
1. A photo processing method, characterized by comprising:
identifying all target persons in a collective photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of a photographed object, the second camera is used to photograph the object, and the photo is synthesized from the information captured by the first camera and the second camera;
obtaining, by means of the first camera and the second camera, depth-of-field information of all target persons in the collective photo;
determining position information of all target persons in the collective photo according to the depth-of-field information, and marking the position information of all target persons.
2. The method according to claim 1, characterized in that obtaining the depth-of-field information of all target persons in the collective photo by means of the first camera and the second camera comprises:
obtaining depth information of the scene by a binocular ranging platform or a depth sensor.
3. The method according to claim 1 or 2, characterized in that, after all target persons in the collective photo taken by the first camera and the second camera have been identified, the method further comprises:
obtaining relevant information of all target persons from a background database, wherein the relevant information includes name and gender.
4. The method according to claim 3, characterized in that marking the position information of all target persons comprises:
displaying the positions and the relevant information of all target persons as labels on the collective photo.
5. The method according to claim 4, characterized in that marking the position information of all target persons comprises:
indicating the positions and the relevant information of all target persons on the back or at the bottom of the collective photo.
6. A photo processing device, characterized by comprising:
an identification module, configured to identify all target persons in a collective photo taken by a first camera and a second camera, wherein the first camera is used to measure the depth of field of a photographed object, the second camera is used to photograph the object, and the photo is synthesized from the information captured by the first camera and the second camera;
a first acquisition module, configured to obtain, by means of the first camera and the second camera, depth-of-field information of all target persons in the collective photo;
a determining module, configured to determine position information of all target persons in the collective photo according to the depth-of-field information, and to mark the position information of all target persons.
7. The device according to claim 6, characterized in that the first acquisition module is further configured to obtain depth information of the scene by a binocular ranging platform or a depth sensor.
8. The device according to claim 6 or 7, characterized in that the device further comprises:
a second acquisition module, configured to obtain relevant information of all target persons from a background database, wherein the relevant information includes name and gender.
9. The device according to claim 8, characterized in that the determining module comprises:
a labeling unit, configured to display the positions and the relevant information of all target persons as labels on the collective photo.
10. A terminal, characterized by comprising the device according to any one of claims 6 to 9.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201611225193.6A CN106791119B (en) | 2016-12-27 | 2016-12-27 | Photo processing method and device and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106791119A true CN106791119A (en) | 2017-05-31 |
CN106791119B CN106791119B (en) | 2020-03-27 |
Family
ID=58921271
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201611225193.6A Active CN106791119B (en) | 2016-12-27 | 2016-12-27 | Photo processing method and device and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106791119B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106651762A (en) * | 2016-12-27 | 2017-05-10 | 努比亚技术有限公司 | Photo processing method, device and terminal |
CN108259770A (en) * | 2018-03-30 | 2018-07-06 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN108413584A (en) * | 2018-02-11 | 2018-08-17 | 四川虹美智能科技有限公司 | A kind of air-conditioning and its control method |
CN112532881A (en) * | 2020-11-26 | 2021-03-19 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
Citations (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20080010060A1 (en) * | 2006-06-09 | 2008-01-10 | Yasuharu Asano | Information Processing Apparatus, Information Processing Method, and Computer Program |
CN102033958A (en) * | 2010-12-28 | 2011-04-27 | Tcl数码科技(深圳)有限责任公司 | Photo sort management system and method |
CN102472882A (en) * | 2009-07-27 | 2012-05-23 | 佳能株式会社 | Image pickup apparatus that performs automatic focus control and control method for the image pickup apparatus |
CN103167294A (en) * | 2011-12-12 | 2013-06-19 | 豪威科技有限公司 | Imaging system and method having extended depth of field |
CN103412951A (en) * | 2013-08-22 | 2013-11-27 | 四川农业大学 | Individual-photo-based human network correlation analysis and management system and method |
CN103747180A (en) * | 2014-01-07 | 2014-04-23 | 宇龙计算机通信科技(深圳)有限公司 | Photo shooting method and photographing terminal |
CN104992120A (en) * | 2015-06-18 | 2015-10-21 | 广东欧珀移动通信有限公司 | Picture encryption method and mobile terminal |
CN105404395A (en) * | 2015-11-25 | 2016-03-16 | 北京理工大学 | Stage performance assisted training method and system based on augmented reality technology |
CN105791685A (en) * | 2016-02-29 | 2016-07-20 | 广东欧珀移动通信有限公司 | Control method, control device and electronic device |
CN105827961A (en) * | 2016-03-22 | 2016-08-03 | 努比亚技术有限公司 | Mobile terminal and focusing method |
CN106060522A (en) * | 2016-06-29 | 2016-10-26 | 努比亚技术有限公司 | Video image processing device and method |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106651762A (en) * | 2016-12-27 | 2017-05-10 | 努比亚技术有限公司 | Photo processing method, device and terminal |
CN108413584A (en) * | 2018-02-11 | 2018-08-17 | 四川虹美智能科技有限公司 | A kind of air-conditioning and its control method |
CN108259770A (en) * | 2018-03-30 | 2018-07-06 | 广东欧珀移动通信有限公司 | Image processing method, device, storage medium and electronic equipment |
CN108259770B (en) * | 2018-03-30 | 2020-06-02 | Oppo广东移动通信有限公司 | Image processing method, image processing device, storage medium and electronic equipment |
CN112532881A (en) * | 2020-11-26 | 2021-03-19 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
CN112532881B (en) * | 2020-11-26 | 2022-07-05 | 维沃移动通信有限公司 | Image processing method and device and electronic equipment |
Also Published As
Publication number | Publication date |
---|---|
CN106791119B (en) | 2020-03-27 |
Legal Events
Date | Code | Title | Description
---|---|---|---
 | PB01 | Publication | 
 | SE01 | Entry into force of request for substantive examination | 
 | GR01 | Patent grant | 