CN106385537A - Photographing method and terminal - Google Patents
Photographing method and terminal
- Publication number
- CN106385537A CN106385537A CN201610832486.4A CN201610832486A CN106385537A CN 106385537 A CN106385537 A CN 106385537A CN 201610832486 A CN201610832486 A CN 201610832486A CN 106385537 A CN106385537 A CN 106385537A
- Authority
- CN
- China
- Prior art keywords
- photographing instruction
- user
- identification information
- user title
- title
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Withdrawn
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/67—Focus control based on electronic image sensor signals
-
- G—PHYSICS
- G10—MUSICAL INSTRUMENTS; ACOUSTICS
- G10L—SPEECH ANALYSIS TECHNIQUES OR SPEECH SYNTHESIS; SPEECH RECOGNITION; SPEECH OR VOICE PROCESSING TECHNIQUES; SPEECH OR AUDIO CODING OR DECODING
- G10L15/00—Speech recognition
- G10L15/22—Procedures used during a speech recognition process, e.g. man-machine dialogue
- G10L2015/223—Execution procedure of a spoken command
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Computational Linguistics (AREA)
- Health & Medical Sciences (AREA)
- Audiology, Speech & Language Pathology (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- Acoustics & Sound (AREA)
- Telephonic Communication Services (AREA)
Abstract
An embodiment of the invention discloses a photographing method and a terminal. The method comprises the steps of: obtaining a voice photographing instruction, where the instruction is used to acquire a target object to be photographed in a preview image and includes identification information of the target object; searching a preset comparison database for a user name matching the identification information in the voice photographing instruction, where the preset comparison database is a mapping database of user names to standard head portraits corresponding to those user names; if a matching user name is found, identifying, by face recognition technology, the target object corresponding to the matched user name, the target object being the object that matches the standard head portrait corresponding to the matched user name; and photographing the target object. Recognition accuracy and photographing efficiency during autofocus photographing are thereby improved.
Description
Technical field
The present invention relates to the field of electronic technology, and in particular to a photographing method and a terminal.
Background technology
With the development of electronic technology, most terminals on the market support automatic face recognition for focusing or photographing. However, when a terminal focuses or photographs using automatic face recognition in a complex environment or a crowded place, it will recognize the faces of many unrelated persons, so recognition accuracy is low. When recognition is inaccurate, the user must tap the screen again with a finger or cancel the face-recognition setting, which reduces photographing efficiency.

In summary, existing terminals suffer from low recognition accuracy and low photographing efficiency when a user focuses or photographs with automatic face recognition in a complex environment.
Content of the invention
Embodiments of the present invention provide a photographing method and a terminal, which can improve recognition accuracy and photographing efficiency when a user performs autofocus photographing with a terminal in a complex environment.
In a first aspect, an embodiment of the present invention provides a photographing method, the method comprising:

obtaining a voice photographing instruction, where the voice photographing instruction is used to acquire a target object to be photographed in a preview image and includes identification information of the target object;

searching a preset comparison database for a user name matching the identification information included in the voice photographing instruction, where the preset comparison database is a mapping database of user names to standard head portraits corresponding to the user names;

if a user name matching the identification information included in the voice photographing instruction is found, identifying, by face recognition technology, the target object corresponding to the matched user name, where the target object is the object matching the standard head portrait corresponding to the matched user name; and

photographing the target object.
In another aspect, an embodiment of the present invention provides a terminal, the terminal comprising:

an acquiring unit, configured to obtain a voice photographing instruction, where the voice photographing instruction is used to acquire a target object to be photographed in a preview image and includes identification information of the target object;

a searching unit, configured to search a preset comparison database for a user name matching the identification information included in the voice photographing instruction, where the preset comparison database is a mapping database of user names to standard head portraits corresponding to the user names;

a recognition unit, configured to, if a user name matching the identification information included in the voice photographing instruction is found, identify, by face recognition technology, the target object corresponding to the matched user name, where the target object is the object matching the standard head portrait corresponding to the matched user name; and

a photographing unit, configured to photograph the target object.
In the embodiments of the present invention, a voice photographing instruction is obtained; a preset comparison database is searched for a user name matching the identification information included in the instruction; if such a user name is found, the target object corresponding to the matched user name is identified by face recognition technology; and the target object is then photographed. The terminal thus photographs a specific user according to the obtained voice photographing instruction, which improves recognition accuracy and photographing efficiency when a user performs autofocus photographing with a terminal in a complex environment.
Brief description
To describe the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Apparently, the drawings in the following description show some embodiments of the present invention, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of a photographing method according to Embodiment 1 of the present invention;

Fig. 2 is a schematic flowchart of a photographing method according to Embodiment 2 of the present invention;

Fig. 3 is a schematic block diagram of a terminal according to Embodiment 3 of the present invention;

Fig. 4 is a schematic block diagram of a terminal according to Embodiment 4 of the present invention;

Fig. 5 is a schematic block diagram of a terminal according to Embodiment 5 of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Apparently, the described embodiments are only some rather than all of the embodiments of the present invention. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
It should be understood that when used in this specification and the appended claims, the term "comprising" indicates the presence of the described features, wholes, steps, operations, elements, and/or components, but does not preclude the presence or addition of one or more other features, wholes, steps, operations, elements, components, and/or sets thereof.
It should also be understood that the terms used in the description of the present invention are for the purpose of describing particular embodiments only and are not intended to limit the present invention. As used in the description of the present invention and the appended claims, the singular forms "a", "an", and "the" are intended to include the plural forms as well, unless the context clearly indicates otherwise.
It should be further understood that the term "and/or" used in the description of the present invention and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted, depending on the context, as "when", "once", "in response to determining", or "in response to detecting". Similarly, the phrases "if it is determined" or "if [the described condition or event] is detected" may be interpreted, depending on the context, as "once it is determined", "in response to determining", "once [the described condition or event] is detected", or "in response to detecting [the described condition or event]".
In specific implementations, the terminal described in the embodiments of the present invention includes, but is not limited to, portable devices such as mobile phones, laptop computers, or tablet computers having touch-sensitive surfaces (for example, touch-screen displays and/or touch pads). It should also be understood that in some embodiments the device is not a portable communication device but a desktop computer having a touch-sensitive surface (for example, a touch-screen display and/or a touch pad).

In the following discussion, a terminal including a display and a touch-sensitive surface is described. It should be understood, however, that the terminal may include one or more other physical user-interface devices such as a physical keyboard, a mouse, and/or a joystick.
The terminal supports various applications, for example one or more of the following: a drawing application, a presentation application, a word-processing application, a website-creation application, a disc-burning application, a spreadsheet application, a game application, a telephone application, a video-conferencing application, an e-mail application, an instant-messaging application, an exercise-support application, a photo-management application, a digital-camera application, a digital-video-camera application, a web-browsing application, a digital-music-player application, and/or a video-player application.
The various applications executable on the terminal may use at least one common physical user-interface device such as the touch-sensitive surface. One or more functions of the touch-sensitive surface and the corresponding information displayed on the terminal may be adjusted and/or changed between applications and/or within a corresponding application. In this way, a common physical architecture of the terminal (for example, the touch-sensitive surface) can support the various applications with user interfaces that are intuitive and transparent to the user.
Referring to Fig. 1, Fig. 1 is a schematic flowchart of a photographing method according to Embodiment 1 of the present invention. The execution subject of the photographing method in this embodiment is a terminal, which may be a mobile terminal such as a mobile phone or a tablet computer. As shown in Fig. 1, the photographing method may include the following steps:

S101: Obtain a voice photographing instruction, where the voice photographing instruction is used to acquire a target object to be photographed in a preview image and includes identification information of the target object.

In the embodiments of the present invention, the voice photographing instruction may be issued either by the photographer or by the person being photographed; this is not limited here.
S102: Search a preset comparison database for a user name matching the identification information included in the voice photographing instruction, where the preset comparison database is a mapping database of user names to standard head portraits corresponding to the user names.

In the embodiments of the present invention, the standard head portrait corresponding to a user name is a head portrait that clearly shows the frontal facial features of that user.
S103: If a user name matching the identification information included in the voice photographing instruction is found, identify, by face recognition technology, the target object corresponding to the matched user name, where the target object is the object matching the standard head portrait corresponding to the matched user name.

In the embodiments of the present invention, the target object is the object whose facial features match the facial features of the standard head portrait corresponding to the matched user name.
S104: Photograph the target object.

In the embodiments of the present invention, after the terminal recognizes the target object, the terminal enables its autofocus technology to photograph the target object and stores the captured photo after photographing is finished.
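Steps S101 to S104 can be sketched as a simple pipeline. The following is an illustrative sketch only, not the patented implementation: the helper functions (`find_face_matching`, `capture`) and the dictionary-based database are hypothetical stand-ins for the speech-analysis, face-recognition, and autofocus technology the text refers to.

```python
def find_face_matching(portrait, preview_image):
    # Hypothetical face-recognition stand-in: returns the face region in
    # the preview whose stored portrait matches the standard portrait.
    return next((face for face in preview_image
                 if face["portrait"] == portrait), None)

def capture(target):
    # Hypothetical autofocus-and-shoot stand-in for S104.
    return {"photo_of": target["name"]} if target else None

def take_photo_by_voice(voice_instruction, comparison_db, preview_image):
    """comparison_db maps user name -> standard head portrait (the S102 mapping database)."""
    # S101/S102: treat the spoken identification information as a name and
    # look it up in the preset comparison database.
    identification = voice_instruction.strip().lower()
    matched_name = next((name for name in comparison_db
                         if name.lower() == identification), None)
    if matched_name is None:
        return None  # no match; Embodiment 2 reacquires the instruction
    # S103: locate the target object via its standard head portrait.
    target = find_face_matching(comparison_db[matched_name], preview_image)
    # S104: autofocus on the target and take the photo.
    return capture(target)
```

Only the face matching the registered portrait is photographed; other faces in the preview are ignored, which is the source of the claimed accuracy improvement in crowded scenes.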
In the embodiments of the present invention, the terminal obtains a voice photographing instruction, searches a preset comparison database for a user name matching the identification information included in the instruction, and, if such a user name is found, identifies the target object corresponding to the matched user name by face recognition technology and then photographs the target object. The terminal thus photographs a specific user according to the obtained voice photographing instruction, which improves recognition accuracy and photographing efficiency when a user performs autofocus photographing with a terminal in a complex environment.
Referring to Fig. 2, Fig. 2 is a schematic flowchart of a photographing method according to Embodiment 2 of the present invention. The execution subject of the photographing method in this embodiment is a terminal, which may be a mobile terminal such as a mobile phone or a tablet computer. As shown in Fig. 2, the photographing method may include the following steps:

S201: Collect user names and standard head portraits corresponding to the user names, and build the preset comparison database.

In the embodiments of the present invention, the terminal collects the user names and corresponding standard head portraits that the user enters via the touch display screen or keys, and builds the comparison database according to the one-to-one mapping relationship between each user name and its corresponding standard head portrait.
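The one-to-one mapping that S201 builds can be sketched as a minimal key-value store. This is an illustrative sketch under the assumption that a name maps to exactly one portrait; the portrait values are placeholders for actual face images or feature data.

```python
class ComparisonDatabase:
    """Minimal sketch of the S201 preset comparison database:
    a one-to-one mapping from user name to standard head portrait."""

    def __init__(self):
        self._name_to_portrait = {}

    def add_user(self, name, standard_portrait):
        # One-to-one mapping: re-registering a name replaces its
        # previously stored standard head portrait.
        self._name_to_portrait[name] = standard_portrait

    def lookup(self, name):
        # Returns the standard head portrait for a name, or None.
        return self._name_to_portrait.get(name)

db = ComparisonDatabase()
db.add_user("Xiao Ming", "front_face_features_xm")
```

A real terminal would populate the entries from touch-screen or key input as the text describes; the dictionary stands in for whatever persistent storage the terminal uses.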
S202: Obtain a voice photographing instruction, where the voice photographing instruction is used to acquire a target object to be photographed in a preview image and includes identification information of the target object.

In the embodiments of the present invention, the voice photographing instruction may be issued either by the photographer or by the person being photographed; this is not limited here.
S203: Search a preset comparison database for a user name matching the identification information included in the voice photographing instruction, where the preset comparison database is a mapping database of user names to standard head portraits corresponding to the user names.

In the embodiments of the present invention, the standard head portrait corresponding to a user name is a head portrait that clearly shows the frontal facial features of that user.
Further, searching the preset comparison database for a user name matching the identification information included in the voice photographing instruction is specifically:

performing speech analysis on the voice photographing instruction to obtain the identification information, and searching the preset comparison database for a user name matching the identification information.

In the embodiments of the present invention, after the terminal obtains the voice photographing instruction, it first performs speech analysis on the instruction to obtain the identification information the instruction contains, performs speech-to-text conversion on the identification information, and then searches the preset database for the user name matching the text corresponding to the identification information.
For example, suppose the voice photographing instruction is an instruction to photograph Xiao Ming, where the voice information "Xiao Ming" is the identification information included in the instruction. After the terminal obtains this voice photographing instruction, it performs speech analysis on the instruction to obtain the voice information "Xiao Ming", performs speech-to-text conversion to obtain the corresponding text "Xiao Ming", and searches the preset database for the user name matching that text.
S204: If a user name matching the identification information included in the voice photographing instruction is found, identify, by face recognition technology, the target object corresponding to the matched user name, where the target object is the object matching the standard head portrait corresponding to the matched user name.

In the embodiments of the present invention, the target object is the object whose facial features match the facial features of the standard head portrait corresponding to the matched user name.
Further, if a user name matching the identification information included in the voice photographing instruction is found, identifying the target object corresponding to the matched user name by face recognition technology is specifically:

if a user name matching the identification information included in the voice photographing instruction is found, searching the comparison database for the standard head portrait corresponding to the matched user name; and

identifying, according to the standard head portrait, the target object matching the standard head portrait.

In the embodiments of the present invention, the terminal first searches the comparison database for the standard head portrait corresponding to the matched user name, and then, according to the facial features of that standard head portrait, finds in the preview image the face matching those facial features.
For example, after the terminal obtains the user name "Xiao Ming", it searches the comparison database for the standard head portrait corresponding to Xiao Ming, and then, according to the facial features of Xiao Ming's standard head portrait, finds in the preview image the face matching those features; that is, the terminal locates Xiao Ming's face in the preview image.
S205: Photograph the target object.

In the embodiments of the present invention, after the target object is found by face recognition technology, the terminal enables its autofocus technology to photograph the target object and stores the captured photo after photographing is finished.
Further, the photographing method also includes:

S206: If no user name matching the identification information included in the voice photographing instruction is found, reacquire the voice photographing instruction; or prompt the user to update the preset comparison database.

In the embodiments of the present invention, the terminal may prompt the user, by voice playback, text display, image display, or the like, to repeat the voice instruction, and reacquire the voice photographing instruction after the user repeats it. In addition, the terminal may prompt the user, by voice playback, text display, image display, or the like, to enter a new user name and a standard head portrait corresponding to that user name, and update the comparison database in time according to the reacquired user name and corresponding standard head portrait.
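The S206 fallback can be sketched as a retry loop followed by an update prompt. The retry count and the callable hooks (`acquire_instruction`, `match_user`, `prompt_update`) are hypothetical; the text does not specify how many times the terminal reacquires before prompting.

```python
def photograph_with_fallback(acquire_instruction, match_user, prompt_update,
                             max_retries=2):
    """Reacquire the voice instruction while no user name matches;
    after the retries are exhausted, prompt the user to update the
    preset comparison database (S206)."""
    for _ in range(max_retries + 1):
        instruction = acquire_instruction()
        name = match_user(instruction)
        if name is not None:
            return name  # matched: proceed to face recognition (S204)
    prompt_update()  # e.g. voice/text prompt to register a new user entry
    return None
```

Whether the terminal retries or prompts immediately is a design choice; the text presents the two responses as alternatives ("or"), so combining them sequentially here is an assumption.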
In the embodiments of the present invention, the terminal obtains a voice photographing instruction, searches a preset comparison database for a user name matching the identification information included in the instruction, and, if such a user name is found, identifies the target object corresponding to the matched user name by face recognition technology and then photographs the target object, so that the terminal photographs a specific user according to the obtained voice photographing instruction. That is, by matching the voice label against the stored head-portrait data, the terminal can locate the face of the person being photographed by voice, thereby achieving autofocus photographing, reducing the misrecognition caused by complex photographing environments, and improving recognition accuracy and photographing efficiency when a user performs autofocus photographing with a terminal in a complex environment.
In addition, when no user name matching the identification information included in the voice photographing instruction is found, the terminal reacquires the voice photographing instruction or prompts the user to update the preset comparison database, so that it can find the person being photographed again according to the reacquired voice photographing instruction or the updated comparison database. This further improves recognition accuracy and photographing efficiency when a user performs autofocus photographing with a terminal in a complex environment.
Referring to Fig. 3, Fig. 3 is a schematic block diagram of a terminal 3 according to an embodiment of the present invention. Terminal 3 may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto and may also be another type of terminal; this is not limited here. The units included in terminal 3 of this embodiment are configured to perform the steps in the embodiment corresponding to Fig. 1; for details, refer to Fig. 1 and the related description of that embodiment, which is not repeated here. Terminal 3 of this embodiment includes an acquiring unit 310, a searching unit 320, a recognition unit 330, and a photographing unit 340.
The acquiring unit 310 is configured to obtain a voice photographing instruction, where the voice photographing instruction is used to acquire a target object to be photographed in a preview image and includes identification information of the target object.

For example, the acquiring unit 310 obtains the voice photographing instruction and, after obtaining it, sends the voice photographing instruction to the searching unit 320.
The searching unit 320 is configured to receive the voice photographing instruction sent by the acquiring unit 310 and search a preset comparison database for a user name matching the identification information included in the instruction, where the preset comparison database is a mapping database of user names to standard head portraits corresponding to the user names.

For example, the searching unit 320 receives the voice photographing instruction sent by the acquiring unit 310 and searches the preset comparison database for a user name matching the identification information included in the instruction. After receiving the instruction and finding the matching user name, the searching unit 320 sends a notification message to the recognition unit 330.
The recognition unit 330 is configured to receive the notification message sent by the searching unit 320 and, if a user name matching the identification information included in the voice photographing instruction is found, identify, by face recognition technology, the target object corresponding to the matched user name, where the target object is the object matching the standard head portrait corresponding to the matched user name.

For example, the recognition unit 330 receives the notification message sent by the searching unit 320; if the notification message indicates that a user name matching the identification information included in the voice photographing instruction has been found, the recognition unit 330 identifies the target object corresponding to the matched user name by face recognition technology. After identifying the target object, the recognition unit 330 sends the target object to the photographing unit 340.
The photographing unit 340 is configured to receive the target object sent by the recognition unit 330 and photograph the target object.

For example, the photographing unit 340 receives the target object sent by the recognition unit 330 and photographs it.
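The unit structure of terminal 3 can be sketched as a chain of callables. The class below is an illustrative software model only; the patent describes functional units of a terminal, and the face-recognition and camera hooks here are hypothetical stand-ins.

```python
class Terminal3:
    """Sketch of terminal 3's unit chain: acquiring -> searching ->
    recognition -> photographing. `recognize` and `shoot` are
    hypothetical hooks standing in for the recognition unit's face
    recognition and the photographing unit's camera."""

    def __init__(self, comparison_db, recognize, shoot):
        self.comparison_db = comparison_db  # user name -> standard portrait
        self.recognize = recognize
        self.shoot = shoot

    def acquiring_unit(self, voice_instruction):
        # Unit 310: obtain the voice photographing instruction.
        return voice_instruction.strip()

    def searching_unit(self, identification):
        # Unit 320: look up the matching user name in the database.
        return next((n for n in self.comparison_db
                     if n.lower() == identification.lower()), None)

    def handle(self, voice_instruction):
        ident = self.acquiring_unit(voice_instruction)
        name = self.searching_unit(ident)
        if name is None:
            return None
        target = self.recognize(self.comparison_db[name])  # unit 330
        return self.shoot(target)                          # unit 340
```

The notification-message passing between units in the text is modeled here simply as method calls returning values.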
In the embodiments of the present invention, terminal 3 obtains a voice photographing instruction, searches a preset comparison database for a user name matching the identification information included in the instruction, and, if such a user name is found, identifies the target object corresponding to the matched user name by face recognition technology and then photographs the target object. The terminal thus photographs a specific user according to the obtained voice photographing instruction, which improves recognition accuracy and photographing efficiency when a user performs autofocus photographing with a terminal in a complex environment.
Referring to Fig. 4, Fig. 4 is a schematic block diagram of a terminal 4 according to an embodiment of the present invention. Terminal 4 may be a mobile terminal such as a mobile phone or a tablet computer, but is not limited thereto and may also be another type of terminal; this is not limited here. The units included in terminal 4 of this embodiment are configured to perform the steps in the embodiment corresponding to Fig. 2; for details, refer to Fig. 2 and the related description of that embodiment, which is not repeated here. Terminal 4 of this embodiment includes a collecting unit 410, an acquiring unit 420, a searching unit 430, a recognition unit 440, a photographing unit 450, and a prompting unit 460.
Collecting unit 410 is configured to collect user names and the standard face images corresponding to those user names, and to build the preset comparison database.
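What the collecting unit builds can be sketched as a simple mapping. The sample data and the placeholder "feature" vectors are hypothetical; in practice the values would be face images or features produced by the terminal's face recognition engine.

```python
def build_comparison_db(samples):
    """Build the preset comparison (mapping) database from collected
    (user_name, standard_face_features) pairs."""
    db = {}
    for name, features in samples:
        # A later sample for the same user name overwrites the earlier
        # one, i.e. re-collection updates the database.
        db[name] = features
    return db

db = build_comparison_db([("alice", [0.9, 0.1]), ("bob", [0.2, 0.8])])
print(sorted(db))  # ['alice', 'bob']
```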
Acquiring unit 420 is configured to obtain a voice photographing instruction, where the voice photographing instruction is used to obtain, in the preview image, the target object to be photographed, and includes identification information of the target object.
For example, acquiring unit 420 obtains a voice photographing instruction and, after obtaining it, sends it to searching unit 430.
Searching unit 430 is configured to receive the voice photographing instruction sent by acquiring unit 420 and to search the preset comparison database for a user name matching the identification information contained in the instruction, where the preset comparison database is a mapping database of user names and the standard face images corresponding to those user names.
For example, searching unit 430 receives the voice photographing instruction sent by acquiring unit 420 and searches the preset comparison database for a user name matching the identification information contained in the instruction. After receiving the instruction and performing the search, searching unit 430 sends a notification message to recognition unit 440 and prompt unit 460.
Further, searching unit 430 is also configured to perform speech analysis on the voice photographing instruction to obtain the identification information, and to search the preset comparison database for a user name matching that identification information.
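The name-matching step can tolerate small speech-recognition errors. As an illustration only, `difflib.get_close_matches` from the Python standard library stands in for whatever matching the terminal actually applies; the database names and the cutoff value are assumptions.

```python
import difflib

# User names present in the preset comparison database (hypothetical).
DB_NAMES = ["alice", "bob", "carol"]

def match_user(identification: str):
    """Match the identification information extracted by speech analysis
    against the user names in the database, allowing near misses such as
    a slightly mis-recognized name."""
    hits = difflib.get_close_matches(identification.lower(), DB_NAMES,
                                     n=1, cutoff=0.6)
    return hits[0] if hits else None

print(match_user("Alise"))  # close enough to 'alice'
print(match_user("zzz"))    # no match -> None
```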
Recognition unit 440 is configured to receive the notification message sent by searching unit 430 and, if a user name matching the identification information contained in the voice photographing instruction has been found, to identify, using face recognition technology, the target object corresponding to the matched user name, where the target object is the object that matches the standard face image corresponding to the matched user name.
For example, recognition unit 440 receives the notification message sent by searching unit 430. If the message indicates that a matching user name has been found, recognition unit 440 identifies the corresponding target object using face recognition technology and, after identifying it, sends the target object to photographing unit 450.
Further, recognition unit 440 is also configured to, if a user name matching the identification information contained in the voice photographing instruction is found, search the comparison database for the standard face image corresponding to the matched user name, and to identify, according to that standard face image, the target object that matches it.
Photographing unit 450 is configured to receive the target object sent by recognition unit 440 and to photograph it.
For example, photographing unit 450 receives the target object sent by recognition unit 440 and photographs it.
Prompt unit 460 is configured to receive the notification message sent by searching unit 430 and, if no user name matching the identification information contained in the voice photographing instruction has been found, to reacquire a voice photographing instruction, or to prompt the user to update the preset comparison database.
For example, prompt unit 460 receives the notification message sent by searching unit 430. If the message indicates that no matching user name has been found, the terminal reacquires a voice photographing instruction, or prompts the user to update the preset comparison database.
In this embodiment of the present invention, terminal 4 obtains a voice photographing instruction and searches the preset comparison database for a user name matching the identification information contained in the instruction. If such a user name is found, the terminal identifies, using face recognition technology, the target object corresponding to the matched user name and photographs it, so that the terminal photographs a specific user according to the obtained voice photographing instruction. That is, by matching the voice label against the face-image data, the terminal can locate the subject's face by voice and thereby achieve auto-focus photographing, reducing misidentification caused by complex photographing environments and improving both the recognition accuracy and the photographing efficiency of auto-focus photographing in complex environments.
In addition, when no user name matching the identification information contained in the voice photographing instruction is found, terminal 4 reacquires a voice photographing instruction or prompts the user to update the preset comparison database, so that the terminal can search for the subject again according to the reacquired instruction or the updated database, further improving the recognition accuracy and photographing efficiency of auto-focus photographing in complex environments.
Referring to Fig. 5, Fig. 5 is a schematic block diagram of a terminal provided by a fifth embodiment of the present invention. Terminal 5 in this embodiment may include: one or more processors 510, one or more input devices 520, one or more output devices 530, and a memory 540. Processor 510, input device 520, output device 530, and memory 540 are connected by a bus 550.
Memory 540 is configured to store programs and instructions.
Processor 510 is configured to perform the following operations according to the program instructions stored in memory 540:
Processor 510 is configured to obtain a voice photographing instruction, where the voice photographing instruction is used to obtain, in the preview image, the target object to be photographed, and includes identification information of the target object.
Processor 510 is further configured to search the preset comparison database for a user name matching the identification information contained in the voice photographing instruction, where the preset comparison database is a mapping database of user names and the standard face images corresponding to those user names.
Processor 510 is further configured to, if a user name matching the identification information contained in the voice photographing instruction is found, identify, using face recognition technology, the target object corresponding to the matched user name, where the target object is the object that matches the standard face image corresponding to the matched user name.
Processor 510 is further configured to photograph the target object.
Further, processor 510 is specifically configured to perform speech analysis on the voice photographing instruction to obtain the identification information, and to search the preset comparison database for a user name matching that identification information.
Further, processor 510 is also configured to collect user names and the standard face images corresponding to those user names, and to build the preset comparison database.
Further, processor 510 is specifically configured to, if a user name matching the identification information contained in the voice photographing instruction is found, search the comparison database for the standard face image corresponding to the matched user name, and to identify, according to that standard face image, the target object that matches it.
Further, processor 510 is also configured to, if no user name matching the identification information contained in the voice photographing instruction is found, reacquire a voice photographing instruction, or prompt the user to update the preset comparison database.
In this embodiment of the present invention, terminal 5 obtains a voice photographing instruction and searches the preset comparison database for a user name matching the identification information contained in the instruction. If such a user name is found, the terminal identifies the target object using face recognition technology and photographs it, so that the terminal photographs a specific user according to the obtained voice photographing instruction, improving both the recognition accuracy and the photographing efficiency of auto-focus photographing in complex environments.
It should be understood that, in embodiments of the present invention, processor 510 may be a central processing unit (CPU); it may also be another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
Input device 520 may include a trackpad, a fingerprint sensor (for collecting a user's fingerprint information and fingerprint orientation information), a microphone, and the like; output device 530 may include a display (such as an LCD), a speaker, and the like.
Memory 540 may include read-only memory and random access memory, and provides instructions and data to processor 510. A portion of memory 540 may also include non-volatile random access memory. For example, memory 540 may also store information about the device type.
In a specific implementation, processor 510, input device 520, and output device 530 described in this embodiment of the present invention may execute the implementations described in the first and second embodiments of the method provided by the embodiments of the present invention, and may also execute the implementations of the terminal described in the embodiments of the present invention, which are not repeated here.
A person of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, computer software, or a combination of the two. To clearly illustrate the interchangeability of hardware and software, the composition and steps of each example have been described above generally in terms of function. Whether these functions are executed in hardware or software depends on the particular application and the design constraints of the technical solution. A skilled person may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
A person skilled in the art will clearly understand that, for convenience and brevity of description, for the specific working processes of the terminal and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not repeated here.
It should be understood that the terminal and method disclosed in the several embodiments provided in this application may be implemented in other ways. For example, the device embodiments described above are merely illustrative: the division of the units is only a division by logical function, and there may be other divisions in an actual implementation. For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the mutual couplings, direct couplings, or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices, or units, or may be electrical, mechanical, or other forms of connection.
The steps in the methods of the embodiments of the present invention may be reordered, combined, or deleted according to actual needs.
The units in the terminals of the embodiments of the present invention may be combined, divided, or deleted according to actual needs.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place, or they may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments of the present invention.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist physically alone, or two or more units may be integrated into one unit. The integrated unit may be implemented in the form of hardware, or in the form of a software functional unit.
If the integrated unit is implemented in the form of a software functional unit and sold or used as an independent product, it may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes any medium that can store program code, such as a USB flash drive, a portable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of various equivalent modifications or replacements within the technical scope disclosed by the present invention, and such modifications or replacements shall all fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.
Claims (10)
1. A photographing method, characterized in that the method comprises:
obtaining a voice photographing instruction, wherein the voice photographing instruction is used to obtain, in a preview image, a target object to be photographed, and the voice photographing instruction comprises identification information of the target object;
searching a preset comparison database for a user name matching the identification information contained in the voice photographing instruction, wherein the preset comparison database is a mapping database of user names and standard face images corresponding to the user names;
if a user name matching the identification information contained in the voice photographing instruction is found, identifying, using face recognition technology, the target object corresponding to the matched user name, wherein the target object is the object matching the standard face image corresponding to the matched user name; and
photographing the target object.
2. The method according to claim 1, characterized in that searching the preset comparison database for a user name matching the identification information contained in the voice photographing instruction specifically comprises:
performing speech analysis on the voice photographing instruction to obtain the identification information, and searching the preset comparison database for a user name matching the identification information.
3. The method according to claim 1, characterized in that, before obtaining the voice photographing instruction, the method further comprises:
collecting user names and standard face images corresponding to the user names, and building the preset comparison database.
4. The method according to claim 1, characterized in that, if a user name matching the identification information contained in the voice photographing instruction is found, identifying, using face recognition technology, the target object corresponding to the matched user name specifically comprises:
if a user name matching the identification information contained in the voice photographing instruction is found, searching the comparison database for the standard face image corresponding to the matched user name; and
identifying, according to the standard face image, the target object matching the standard face image.
5. The method according to any one of claims 1 to 4, characterized in that, after identifying, using face recognition technology, the target object corresponding to the matched user name if a user name matching the identification information contained in the voice photographing instruction is found, the method further comprises:
if no user name matching the identification information contained in the voice photographing instruction is found, reacquiring a voice photographing instruction; or
prompting to update the preset comparison database.
6. A terminal, characterized in that the terminal comprises:
an acquiring unit, configured to obtain a voice photographing instruction, wherein the voice photographing instruction is used to obtain, in a preview image, a target object to be photographed, and the voice photographing instruction comprises identification information of the target object;
a searching unit, configured to search a preset comparison database for a user name matching the identification information contained in the voice photographing instruction, wherein the preset comparison database is a mapping database of user names and standard face images corresponding to the user names;
a recognition unit, configured to, if a user name matching the identification information contained in the voice photographing instruction is found, identify, using face recognition technology, the target object corresponding to the matched user name, wherein the target object is the object matching the standard face image corresponding to the matched user name; and
a photographing unit, configured to photograph the target object.
7. The terminal according to claim 6, characterized in that the searching unit is further configured to perform speech analysis on the voice photographing instruction to obtain the identification information, and to search the preset comparison database for a user name matching the identification information.
8. The terminal according to claim 6, characterized in that the terminal further comprises:
a collecting unit, configured to collect user names and standard face images corresponding to the user names, and to build the preset comparison database.
9. The terminal according to claim 6, characterized in that the recognition unit is further configured to:
if a user name matching the identification information contained in the voice photographing instruction is found, search the comparison database for the standard face image corresponding to the matched user name; and
identify, according to the standard face image, the target object matching the standard face image.
10. The terminal according to any one of claims 6 to 9, characterized in that the terminal further comprises:
a prompt unit, configured to, if no user name matching the identification information contained in the voice photographing instruction is found, reacquire a voice photographing instruction; or
prompt to update the preset comparison database.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610832486.4A CN106385537A (en) | 2016-09-19 | 2016-09-19 | Photographing method and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610832486.4A CN106385537A (en) | 2016-09-19 | 2016-09-19 | Photographing method and terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106385537A true CN106385537A (en) | 2017-02-08 |
Family
ID=57936615
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610832486.4A Withdrawn CN106385537A (en) | 2016-09-19 | 2016-09-19 | Photographing method and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106385537A (en) |
Cited By (11)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106803882A (en) * | 2017-02-27 | 2017-06-06 | 宇龙计算机通信科技(深圳)有限公司 | Focus method and its equipment |
CN107231470A (en) * | 2017-05-15 | 2017-10-03 | 努比亚技术有限公司 | Image processing method, mobile terminal and computer-readable recording medium |
CN107820018A (en) * | 2017-11-30 | 2018-03-20 | 广东欧珀移动通信有限公司 | User's photographic method, device and equipment |
CN108052883A (en) * | 2017-11-30 | 2018-05-18 | 广东欧珀移动通信有限公司 | User's photographic method, device and equipment |
CN108712613A (en) * | 2018-05-30 | 2018-10-26 | 信利光电股份有限公司 | A kind of method, apparatus and relevant device of control image acquisition device |
WO2019090717A1 (en) * | 2017-11-10 | 2019-05-16 | 深圳传音通讯有限公司 | Autofocus method and device |
CN110769186A (en) * | 2019-10-28 | 2020-02-07 | 维沃移动通信有限公司 | Video call method, first electronic device and second electronic device |
CN112153477A (en) * | 2020-09-23 | 2020-12-29 | 合肥庐州管家家政服务有限公司 | Service method and system based on video |
CN112364733A (en) * | 2020-10-30 | 2021-02-12 | 重庆电子工程职业学院 | Intelligent security face recognition system |
CN114374815A (en) * | 2020-10-15 | 2022-04-19 | 北京字节跳动网络技术有限公司 | Image acquisition method, device, terminal and storage medium |
WO2022206605A1 (en) * | 2021-03-29 | 2022-10-06 | 华为技术有限公司 | Method for determining target object, and photographing method and device |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040008258A1 (en) * | 2002-07-10 | 2004-01-15 | Aas Eric F. | Face recognition in a digital imaging system accessing a database of people |
CN104038742A (en) * | 2014-06-06 | 2014-09-10 | 上海卓悠网络科技有限公司 | Doorbell system based on face recognition technology |
CN104834905A (en) * | 2015-04-29 | 2015-08-12 | 河南城建学院 | Facial image identification simulation system and method |
CN105704389A (en) * | 2016-04-12 | 2016-06-22 | 上海斐讯数据通信技术有限公司 | Intelligent photo taking method and device |
-
2016
- 2016-09-19 CN CN201610832486.4A patent/CN106385537A/en not_active Withdrawn
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20040008258A1 (en) * | 2002-07-10 | 2004-01-15 | Aas Eric F. | Face recognition in a digital imaging system accessing a database of people |
CN104038742A (en) * | 2014-06-06 | 2014-09-10 | 上海卓悠网络科技有限公司 | Doorbell system based on face recognition technology |
CN104834905A (en) * | 2015-04-29 | 2015-08-12 | 河南城建学院 | Facial image identification simulation system and method |
CN105704389A (en) * | 2016-04-12 | 2016-06-22 | 上海斐讯数据通信技术有限公司 | Intelligent photo taking method and device |
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN106803882A (en) * | 2017-02-27 | 2017-06-06 | 宇龙计算机通信科技(深圳)有限公司 | Focus method and its equipment |
CN107231470A (en) * | 2017-05-15 | 2017-10-03 | 努比亚技术有限公司 | Image processing method, mobile terminal and computer-readable recording medium |
WO2019090717A1 (en) * | 2017-11-10 | 2019-05-16 | 深圳传音通讯有限公司 | Autofocus method and device |
CN108052883B (en) * | 2017-11-30 | 2021-04-09 | Oppo广东移动通信有限公司 | User photographing method, device and equipment |
CN108052883A (en) * | 2017-11-30 | 2018-05-18 | 广东欧珀移动通信有限公司 | User's photographic method, device and equipment |
CN107820018A (en) * | 2017-11-30 | 2018-03-20 | 广东欧珀移动通信有限公司 | User's photographic method, device and equipment |
CN108712613A (en) * | 2018-05-30 | 2018-10-26 | 信利光电股份有限公司 | A kind of method, apparatus and relevant device of control image acquisition device |
CN110769186A (en) * | 2019-10-28 | 2020-02-07 | 维沃移动通信有限公司 | Video call method, first electronic device and second electronic device |
CN112153477A (en) * | 2020-09-23 | 2020-12-29 | 合肥庐州管家家政服务有限公司 | Service method and system based on video |
CN112153477B (en) * | 2020-09-23 | 2022-04-26 | 合肥庐州管家家政服务集团有限公司 | Service method and system based on video |
CN114374815A (en) * | 2020-10-15 | 2022-04-19 | 北京字节跳动网络技术有限公司 | Image acquisition method, device, terminal and storage medium |
CN112364733A (en) * | 2020-10-30 | 2021-02-12 | 重庆电子工程职业学院 | Intelligent security face recognition system |
CN112364733B (en) * | 2020-10-30 | 2022-07-26 | 重庆电子工程职业学院 | Intelligent security face recognition system |
WO2022206605A1 (en) * | 2021-03-29 | 2022-10-06 | 华为技术有限公司 | Method for determining target object, and photographing method and device |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106385537A (en) | Photographing method and terminal | |
US11790914B2 (en) | Methods and user interfaces for voice-based control of electronic devices | |
KR102054633B1 (en) | Devices, methods, and graphical user interfaces for wireless pairing with peripherals and displaying status information about the peripherals | |
US10120469B2 (en) | Vibration sensing system and method for categorizing portable device context and modifying device operation | |
CN109061985B (en) | User interface for camera effect | |
CN105955641B (en) | For the equipment, method and graphic user interface with object interaction | |
CN104487927B (en) | For selecting the equipment, method and graphic user interface of user interface object | |
EP2680110B1 (en) | Method and apparatus for processing multiple inputs | |
RU2703956C1 (en) | Method of managing multimedia files, an electronic device and a graphical user interface | |
CN105955520A (en) | Devices and Methods for Controlling Media Presentation | |
US9661133B2 (en) | Electronic device and method for extracting incoming/outgoing information and managing contacts | |
CN106415542A (en) | Structured suggestions | |
CN109375853A (en) | To equipment, method and the graphic user interface of the navigation of user interface hierarchical structure | |
CN104169857A (en) | Device, method, and graphical user interface for accessing an application in a locked device | |
US11556230B2 (en) | Data detection | |
CN107491283A (en) | For equipment, method and the graphic user interface of the presentation for dynamically adjusting audio output | |
CN107111415B (en) | Equipment, method and graphic user interface for mobile application interface element | |
CN110837557B (en) | Abstract generation method, device, equipment and medium | |
US10915778B2 (en) | User interface framework for multi-selection and operation of non-consecutive segmented information | |
JP2011008556A (en) | Portable communication device and communication device | |
KR20150026382A (en) | Electronic apparatus and method for contacts management in electronic apparatus | |
JP2020035468A (en) | Device, method, and graphical user interface used for moving application interface element | |
CN110399045A (en) | A kind of input method and device |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
WW01 | Invention patent application withdrawn after publication | ||
WW01 | Invention patent application withdrawn after publication |
Application publication date: 20170208 |