CN106959761B - A kind of terminal photographic method, device and terminal - Google Patents
- Publication number
- CN106959761B (application CN201710253812.0A)
- Authority
- CN
- China
- Prior art keywords
- user
- terminal
- target user
- facial image
- moving condition
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F3/00—Input arrangements for transferring data to be processed into a form capable of being handled by the computer; Output arrangements for transferring data from processing unit to output unit, e.g. interface arrangements
- G06F3/01—Input arrangements or combined input and output arrangements for interaction between user and computer
- G06F3/017—Gesture based interaction, e.g. based on a set of recognized hand gestures
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Abstract
The invention discloses a terminal photographing method, apparatus, and terminal. The method includes: obtaining a photographing trigger instruction; obtaining, according to the photographing trigger instruction, person information within the shooting area of the terminal, the person information including a facial image and a gesture image of at least one user; determining a target user from the at least one user according to the facial image of each user; and controlling the terminal to take a photograph according to the gesture image of the target user. With the above terminal photographing method, apparatus, and terminal, a photograph can be taken according to the gesture of a specific person, without the user touching the terminal by hand, and the method is simple.
Description
Technical field
The present invention relates to the field of mobile communication technology, and in particular to a terminal photographing method, apparatus, and terminal.
Background art
With the development of intelligent terminals, cameras have become standard on most mobile terminals, and as camera performance keeps improving, the photographing capability of mobile terminals has grown ever stronger.
At present, a user can trigger a photographing operation in two ways: touching or pressing the shutter button by hand for close-range triggering, or using a selfie stick for remote triggering. In either case, however, the user's hand must remain in a grasping state and cannot strike the desired photographing pose, so the available poses are quite limited. Moreover, when the user presses the shutter button or the selfie stick, the body of the device shakes slightly along with the hand, which blurs the image projected onto the camera and degrades the quality of the captured picture.
Summary of the invention
The aim of the present invention is to provide a terminal photographing method, apparatus, and terminal, so as to solve the technical problems that, when a user takes photographs with an existing terminal, the available photographing poses are limited and the shooting quality is poor.
To solve the above technical problems, an embodiment of the present invention provides the following technical solution:
A terminal photographing method, comprising:
obtaining a photographing trigger instruction;
obtaining, according to the photographing trigger instruction, person information within a shooting area of the terminal, the person information including a facial image and a gesture image of at least one user;
determining a target user from the at least one user according to the facial image of each user; and
controlling the terminal to take a photograph according to the gesture image of the target user.
To solve the above technical problems, an embodiment of the present invention also provides the following technical solution:
A terminal photographing apparatus, comprising:
a first obtaining module, configured to obtain a photographing trigger instruction;
a second obtaining module, configured to obtain, according to the photographing trigger instruction, person information within a shooting area of the terminal, the person information including a facial image and a gesture image of at least one user;
a first determining module, configured to determine a target user from the at least one user according to the facial image of each user; and
a control module, configured to control the terminal to take a photograph according to the gesture image of the target user.
To solve the above technical problems, an embodiment of the present invention also provides the following technical solution:
A terminal, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements any of the terminal photographing methods described above.
With the terminal photographing method, apparatus, and terminal of the present invention, a photographing trigger instruction is obtained; person information within the shooting area of the terminal, including the facial image and gesture image of at least one user, is obtained according to the trigger instruction; a target user is then determined from the at least one user according to the facial image of each user; and the terminal is controlled to take a photograph according to the gesture image of the target user. A photograph can thus be taken according to the gesture of a specific person, without the user touching the terminal by hand. The method is simple, and it also prevents other users from interfering with the shooting process, giving strong anti-interference capability.
Brief description of the drawings
With reference to the accompanying drawings and the detailed description of specific embodiments of the present invention below, the technical solution of the present invention and its other beneficial effects will become apparent.
Fig. 1 is a schematic flowchart of a terminal photographing method provided by an embodiment of the present invention;
Fig. 2 is a schematic flowchart of another terminal photographing method provided by an embodiment of the present invention;
Fig. 3 is a schematic diagram of a multi-person photographing scene provided by an embodiment of the present invention;
Fig. 4 is a schematic structural diagram of a terminal photographing apparatus provided by an embodiment of the present invention;
Fig. 5 is a schematic structural diagram of another terminal photographing apparatus provided by an embodiment of the present invention;
Fig. 6 is a schematic structural diagram of a terminal provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present invention. All other embodiments obtained by those skilled in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
An embodiment of the present invention provides a terminal photographing method, apparatus, and terminal, each of which is described in detail below.
The present embodiment is described from the perspective of a terminal photographing apparatus, which may be integrated in a terminal such as a mobile phone, a tablet computer, or a laptop.
A terminal photographing method comprises: obtaining a photographing trigger instruction; obtaining, according to the trigger instruction, person information within the shooting area of the terminal, the person information including the facial image and gesture image of at least one user; then determining a target user from the at least one user according to the facial image of each user; and controlling the terminal to take a photograph according to the gesture image of the target user.
As shown in Fig. 1, the detailed flow of the terminal photographing method may be as follows:
S101. Obtain a photographing trigger instruction.
In this embodiment, the photographing trigger instruction may be generated when the user clicks a button, for example a "multi-person photographing" button, or generated automatically when the terminal opens a camera application; its specific form may be set according to actual needs.
S102. Obtain, according to the photographing trigger instruction, person information within the shooting area of the terminal, the person information including the facial image and gesture image of at least one user.
In this embodiment, the person information may be information about multiple users, each user corresponding to a facial image and a gesture image. Specifically, background removal may first be applied to the image captured by the camera to obtain an image of the photographed subjects; the facial information of each user is then identified by face recognition technology (such as a regional feature analysis algorithm), and the gesture information of each user is identified by gesture recognition technology (such as a gesture recognition algorithm based on geometric features or computer vision).
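The extraction step above can be sketched in pure Python. The frame format, the `PersonInfo` structure, and the stubbed background-removal and recognition routines are illustrative assumptions standing in for real segmentation and recognition algorithms, not details taken from the patent:

```python
from dataclasses import dataclass

@dataclass
class PersonInfo:
    user_id: str
    face_image: object     # cropped face region (placeholder for pixel data)
    gesture_image: object  # cropped hand region (placeholder for pixel data)

def remove_background(frame):
    # Stub: a real implementation would segment foreground subjects
    # out of the camera image before recognition.
    return [region for region in frame if region.get("is_person")]

def extract_person_info(frame):
    # For each foreground subject, a real system would run face recognition
    # (e.g. regional feature analysis) and gesture recognition; here the
    # crops are taken directly from the stub data.
    return [
        PersonInfo(region["id"], region["face"], region["gesture"])
        for region in remove_background(frame)
    ]

# Toy "frame": two person regions and one background region.
frame = [
    {"is_person": True, "id": "A", "face": "face_A", "gesture": "scissors"},
    {"is_person": False},
    {"is_person": True, "id": "B", "face": "face_B", "gesture": "ok"},
]
people = extract_person_info(frame)
```

Each `PersonInfo` then carries exactly the pair of images the later steps operate on: the facial image for target selection, and the gesture image for the trigger check.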
S103. Determine a target user from the at least one user according to the facial image of each user.
For example, the above step S103 may specifically include:
matching the facial image of each user against a preset facial image; and
if the match succeeds, taking the user corresponding to the matched facial image as the target user.
In this embodiment, the preset facial image may be set according to actual needs; it may, for example, be the facial image of the terminal owner. During photographing, multiple photographed users may be present in the shooting area. To prevent photographed users other than the designated user from interfering with the photograph, the designated user can be found among the photographed users according to the preset facial image and taken as the target user. If the designated user is not present in the current shooting area, the terminal may return to the operation of obtaining the person information within the shooting area until the designated user is found.
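The matching-and-retry logic can be sketched as follows; the equality-based `match_face` stub and the `capture` callable are hypothetical stand-ins for an actual face matcher and camera feed:

```python
def match_face(face_image, preset_face):
    # Stub matcher: a real system would compare face features against the
    # preset facial image with a similarity threshold.
    return face_image == preset_face

def find_target_user(capture, preset_face, max_attempts=5):
    # Re-capture until a user whose face matches the preset facial image
    # appears in the shooting area (the "return and re-obtain person
    # information" loop, bounded here for safety).
    for _ in range(max_attempts):
        for person in capture():
            if match_face(person["face"], preset_face):
                return person
    return None

# Toy capture: users A and B are in the shooting area.
capture = lambda: [{"face": "face_A", "gesture": "ok"},
                   {"face": "face_B", "gesture": "scissors"}]
target = find_target_user(capture, preset_face="face_A")
```

With `preset_face="face_A"`, user A is selected; a preset face that matches no one (e.g. user D) yields `None`, modelling the "designated user absent" case.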
S104. Control the terminal to take a photograph according to the gesture image of the target user.
For example, the above step S104 may specifically include:
judging whether the gesture image of the target user satisfies a preset condition; and
if so, controlling the terminal to take the photograph.
In this embodiment, the preset condition may be set according to actual needs; it may, for example, be a pre-specified gesture, such as a scissors sign, an OK sign, or arms crossed over the chest. Specifically, the gesture image of the target user can be analyzed directly to determine whether the gesture in the image matches the specified gesture, and the photographing operation is executed when it does. The photograph can thus be taken without the user touching the terminal by hand, which avoids shaking the terminal, improves focusing stability, and keeps the method simple.
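As a minimal sketch, the preset-condition check might look like this; the gesture labels and the set of trigger gestures are assumptions, since a real system would first classify the gesture image into such a label:

```python
# Illustrative trigger gestures, following the examples in the text.
PRESET_GESTURES = {"scissors", "ok", "arms_crossed"}

def gesture_triggers_photo(gesture_label, preset=PRESET_GESTURES):
    # Step S104: the recognized gesture is checked against the preset
    # condition; the photograph is taken only when it matches.
    return gesture_label in preset
```

A non-trigger gesture such as a plain wave simply leaves the terminal waiting.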
Of course, to ensure that the user is ready when the photograph is taken, and to avoid as far as possible the problems of overly long focusing times and poor focusing stability caused by the user moving, real-time dynamic monitoring and tracking of the user may be performed before the gesture image is analyzed. That is, before the above step S104, the terminal photographing method may further include:
1-1. Determine the moving state of the target user according to the facial image of the target user.
In this embodiment, the moving state mainly refers to the motion state of the user relative to the terminal, and may include two states: stable and moving. The moving state can be determined from the facial image. For example, for each pixel in the facial image, the change in its Euclidean distance to a specified point of the shooting area (such as a vertex of the shooting frame) can be computed, and the average of these changes then compared against a threshold: an average greater than the specified threshold indicates the moving state, while an average less than or equal to the threshold indicates the stable state. Of course, since each facial image consists of a great many pixels, computing these Euclidean distances can be very expensive and slow processing down. For this reason, a salient facial region, such as the eyes, may be obtained and used for the change computation instead. That is, the above step 1-1 may specifically include:
1-1-1. Determine a target image region from the facial image of the target user.
In this embodiment, the target image region may be set according to actual needs; it may, for example, be the eyes, the mouth, or another prominent facial feature. Specifically, the target image region can be identified within the face from biometric information such as skin color and contour lines.
1-1-2. Obtain the moving distance of the target image region relative to the shooting area.
For example, the above step 1-1-2 may specifically include:
obtaining a first display position of the target image region within the shooting area at the current moment and a second display position of the target image region within the shooting area at the next moment;
calculating the difference between the first display position and the second display position; and
taking the difference as the moving distance of the target image region relative to the shooting area.
In this embodiment, the first and second display positions may each be the Euclidean distance of the target image region from a specified point of the shooting area (such as its left or right vertex). Since the target image region consists of numerous pixels, the first and second display positions are the averages of the Euclidean distances between those pixels and the specified point.
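The display-position and moving-distance computation can be written down directly; the pixel coordinates and anchor point below are illustrative:

```python
import math

def display_position(region_pixels, anchor=(0.0, 0.0)):
    # Average Euclidean distance from the region's pixels to a specified
    # point of the shooting area (e.g. its left vertex).
    return sum(math.dist(p, anchor) for p in region_pixels) / len(region_pixels)

def moving_distance(region_t0, region_t1, anchor=(0.0, 0.0)):
    # The moving distance is the difference between the display positions
    # at the current moment and at the next moment.
    return abs(display_position(region_t1, anchor)
               - display_position(region_t0, anchor))

# Toy eye region shifting away from the anchor between two frames:
# distance 5.0 at t0, distance 10.0 at t1, so W = 5.0.
w = moving_distance([(3.0, 4.0)], [(6.0, 8.0)])
```

Averaging over the region's pixels keeps the position a single scalar per frame, which is what makes the later threshold comparison cheap.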
1-1-3. Determine the moving state of the target user according to the moving distance.
For example, the above step 1-1-3 may specifically include:
judging whether the moving distance is no more than a preset threshold;
if so, determining that the moving state of the target user indicates stable; and
if not, determining that the moving state of the target user indicates moving.
In this embodiment, the preset threshold may be set according to actual needs; it may, for example, be 0.05. When the moving distance is small and does not exceed the preset threshold, the shaking amplitude of the face is very small, and the user can be considered to be in the stable state; when the moving distance is large and exceeds the preset threshold, the shaking amplitude of the face is large, and the user can be considered to still be in the moving state.
1-2. When the moving state indicates stable, trigger the operation of taking the photograph according to the gesture image of the target user; when the moving state indicates moving, return to the operation of determining the moving state of the target user according to the facial image of the target user.
In this embodiment, the preset threshold may be set according to actual needs, for example 0.05. When the moving distance does not exceed the preset threshold, the shaking amplitude of the face is very small and the photograph can be taken; when the moving distance exceeds the preset threshold, the shaking amplitude of the face is large, meaning the user is not yet ready, and the moving state must be detected again.
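Steps 1-1-3 and 1-2 together form a small re-detection loop. A sketch, assuming moving distances arrive as a sequence of samples and using the 0.05 threshold from the example above:

```python
def classify_state(distance, threshold=0.05):
    # Step 1-1-3: no more than the threshold means stable, else moving.
    return "stable" if distance <= threshold else "moving"

def await_stability(distances, threshold=0.05):
    # Step 1-2: keep re-detecting the moving state until it indicates
    # stable; only then may the gesture-based photographing be triggered.
    for d in distances:
        if classify_state(d, threshold) == "stable":
            return True
    return False  # the samples ran out while the user was still moving

# The user settles down over three successive measurements.
ready = await_stability([0.2, 0.12, 0.03])
```

A bounded sample sequence stands in for the unbounded "detect again" loop of the text, so the sketch always terminates.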
As can be seen from the above, in the terminal photographing method provided by this embodiment, a photographing trigger instruction is obtained; person information within the shooting area of the terminal, including the facial image and gesture image of at least one user, is obtained according to the trigger instruction; a target user is then determined from the at least one user according to the facial image of each user; and the terminal is controlled to take a photograph according to the gesture image of the target user. A photograph can thus be taken according to the gesture of a specific person, without the user touching the terminal by hand. The method is simple, prevents other users from interfering with the shooting process, and has strong anti-interference capability.
The method described in the above embodiment is described in further detail below by way of example.
In this embodiment, the description takes as an example the case where the terminal photographing apparatus is integrated in the terminal.
Referring to Fig. 2, a terminal photographing method may proceed in detail as follows:
S201. The terminal obtains a photographing trigger instruction.
For example, the photographing trigger instruction may be generated when the user clicks a button, for example a "multi-person photographing" button, or generated automatically when the terminal opens a camera application; its specific form may be set according to actual needs.
S202. The terminal obtains person information within the shooting area according to the photographing trigger instruction, the person information including the facial image and gesture image of at least one user.
For example, referring to Fig. 3, the person information may include the facial image and gesture image of each of users A, B, and C. Specifically, background removal may first be applied to the image captured by the camera to obtain the images of users A, B, and C; the facial information of each user is then identified by face recognition technology (such as a regional feature analysis algorithm), and the gesture information of each user is identified by gesture recognition technology (such as a gesture recognition algorithm based on geometric features or computer vision).
S203, terminal match the facial image of each user with default facial image, if successful match, execute
Following step S204 can be returned if it fails to match and be executed above-mentioned steps S202.
For example, the preset facial image may be set according to actual needs. If the preset facial image is the facial image of user A, the match succeeds; if it is the facial image of user D, the match fails.
S204. The terminal takes the user corresponding to the matched facial image as the target user, and determines a target image region from the facial image of the target user.
For example, if the preset facial image is the facial image of user A, user A can be determined as the target user, and a region of user A's face, such as the eye region, can be determined as the target image region. Specifically, the target image region can be identified within user A's face from biometric information such as skin color and contour lines.
S205. The terminal obtains a first display position of the target image region within the shooting area at the current moment and a second display position of the target image region within the shooting area at the next moment.
For example, the first and second display positions may be the Euclidean distances H1 and H2 of user A's eye region from the left vertex of the shooting area. Of course, since user A's eye region consists of numerous pixels, H1 and H2 are each the average of the Euclidean distances between the pixels of the eye region and the left vertex.
S206. The terminal calculates the difference between the first display position and the second display position, and takes the difference as the moving distance of the target image region relative to the shooting area.
For example, the difference W between H1 and H2 can be calculated and taken as the moving distance.
S207. The terminal judges whether the moving distance is no more than a preset threshold; if so, the following step S208 is executed; if not, the above step S205 is executed again.
For example, the preset threshold may be 0.05: when W ≤ 0.05 the judgment is yes; when W > 0.05 the judgment is no.
S208. The terminal judges whether the gesture image of the target user satisfies a preset condition; if so, the following step S209 is executed; if not, the above step S207 is executed again.
For example, the preset condition may be a specified gesture set in advance by the user, such as arms crossed over the chest. The gesture image of user A can be analyzed directly, and when the analysis result indicates that the gesture is arms crossed over the chest, the judgment result is "yes".
S209. The terminal executes the photographing operation.
For example, when the judgment result is "yes", the terminal may take the photograph immediately, or may wait a few seconds before taking it, so that the user has sufficient reaction time to strike the photographing pose; the choice may be made according to the actual situation.
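The whole S202 to S209 flow can be sketched as one loop, under illustrative assumptions: `capture()` returns per-user dicts holding the face crop, the recognized gesture label, and the moving distance W already computed for that user; none of these names come from the patent itself:

```python
def photograph_flow(capture, preset_face, trigger_gesture="arms_crossed",
                    threshold=0.05, max_frames=100):
    for _ in range(max_frames):
        people = capture()                                  # S202
        # S203/S204: pick the target user by matching the preset face.
        target = next((p for p in people
                       if p["face"] == preset_face), None)
        if target is None:
            continue                  # designated user absent: re-capture
        if target["distance"] > threshold:
            continue                  # S207: still moving, measure again
        if target["gesture"] == trigger_gesture:
            return "photo_taken"      # S208/S209: gesture matched
    return "timed_out"                # bounded for safety in this sketch

# User A is present, stable (W = 0.02), and making the trigger gesture.
result = photograph_flow(
    lambda: [{"face": "face_A", "gesture": "arms_crossed",
              "distance": 0.02}],
    preset_face="face_A",
)
```

The `max_frames` bound replaces the unbounded "return and retry" arrows of Fig. 2 so the sketch always terminates; a real terminal would loop until the user cancels.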
As can be seen from the above, in the terminal photographing method provided by this embodiment, the terminal obtains a photographing trigger instruction and, according to it, obtains person information within the shooting area, including the facial image and gesture image of at least one user. The facial image of each user is then matched against a preset facial image; if the match succeeds, the user corresponding to the matched facial image is taken as the target user, and a target image region is determined from the facial image of the target user. The terminal then obtains the first display position of the target image region within the shooting area at the current moment and its second display position at the next moment, calculates the difference between the two as the moving distance of the target image region relative to the shooting area, and judges whether the moving distance is no more than the preset threshold; if so, the photographing operation is executed. A photograph can thus be taken according to the gesture of a specific person, without the user touching the terminal by hand. The method is simple, the shooting quality is good, and interference from other users with the shooting process is avoided, giving strong anti-interference capability.
To facilitate better implementation of the terminal photographing method provided by the embodiments of the present invention, an embodiment of the present invention also provides an apparatus based on the above terminal photographing method. The terms used have the same meanings as in the above terminal photographing method, and for implementation details reference may be made to the description in the method embodiments.
Referring to Fig. 4, Fig. 4 is a schematic structural diagram of the terminal photographing apparatus provided by an embodiment of the present invention. The terminal photographing apparatus may include a first obtaining module 10, a second obtaining module 20, a first determining module 30, and a control module 40, in which:
(1) First obtaining module 10
The first obtaining module 10 is configured to obtain a photographing trigger instruction.
In this embodiment, the photographing trigger instruction may be generated when the user clicks a button, for example a "multi-person photographing" button, or generated automatically when the terminal opens a camera application; its specific form may be set according to actual needs.
(2) Second obtaining module 20
The second obtaining module 20 is configured to obtain, according to the photographing trigger instruction, person information within the shooting area of the terminal, the person information including the facial image and gesture image of at least one user.
In this embodiment, the person information may be information about multiple users, each user corresponding to a facial image and a gesture image. Specifically, the second obtaining module 20 may first apply background removal to the image captured by the camera to obtain an image of the photographed subjects, then identify the facial information of each user by face recognition technology (such as a regional feature analysis algorithm), and identify the gesture information of each user by gesture recognition technology (such as a gesture recognition algorithm based on geometric features or computer vision).
(3) First determining module 30
The first determining module 30 is configured to determine a target user from the at least one user according to the facial image of each user.
For example, the first determining module 30 may specifically be configured to:
match the facial image of each user against a preset facial image; and
if the match succeeds, take the user corresponding to the matched facial image as the target user.
In this embodiment, the preset facial image may be set according to actual needs; it may, for example, be the facial image of the terminal owner. During photographing, multiple photographed users may be present in the shooting area. To prevent photographed users other than the designated user from interfering with the photograph, the designated user can be found among the photographed users according to the preset facial image and taken as the target user. If the designated user is not present in the current shooting area, the terminal may return to the operation of obtaining the person information within the shooting area until the designated user is found.
(4) Control module 40
The control module 40 is configured to control the terminal to take a photograph according to the gesture image of the target user.
The control module is specifically configured to:
judge whether the gesture image of the target user satisfies a preset condition; and
if so, control the terminal to take the photograph.
In this embodiment, the preset condition may be set according to actual needs; it may, for example, be a pre-specified gesture, such as a scissors sign, an OK sign, or arms crossed over the chest. Specifically, the gesture image of the target user can be analyzed directly to determine whether the gesture in the image matches the specified gesture, and the photographing operation is executed when it does. The photograph can thus be taken without the user touching the terminal by hand, which avoids shaking the terminal, improves focusing stability, and keeps the method simple.
Of course, to ensure that the user is ready when the photograph is taken, and to avoid as far as possible the problems of overly long focusing times and poor focusing stability caused by the user moving, real-time dynamic monitoring and tracking of the user may be performed before the control module 40 analyzes the gesture image. That is, referring to Fig. 5, the terminal photographing apparatus may further include a second determining module 50 and a trigger module 60, in which:
The second determining module 50 is configured to determine the moving state of the target user according to the facial image of the target user, before the control module controls the terminal to take the photograph according to the gesture image of the target user.
In this embodiment, the moving state mainly refers to the motion state of the user relative to the terminal, and may include two states: stable and moving. The moving state can be determined from the facial image. For example, for each pixel in the facial image, the change in its Euclidean distance to a specified point of the shooting area (such as a vertex of the shooting frame) can be computed, and the average of these changes then compared against a threshold: an average greater than the specified threshold indicates the moving state, while an average less than or equal to the threshold indicates the stable state. Of course, since each facial image consists of a great many pixels, computing these Euclidean distances can be very expensive and slow processing down. For this reason, a salient facial region, such as the eyes, may be obtained and used for the change computation instead. That is, the second determining module 50 may specifically include a first determining submodule 51, an obtaining submodule 52, and a second determining submodule 53, in which:
The first determining submodule 51 is configured to determine a target image region from the facial image of the target user.
In this embodiment, the target image region may be set according to actual needs; it may, for example, be the eyes, the mouth, or another prominent facial feature. Specifically, the first determining submodule 51 can identify the target image region within the face from biometric information such as skin color and contour lines.
The obtaining submodule 52 is configured to obtain the moving distance of the target image region relative to the shooting area.
For example, the obtaining submodule 52 may specifically be configured to:
obtain a first display position of the target image region within the shooting area at the current moment and a second display position of the target image region within the shooting area at the next moment;
calculate the difference between the first display position and the second display position; and
take the difference as the moving distance of the target image region relative to the shooting area.
In this embodiment, the first and second display positions may each be the Euclidean distance of the target image region from a specified point of the shooting area (such as its left or right vertex). Since the target image region consists of numerous pixels, the first and second display positions are the averages of the Euclidean distances between those pixels and the specified point.
Second determines submodule 53, for determining the moving condition of the target user according to the moving distance.
For example, the second determining submodule 53 specifically can be used for:
Judge whether the moving distance is no more than preset threshold;
If so, determining that the moving condition of the target user indicates to stablize;
If not, it is determined that the moving condition of the target user indicates movement.
In the present embodiment, the preset threshold may be set according to actual needs, for example 0.05. When the moving distance is small and does not exceed the preset threshold, the shaking amplitude of the face is very small, and the user can be considered to be in a stable state; when the moving distance is larger and exceeds the preset threshold, the shaking amplitude of the face is larger, and the user can be considered to still be in a moving state.
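The threshold test above amounts to a single comparison. In this sketch the default threshold of 0.05 follows the example in the text, under the assumption that the moving distance is expressed in normalized shooting-area coordinates:

```python
STABLE, MOVING = "stable", "moving"

def moving_condition(moving_distance, threshold=0.05):
    """Classify the target user's state from the moving distance:
    at or below the preset threshold -> stable, above it -> moving."""
    return STABLE if moving_distance <= threshold else MOVING
```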
The trigger module 60 is configured to: when the moving condition indicates stability, trigger the control module 40 to execute the operation of taking a picture according to the gesture image of the target user; and when the moving condition indicates movement, trigger the second determining module 50 to execute the operation of determining the moving condition of the target user according to the facial image of the target user.
In the present embodiment, the preset threshold may be set according to actual needs, for example 0.05. When the moving distance is small and does not exceed the preset threshold, the shaking amplitude of the face is very small, and the picture can be taken at this point; when the moving distance is larger and exceeds the preset threshold, the shaking amplitude of the face is larger, the user has not finished preparing to be photographed, and the moving condition needs to be detected again.
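The stabilize-then-shoot behavior of the trigger module described above can be sketched as a simple loop. The helper callbacks `get_moving_distance` and `take_picture` are hypothetical stand-ins for the modules of the embodiment, and the bounded retry count is an added assumption for illustration:

```python
def wait_until_stable_then_shoot(get_moving_distance, take_picture,
                                 threshold=0.05, max_checks=100):
    """Re-detect the moving condition until the face is stable,
    then take the picture; give up after max_checks detections."""
    for _ in range(max_checks):
        if get_moving_distance() <= threshold:
            take_picture()   # moving condition indicates stability
            return True
    return False             # never stabilized within the budget
```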
In specific implementation, each of the above units may be implemented as an independent entity, or the units may be combined arbitrarily and implemented as one or several entities. For the specific implementation of each of the above units, reference may be made to the foregoing method embodiments, and details are not repeated here.
As can be seen from the above, in the terminal photographing apparatus provided in this embodiment, the first obtaining module 10 obtains a photographing trigger instruction; the second obtaining module 20 obtains person information in the shooting area of the terminal according to the photographing trigger instruction, the person information including a facial image and a gesture image of at least one user; then the first determining module 30 determines a target user from the at least one user according to the facial image of each user, and the control module 40 controls the terminal to take a picture according to the gesture image of the target user. Thus, photographing can be realized by the gesture action of a particular person without the user manually touching the terminal; the method is simple, and it avoids interference from other users in the shooting process, giving strong anti-interference capability.
The present invention also provides a terminal, such as a tablet computer or a mobile phone. Referring to Fig. 6, Fig. 6 is a schematic structural diagram of the terminal provided by an embodiment of the present invention. The terminal 700 may include a radio frequency (RF) circuit 701, a memory 702 including one or more computer-readable storage media, an input unit 703, a display unit 704, a sensor 705, an audio circuit 706, a wireless fidelity (WiFi) module 707, a processor 708 including one or more processing cores, a power supply 709, and other components. Those skilled in the art will understand that the terminal structure shown in Fig. 6 does not constitute a limitation on the terminal, which may include more or fewer components than illustrated, combine certain components, or adopt a different component arrangement.
The radio frequency circuit 701 can be used to receive and send signals in the course of sending and receiving messages or during a call. In particular, after receiving downlink information from a base station, it hands the information over to one or more processors 708 for processing; in addition, it sends uplink data to the base station. In general, the radio frequency circuit 701 includes, but is not limited to, an antenna, at least one amplifier, a tuner, one or more oscillators, a subscriber identity module (SIM) card, a transceiver, a coupler, a low noise amplifier (LNA), a duplexer, and so on. In addition, the radio frequency circuit 701 can also communicate with networks and other devices through wireless communication. The wireless communication can use any communication standard or protocol, including but not limited to the Global System for Mobile Communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, the Short Messaging Service (SMS), and so on.
The memory 702 can be used to store application programs and data. The application programs stored in the memory 702 contain executable code and can form various functional modules. The processor 708 runs the application programs stored in the memory 702, thereby executing various functional applications and data processing. The memory 702 may mainly include a program storage area and a data storage area, wherein the program storage area can store the operating system, the application programs required by at least one function (such as a sound playing function, an image playing function, etc.), and the like, and the data storage area can store data created according to the use of the terminal (such as audio data, a phone book, etc.). In addition, the memory 702 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another volatile solid-state storage device. Correspondingly, the memory 702 may also include a memory controller to provide the processor 708 and the input unit 703 with access to the memory 702.
The input unit 703 can be used to receive input numbers, character information, or user characteristic information (such as fingerprints), and to generate keyboard, mouse, joystick, optical, or trackball signal inputs related to user settings and function control. Specifically, in one embodiment, the input unit 703 may include a touch-sensitive surface and other input devices. The touch-sensitive surface, also called a touch display screen or a touchpad, collects the user's touch operations on or near it (such as operations performed by the user with a finger, a stylus, or any other suitable object or accessory on or near the touch-sensitive surface) and drives the corresponding connected device according to a preset program. Optionally, the touch-sensitive surface may include two parts: a touch detection device and a touch controller. The touch detection device detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 708, and can receive and execute commands sent by the processor 708. Furthermore, the touch-sensitive surface can be realized in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch-sensitive surface, the input unit 703 may also include other input devices. Specifically, the other input devices may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys and a switch key), a fingerprint recognition module, a trackball, a mouse, a joystick, and the like.
The display unit 704 can be used to display information input by the user or information provided to the user, as well as the various graphical user interfaces of the terminal; these graphical user interfaces can be composed of graphics, text, icons, video, and any combination thereof. The display unit 704 may include a display panel. Optionally, the display panel may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED) display, or the like. Further, the touch-sensitive surface can cover the display panel; after the touch-sensitive surface detects a touch operation on or near it, it transmits the operation to the processor 708 to determine the type of the touch event, and the processor 708 then provides a corresponding visual output on the display panel according to the type of the touch event. Although in Fig. 6 the touch-sensitive surface and the display panel realize the input and output functions as two independent components, in some embodiments the touch-sensitive surface and the display panel may be integrated to realize the input and output functions.
The terminal may also include at least one sensor 705, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel according to the brightness of the ambient light, and the proximity sensor can turn off the display panel and/or the backlight when the terminal is moved to the ear. As a kind of motion sensor, a gravity acceleration sensor can detect the magnitude of acceleration in all directions (generally on three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications that recognize the posture of the mobile phone (such as landscape/portrait switching, related games, and magnetometer posture calibration), vibration-recognition-related functions (such as a pedometer and tapping), and the like. Other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor can also be configured in the terminal, and details are not described here.
The audio circuit 706 can provide an audio interface between the user and the terminal through a loudspeaker and a microphone. The audio circuit 706 can convert received audio data into an electrical signal and transmit it to the loudspeaker, which converts it into a sound signal for output; on the other hand, the microphone converts a collected sound signal into an electrical signal, which is received by the audio circuit 706 and converted into audio data. After the audio data is output to the processor 708 for processing, it is sent, for example, to another terminal via the radio frequency circuit 701, or the audio data is output to the memory 702 for further processing. The audio circuit 706 may also include an earphone jack to provide communication between a peripheral earphone and the terminal.
Wireless fidelity (WiFi) is a short-range wireless transmission technology. Through the wireless fidelity module 707, the terminal can help the user send and receive e-mail, browse web pages, access streaming media, and so on; it provides the user with wireless broadband Internet access. Although Fig. 6 shows the wireless fidelity module 707, it can be understood that the module is not a necessary part of the terminal and can be omitted as needed within a scope that does not change the essence of the invention.
The processor 708 is the control center of the terminal. It connects the various parts of the entire terminal using various interfaces and lines, and executes the various functions of the terminal and processes data by running or executing the application programs stored in the memory 702 and calling the data stored in the memory 702, thereby monitoring the terminal as a whole. Optionally, the processor 708 may include one or more processing cores; preferably, the processor 708 may integrate an application processor and a modem processor, wherein the application processor mainly handles the operating system, the user interface, application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the above modem processor may also not be integrated into the processor 708.
The terminal also includes a power supply 709 (such as a battery) that supplies power to the various components. Preferably, the power supply can be logically connected to the processor 708 through a power management system, so that functions such as charging, discharging, and power consumption management are realized through the power management system. The power supply 709 may also include one or more direct-current or alternating-current power sources, a recharging system, a power failure detection circuit, a power converter or inverter, a power status indicator, and any other such components.
Although not shown in Fig. 6, the terminal may also include a camera, a Bluetooth module, and the like, and details are not described here.
Specifically, in the present embodiment, the display unit of the terminal may be a screen provided with a transmitting electrode and a receiving electrode (not shown in the figure). The processor 708 in the terminal loads, according to the following instructions, the executable code corresponding to the processes of one or more application programs into the memory 702, and runs the application programs stored in the memory 702, thereby realizing various functions:
obtaining a photographing trigger instruction;
obtaining person information in a shooting area of the terminal according to the photographing trigger instruction, the person information including a facial image and a gesture image of at least one user;
determining a target user from the at least one user according to the facial image of each user; and
controlling the terminal to take a picture according to the gesture image of the target user.
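The four functions listed above can be sketched together as one pipeline. This is an illustrative Python sketch; the face-matching and gesture-checking callbacks are hypothetical placeholders for the embodiments described earlier, and the dictionary shape of `users` is an assumption:

```python
def photograph(users, matches_preset_face, gesture_meets_condition, take_picture):
    """End-to-end flow: pick the target user by facial image, then shoot
    if that user's gesture image satisfies the preset condition.

    `users` is a list of dicts with 'face' and 'gesture' entries,
    standing in for the person information in the shooting area.
    """
    # Determine the target user from the facial image of each user.
    target = next((u for u in users if matches_preset_face(u["face"])), None)
    if target is None:
        return False  # no facial image matched the preset facial image
    # Control the terminal to take a picture according to the gesture image.
    if not gesture_meets_condition(target["gesture"]):
        return False  # gesture does not satisfy the preset condition
    take_picture()
    return True
```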
For details of the implementation of each of the above operations, reference may be made to the foregoing embodiments, and they are not repeated here.
The terminal can achieve the beneficial effects achieved by any terminal photographing apparatus provided by the embodiments of the present invention; for details, see the foregoing embodiments, which are not repeated here.
Those of ordinary skill in the art will appreciate that all or part of the steps in the various methods of the above embodiments can be completed by a program instructing the relevant hardware. The program can be stored in a computer-readable storage medium, and the storage medium may include a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disc, or the like.
The terminal photographing method, apparatus, and terminal provided by the embodiments of the present invention have been described in detail above. Specific examples are used herein to illustrate the principles and implementations of the present invention, and the description of the above embodiments is only intended to help in understanding the method of the present invention and its core concept. Meanwhile, for those skilled in the art, there will be changes in the specific implementation and application scope according to the idea of the present invention. In summary, the content of this specification should not be construed as limiting the present invention.
Claims (15)
1. A terminal photographing method, characterized by comprising:
obtaining a photographing trigger instruction;
obtaining person information in a shooting area of a terminal according to the photographing trigger instruction, the person information including a facial image and a gesture image of at least one user;
determining a target user from the at least one user according to the facial image of each user;
determining a moving condition of the target user according to the facial image of the target user, the moving condition referring to a motion state of the target user relative to the terminal; and
triggering, according to the moving condition, execution of an operation of taking a picture according to the gesture image of the target user.
2. The terminal photographing method according to claim 1, characterized in that the triggering, according to the moving condition, execution of the operation of taking a picture according to the gesture image of the target user comprises:
when the moving condition indicates stability, triggering execution of the operation of taking a picture according to the gesture image of the target user; and
when the moving condition indicates movement, returning to execute the operation of determining the moving condition of the target user according to the facial image of the target user.
3. The terminal photographing method according to claim 2, characterized in that the determining a moving condition of the target user according to the facial image of the target user comprises:
determining a target image region from the facial image of the target user;
obtaining a moving distance of the target image region relative to the shooting area; and
determining the moving condition of the target user according to the moving distance.
4. The terminal photographing method according to claim 3, characterized in that the obtaining a moving distance of the target image region relative to the shooting area comprises:
obtaining a first display position of the target image region in the shooting area at a current moment and a second display position of the target image region in the shooting area at a next moment;
calculating a difference between the first display position and the second display position; and
determining the difference as the moving distance of the target image region relative to the shooting area.
5. The terminal photographing method according to claim 3, characterized in that the determining the moving condition of the target user according to the moving distance comprises:
judging whether the moving distance does not exceed a preset threshold;
if so, determining that the moving condition of the target user indicates stability; and
if not, determining that the moving condition of the target user indicates movement.
6. The terminal photographing method according to any one of claims 1-5, characterized in that the determining a target user from the at least one user according to the facial image of each user comprises:
matching the facial image of each user against a preset facial image; and
if the matching is successful, taking the user corresponding to the successfully matched facial image as the target user.
7. The terminal photographing method according to any one of claims 1-5, characterized in that the controlling the terminal to take a picture according to the gesture image of the target user comprises:
judging whether the gesture image of the target user satisfies a preset condition; and
if satisfied, controlling the terminal to take a picture.
8. A terminal photographing apparatus, characterized by comprising:
a first obtaining module, configured to obtain a photographing trigger instruction;
a second obtaining module, configured to obtain person information in a shooting area of a terminal according to the photographing trigger instruction, the person information including a facial image and a gesture image of at least one user;
a first determining module, configured to determine a target user from the at least one user according to the facial image of each user;
a second determining module, configured to determine a moving condition of the target user according to the facial image of the target user, the moving condition referring to a motion state of the target user relative to the terminal; and
a trigger module, configured to trigger, according to the moving condition, a control module to execute an operation of taking a picture according to the gesture image of the target user.
9. The terminal photographing apparatus according to claim 8, characterized in that:
the trigger module is specifically configured to: when the moving condition indicates stability, trigger the control module to execute the operation of taking a picture according to the gesture image of the target user; and when the moving condition indicates movement, trigger the second determining module to execute the operation of determining the moving condition of the target user according to the facial image of the target user.
10. The terminal photographing apparatus according to claim 9, characterized in that the second determining module specifically includes:
a first determining submodule, configured to determine a target image region from the facial image of the target user;
an acquisition submodule, configured to obtain a moving distance of the target image region relative to the shooting area; and
a second determining submodule, configured to determine the moving condition of the target user according to the moving distance.
11. The terminal photographing apparatus according to claim 10, characterized in that the acquisition submodule is specifically configured to:
obtain a first display position of the target image region in the shooting area at a current moment and a second display position of the target image region in the shooting area at a next moment;
calculate a difference between the first display position and the second display position; and
determine the difference as the moving distance of the target image region relative to the shooting area.
12. The terminal photographing apparatus according to claim 10, characterized in that the second determining submodule is specifically configured to:
judge whether the moving distance does not exceed a preset threshold;
if so, determine that the moving condition of the target user indicates stability; and
if not, determine that the moving condition of the target user indicates movement.
13. The terminal photographing apparatus according to any one of claims 8-12, characterized in that the first determining module is specifically configured to:
match the facial image of each user against a preset facial image; and
if the matching is successful, take the user corresponding to the successfully matched facial image as the target user.
14. The terminal photographing apparatus according to any one of claims 8-12, characterized in that the control module is specifically configured to:
judge whether the gesture image of the target user satisfies a preset condition; and
if satisfied, control the terminal to take a picture.
15. A terminal, characterized by comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to claim 1 when executing the computer program.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710253812.0A CN106959761B (en) | 2017-04-18 | 2017-04-18 | A kind of terminal photographic method, device and terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201710253812.0A CN106959761B (en) | 2017-04-18 | 2017-04-18 | A kind of terminal photographic method, device and terminal |
Publications (2)
Publication Number | Publication Date |
---|---|
CN106959761A CN106959761A (en) | 2017-07-18 |
CN106959761B true CN106959761B (en) | 2019-06-14 |
Family
ID=59484956
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201710253812.0A Active CN106959761B (en) | 2017-04-18 | 2017-04-18 | A kind of terminal photographic method, device and terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106959761B (en) |
Families Citing this family (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107705278B (en) * | 2017-09-11 | 2021-03-02 | Oppo广东移动通信有限公司 | Dynamic effect adding method and terminal equipment |
CN109842764A (en) * | 2017-11-27 | 2019-06-04 | 韩劝劝 | Image shot by cell phone compound platform |
CN108174095A (en) * | 2017-12-28 | 2018-06-15 | 努比亚技术有限公司 | Photographic method, mobile terminal and computer-readable medium based on smiling face's identification |
CN108495031A (en) * | 2018-03-22 | 2018-09-04 | 广东小天才科技有限公司 | A kind of photographic method and wearable device based on wearable device |
CN113518969A (en) * | 2019-05-16 | 2021-10-19 | 深圳市柔宇科技股份有限公司 | Response method for touch area of selfie stick and mobile terminal |
CN112118380B (en) | 2019-06-19 | 2022-10-25 | 北京小米移动软件有限公司 | Camera control method, device, equipment and storage medium |
CN112270302A (en) * | 2020-11-17 | 2021-01-26 | 支付宝(杭州)信息技术有限公司 | Limb control method and device and electronic equipment |
CN112492211A (en) * | 2020-12-01 | 2021-03-12 | 咪咕文化科技有限公司 | Shooting method, electronic equipment and storage medium |
CN113342170A (en) * | 2021-06-11 | 2021-09-03 | 北京字节跳动网络技术有限公司 | Gesture control method, device, terminal and storage medium |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102420942A (en) * | 2011-11-28 | 2012-04-18 | 康佳集团股份有限公司 | Photograph device and photograph control method based on same |
KR20130051319A (en) * | 2011-11-09 | 2013-05-20 | 포항공과대학교 산학협력단 | Apparatus for signal input and method thereof |
CN105824406A (en) * | 2015-11-30 | 2016-08-03 | 维沃移动通信有限公司 | Photographing method and terminal |
CN105915782A (en) * | 2016-03-29 | 2016-08-31 | 维沃移动通信有限公司 | Picture obtaining method based on face identification, and mobile terminal |
CN106303189A (en) * | 2015-05-22 | 2017-01-04 | 上海中兴思秸通讯有限公司 | The method and system of shooting |
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR20130051319A (en) * | 2011-11-09 | 2013-05-20 | 포항공과대학교 산학협력단 | Apparatus for signal input and method thereof |
CN102420942A (en) * | 2011-11-28 | 2012-04-18 | 康佳集团股份有限公司 | Photograph device and photograph control method based on same |
CN106303189A (en) * | 2015-05-22 | 2017-01-04 | 上海中兴思秸通讯有限公司 | The method and system of shooting |
CN105824406A (en) * | 2015-11-30 | 2016-08-03 | 维沃移动通信有限公司 | Photographing method and terminal |
CN105915782A (en) * | 2016-03-29 | 2016-08-31 | 维沃移动通信有限公司 | Picture obtaining method based on face identification, and mobile terminal |
Also Published As
Publication number | Publication date |
---|---|
CN106959761A (en) | 2017-07-18 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106959761B (en) | A kind of terminal photographic method, device and terminal | |
CN106778585B (en) | A kind of face key point-tracking method and device | |
CN106357897B (en) | The acquisition methods and device of drop information | |
CN105487649B (en) | A kind of reminding method and mobile terminal | |
CN106445339B (en) | A kind of method and apparatus that double screen terminal shows stereo-picture | |
CN104967790B (en) | Method, photo taking, device and mobile terminal | |
CN110248254A (en) | Display control method and Related product | |
CN106371086B (en) | A kind of method and apparatus of ranging | |
CN107122054A (en) | A kind of detection method and device of face deflection angle and luffing angle | |
CN108989672A (en) | A kind of image pickup method and mobile terminal | |
CN110213440A (en) | A kind of images share method and terminal | |
CN108920059A (en) | Message treatment method and mobile terminal | |
CN108958587B (en) | Split screen processing method and device, storage medium and electronic equipment | |
CN107786811B (en) | A kind of photographic method and mobile terminal | |
CN103714161A (en) | Image thumbnail generation method and device and terminal | |
CN110300267A (en) | Photographic method and terminal device | |
CN110209245A (en) | Face identification method and Related product | |
CN109407948A (en) | A kind of interface display method and mobile terminal | |
CN109493821A (en) | Screen brightness regulation method, device and storage medium | |
CN105635553B (en) | Image shooting method and device | |
CN108664288A (en) | A kind of image interception method and mobile terminal | |
CN106131402B (en) | A kind of self-shooting bar, photographic method and self-heterodyne system | |
CN106210514B (en) | It takes pictures the method, apparatus and smart machine of focusing | |
CN107085718B (en) | Method for detecting human face and device, computer equipment, computer readable storage medium | |
CN109656431A (en) | Information display method, device and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
CB02 | Change of applicant information |
Address after: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant after: OPPO Guangdong Mobile Communications Co., Ltd. Address before: 523860 No. 18, Wu Sha Beach Road, Changan Town, Dongguan, Guangdong Applicant before: Guangdong OPPO Mobile Communications Co., Ltd. |
|
GR01 | Patent grant | ||