CN106503658A - Automatic photographing method and mobile terminal - Google Patents
Automatic photographing method and mobile terminal
- Publication number
- CN106503658A (application CN201610932153.9A / CN201610932153A)
- Authority
- CN
- China
- Prior art keywords
- image
- captured
- expression
- classification
- facial
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/161—Detection; Localisation; Normalisation
- G06V40/166—Detection; Localisation; Normalisation using acquisition arrangements
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/172—Classification, e.g. identification
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Human Computer Interaction (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Data Mining & Analysis (AREA)
- Bioinformatics & Cheminformatics (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Studio Devices (AREA)
Abstract
An embodiment of the invention discloses an automatic photographing method. The automatic photographing method includes: collecting an image to be captured through a camera; if the image to be captured contains a face, obtaining the facial image from the image to be captured; recognizing the facial image with a classifier to obtain an expression category; judging whether the expression category matches a preset expression category; and if so, shooting the image to be captured. The invention also discloses a corresponding mobile terminal. The automatic photographing method disclosed in the embodiment of the invention realizes automatic photographing based on expression recognition, which on the one hand makes the self-portrait operation more convenient and on the other hand better guarantees the photographing effect, yielding a better user experience.
Description
Technical field
The present invention relates to the field of mobile communication technology, and more particularly to an automatic photographing method and a mobile terminal.
Background technology
Users rely more and more on portable mobile terminals, especially mobile phones, and taking photos with a mobile terminal has become part of users' daily habits. With the popularization of front and rear cameras on mobile terminals and users' general fondness for self-portraits, intelligent self-portrait features have become one of the criteria by which users evaluate the ease of use of a mobile terminal. At present, automatic focusing and recognition of the facial-image region during photographing are mature technologies, and a user can trigger a photo through real-time manual operations such as the terminal's virtual keys, physical buttons, voice control, or a selfie stick. A self-portrait can also be completed with a delayed-capture (self-timer) technique.
However, these manually operated shooting techniques have various problems. As mobile terminal screens grow larger, taking photos through the terminal's virtual keys or physical buttons becomes inconvenient. With voice control, triggering the voice-control device inevitably changes the user's expression, so the expression the user intends cannot be captured well. A selfie stick, in turn, is inconvenient to carry. In addition, the so-called delayed-capture technique means that, after the user sets a photo opportunity, the mobile terminal shoots automatically after a given time. Its problems are that, on the one hand, the user has to wait a certain time and, on the other hand, the user finds it difficult to pose accurately and control the expression at the capture moment, which affects both the self-portrait result and the user experience.
Summary of the invention
Embodiments of the present invention provide an automatic photographing method and a mobile terminal, to solve the prior-art problem that it is difficult to perform a self-portrait operation conveniently while guaranteeing the photographing effect.
In one aspect, an embodiment of the present invention provides an automatic photographing method applied to a mobile terminal, the method including:
collecting an image to be captured through a camera;
if the image to be captured contains a face, obtaining the facial image from the image to be captured;
recognizing the facial image with a classifier to obtain an expression category;
judging whether the expression category matches a preset expression category;
and if so, shooting the image to be captured.
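The claimed flow can be sketched as a single pass of a capture loop. The helper names (`collect_image`, `detect_face`, `classify_expression`, `shoot`) are illustrative stand-ins, not interfaces defined by the patent; they are injected as callables so the sketch stays framework-neutral:

```python
def auto_photograph(collect_image, detect_face, classify_expression,
                    shoot, preset_categories):
    """One pass of the claimed flow: collect, detect, classify, match, shoot.

    collect_image() returns a frame; detect_face(frame) returns a face crop
    or None; classify_expression(face) returns an expression-category string;
    shoot(frame) performs the shot; preset_categories is a set of categories.
    """
    frame = collect_image()               # collect an image to be captured
    face = detect_face(frame)             # does the image contain a face?
    if face is None:
        return None
    category = classify_expression(face)  # classifier -> expression category
    if category in preset_categories:     # match against preset categories
        return shoot(frame)               # trigger the automatic shot
    return None
```

In practice the callables would wrap a camera preview stream, a face detector, and an expression classifier; the pass would run once per preview frame until a shot is triggered.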
In another aspect, an embodiment of the present invention further provides a mobile terminal, including:
a collection module, for collecting an image to be captured through a camera;
an acquisition module, for obtaining the facial image from the image to be captured if the image to be captured contains a face;
a recognition module, for recognizing the facial image with a classifier to obtain an expression category;
a judgment module, for judging whether the expression category matches a preset expression category;
a shooting module, for shooting the image to be captured.
In the automatic photographing method provided by the embodiment of the present invention, an image to be captured is collected through a camera; if the image contains a face, the facial image is obtained; the facial image is recognized with a classifier to obtain an expression category; whether the expression category matches a preset expression category is judged; and if so, the image is shot. Automatic photographing based on expression recognition is thus realized, which on the one hand makes the self-portrait operation more convenient and on the other hand better guarantees the photographing effect, yielding a better user experience.
Description of the drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the accompanying drawings needed in the description of the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flowchart of the first embodiment of the automatic photographing method of the present invention;
Fig. 2 is a flowchart of the second embodiment of the automatic photographing method of the present invention;
Fig. 3 is a structural block diagram of the first embodiment of the mobile terminal of the present invention;
Fig. 4 is a structural block diagram of the second embodiment of the mobile terminal of the present invention;
Fig. 5 is a structural block diagram of the third embodiment of the mobile terminal of the present invention.
Specific embodiment
The technical solutions in the embodiments of the present invention are described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on these embodiments without creative effort fall within the protection scope of the present invention.
First embodiment
As shown in Fig. 1, which is a flowchart of the first embodiment of the automatic photographing method of the present invention, the automatic photographing method includes:
Step 101: collect an image to be captured through a camera.
In the embodiment of the present invention, an automatic-photographing switch may also be set on the terminal, so that the user can turn the automatic photographing function on or off as needed. Specifically, the switch may be turned on before the image to be captured is collected through the camera, to start the automatic photographing function.
Step 102: if the image to be captured contains a face, obtain the facial image from the image to be captured.
In the embodiment of the present invention, face recognition technology may be used to judge whether a face exists in the image to be captured; if so, the facial image in the image is obtained.
In this step, when the image to be captured contains at least two faces, the method further includes:
obtaining the facial image of the owner in the image to be captured.
Specifically, facial-feature recognition technology may be used to discern whether a facial image belongs to the owner. A setting may be made in the classifier to recognize only the expression of one user (here the classifier may be trained by the user, combined with face recognition technology), or the most prominent face (for example the face at the center of the picture) may be chosen and recognized, so that the expression of the owner or of the most prominent face serves as the basis for deciding whether to photograph automatically. Of course, this function may be opened to the user for configuration, and is off by default.
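The "most prominent face" heuristic mentioned in this step can be sketched as follows; the scoring rule (largest box, ties broken by closeness to the picture center) is one illustrative interpretation, not a rule fixed by the patent:

```python
def most_prominent_face(faces, frame_w, frame_h):
    """Pick the most prominent face among several detections.

    faces: list of (x, y, w, h) bounding boxes. The largest box wins;
    ties are broken in favour of the box nearest the picture centre
    (the patent's example of 'the face at the centre of the picture').
    """
    cx, cy = frame_w / 2, frame_h / 2

    def score(box):
        x, y, w, h = box
        centre_dist = ((x + w / 2 - cx) ** 2 + (y + h / 2 - cy) ** 2) ** 0.5
        return (w * h, -centre_dist)   # bigger area first, then more central

    return max(faces, key=score)
```

The selected box would then be cropped from the frame and passed on to the expression classifier in place of the other faces.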
Step 103: recognize the facial image with the classifier to obtain an expression category.
In the embodiment of the present invention, after the facial image is obtained, facial expression recognition technology is used to identify the face and obtain its expression category.
In this step, when the image to be captured contains at least two faces, the method further includes:
recognizing the owner's facial image with the classifier to obtain the expression category corresponding to the owner's facial image.
Step 104: judge whether the expression category matches a preset expression category.
In the embodiment of the present invention, the classifier is used to judge whether the expression category matches a preset expression category. The classifier may be preset, and the preset expression category may also be preset.
Step 105: if so, shoot the image to be captured.
In the embodiment of the present invention, when the facial expression matches the preset expression category, the automatic photographing function is triggered and the current image to be captured is shot; one image may be shot, or several.
In the automatic photographing method provided by the embodiment of the present invention, an image to be captured is collected through a camera; if the image contains a face, the facial image is obtained; the facial image is recognized with a classifier to obtain an expression category; whether the expression category matches a preset expression category is judged; and if so, the image is shot. Automatic photographing based on expression recognition is thus realized, which on the one hand makes the self-portrait operation more convenient and on the other hand better guarantees the photographing effect, yielding a better user experience.
Second embodiment
As shown in Fig. 2 being the flow chart of the second embodiment of automatic photographing method of the present invention.The automatic photographing method bag
Include:
Step 201: store expression images to the classifier.
In the embodiment of the present invention, the classifier may be trained by the user; the trained classifier can be used to recognize the facial expressions of many people, or only the expressions of the user. Specifically, expression images may be collected through the camera, or expression images may be recognized in existing pictures, and the expression images are then stored to the classifier.
Step 202: set the preset expression category based on the expression images, the preset expression category including one or more of happy, sad, afraid, surprised, angry, disgusted, disappointed, and neutral.
In the embodiment of the present invention, the preset expression categories to be matched are set according to the expression images; during shooting, a facial expression that meets one of these preset expression categories triggers the automatic photographing function.
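The patent does not specify the form of the user-trainable classifier; a minimal nearest-centroid sketch conveys the idea of "storing" labelled expression images and later classifying a new face. The feature extraction step (turning a face crop into a numeric vector) is assumed and left outside the sketch:

```python
class ExpressionClassifier:
    """Minimal nearest-centroid classifier: the user 'trains' it by storing
    labelled expression feature vectors, as in steps 201-202. A stand-in
    for whatever classifier the terminal actually uses."""

    def __init__(self):
        self._sums = {}    # label -> per-dimension feature sums
        self._counts = {}  # label -> number of stored samples

    def store(self, features, label):
        """Step 201: store one labelled expression image (as features)."""
        sums = self._sums.setdefault(label, [0.0] * len(features))
        for i, v in enumerate(features):
            sums[i] += v
        self._counts[label] = self._counts.get(label, 0) + 1

    def classify(self, features):
        """Return the stored label whose centroid is nearest."""
        def dist(label):
            n = self._counts[label]
            return sum((features[i] - s / n) ** 2
                       for i, s in enumerate(self._sums[label]))
        return min(self._sums, key=dist)
```

A real implementation would use a trained facial-expression model; the point here is only the store-then-classify shape of the user-training flow.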
Step 203: collect an image to be captured through a camera.
Step 204: if the image to be captured contains a face, obtain the facial image from the image to be captured.
Steps 203 to 204 are identical to the corresponding steps of the first embodiment of the automatic photographing method of the present invention and are not repeated here.
In the embodiment of the present invention, when the image to be captured contains at least two faces, the method includes:
Step 205: recognize the images of the at least two faces with the classifier, and obtain the expression category element corresponding to each face in the image to be captured.
In the embodiment of the present invention, features of each facial expression are extracted and compared against the classifier to complete recognition of the facial expressions; according to the recognition results of the classifier, an expression category element is generated for each facial expression, such as happy, sad, afraid, surprised, angry, disgusted, disappointed, or neutral.
Step 206: count the expression category elements according to a query-by-committee algorithm.
In the embodiment of the present invention, the number of votes for each expression category element is counted according to the query-by-committee algorithm.
Step 207: determine the expression category element with the most votes as the expression category.
In the embodiment of the present invention, since expressions are not easily coordinated in a scene with many people, performing automatic photographing based only on one person's expression result may be chaotic. Therefore, the expression category element with the most votes is determined as the expression category and serves as the basis for automatic photographing.
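The counting described in steps 206-207 reduces to a majority vote over the per-face expression category elements; the sketch below implements that vote directly (the patent's "query-by-committee" label is kept in spirit, but nothing beyond simple vote counting is assumed):

```python
from collections import Counter

def group_expression(elements):
    """Steps 206-207: count per-face expression category elements and
    return the element with the most votes as the group's expression
    category. elements: e.g. ['happy', 'happy', 'neutral']."""
    votes = Counter(elements)
    return votes.most_common(1)[0][0]
```

With a tie, `Counter.most_common` keeps the first-seen element; a real implementation might instead break ties by, say, the owner's expression.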
Step 208: judge whether the expression category matches a preset expression category.
Step 209: if so, shoot the image to be captured.
Steps 208 to 209 are identical to the corresponding steps of the first embodiment of the automatic photographing method of the present invention and are not repeated here.
Step 210: choose the shot image with the highest sharpness.
To further guarantee the shooting effect, the sharpest image is chosen among the several shot images. Specifically, several images may be shot for each facial expression, and one of them is chosen.
Step 211: save the chosen image.
In the embodiment of the present invention, the chosen shot image is saved.
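The patent does not define its sharpness ("definition") measure; a crude stand-in is the mean squared difference between adjacent pixels, which rises with high-frequency detail (production code would more likely use a variance-of-Laplacian score on the decoded image):

```python
def sharpness(gray):
    """Crude sharpness score for a 2-D list of pixel intensities:
    mean squared difference between horizontally adjacent pixels.
    Flat (blurry) images score low; detailed (sharp) images score high."""
    diffs = [(row[i + 1] - row[i]) ** 2
             for row in gray for i in range(len(row) - 1)]
    return sum(diffs) / len(diffs)

def choose_sharpest(images):
    """Step 210: among several shots, keep the highest-scoring one."""
    return max(images, key=sharpness)
```

The chosen image would then be the one persisted in step 211, with the others discarded.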
In the automatic photographing method provided by the embodiment of the present invention, expression images are stored to the classifier, and the preset expression category is set based on the expression images, the preset expression category including one or more of happy, sad, afraid, surprised, angry, disgusted, disappointed, and neutral; training of the classifier by the user is thus realized, adding interest and a human touch. The sharpest shot image is chosen and saved, so a clearer image can be selected and kept after shooting. The images of the at least two faces are recognized with the classifier, the expression category element corresponding to each face in the image to be captured is obtained, the expression category elements are counted according to the query-by-committee algorithm, and the element with the most votes is determined as the expression category; the shooting effect is thus guaranteed even in a many-person self-portrait, improving the user experience.
The embodiments of the automatic photographing method of the present invention have been described in detail above. The device corresponding to the above method (i.e., the mobile terminal) is further elaborated below. The mobile terminal may be a mobile phone, a tablet computer, an MP3 or MP4 player, etc.
Third embodiment
As shown in Fig. 3, which is a structural block diagram of the first embodiment of the mobile terminal of the present invention, the mobile terminal 300 can realize each step of the first embodiment of the automatic photographing method of the present invention. The mobile terminal 300 includes a collection module 301, an acquisition module 302, a recognition module 303, a judgment module 304, and a shooting module 305.
The collection module 301 is connected with the acquisition module 302 and is used for collecting an image to be captured through a camera.
In the embodiment of the present invention, an automatic-photographing switch may also be set on the terminal, so that the user can turn the automatic photographing function on or off as needed. Specifically, the switch may be turned on before the image to be captured is collected through the camera, to start the automatic photographing function.
The acquisition module 302 is connected with the recognition module 303 and is used for obtaining the facial image from the image to be captured if the image to be captured contains a face.
In the embodiment of the present invention, the acquisition module 302 may use face recognition technology to judge whether a face exists in the image to be captured and, if so, obtain the facial image in the image.
When the image to be captured contains at least two faces, the acquisition module 302 further includes:
an acquiring unit, for obtaining the facial image of the owner in the image to be captured.
Specifically, facial-feature recognition technology may be used to discern whether a facial image belongs to the owner. A setting may be made in the classifier to recognize only the expression of one user (here the classifier may be trained by the user, combined with face recognition technology), or the most prominent face (for example the face at the center of the picture) may be chosen and recognized, so that the expression of the owner or of the most prominent face serves as the basis for deciding whether to photograph automatically. Of course, this function may be opened to the user for configuration, and is off by default.
The recognition module 303 is connected with the judgment module 304 and is used for recognizing the facial image with a classifier to obtain an expression category.
In the embodiment of the present invention, after the facial image is obtained, the recognition module 303 uses facial expression recognition technology to identify the face and obtain its expression category.
When the image to be captured contains at least two faces, the recognition module 303 is further used for:
recognizing the owner's facial image with the classifier to obtain the expression category corresponding to the owner's facial image.
The judgment module 304 is connected with the shooting module 305 and is used for judging whether the expression category matches a preset expression category.
In the embodiment of the present invention, the judgment module 304 uses the classifier to judge whether the expression category matches a preset expression category. The classifier may be preset, and the preset expression category may also be preset.
The shooting module 305 is used for shooting the image to be captured.
In the embodiment of the present invention, when the facial expression matches the preset expression category, the automatic photographing function is triggered and the current image to be captured is shot; one image may be shot, or several.
In the mobile terminal provided by the embodiment of the present invention, an image to be captured is collected through a camera; if the image contains a face, the facial image is obtained; the facial image is recognized with a classifier to obtain an expression category; whether the expression category matches a preset expression category is judged; and if so, the image is shot. Automatic photographing based on expression recognition is thus realized, which on the one hand makes the self-portrait operation more convenient and on the other hand better guarantees the photographing effect, yielding a better user experience.
Fourth embodiment
As shown in Fig. 4, which is a structural block diagram of the second embodiment of the mobile terminal of the present invention, the mobile terminal 400 can realize each step of the second embodiment of the automatic photographing method of the present invention. The mobile terminal 400 includes a storage module 401, a setting module 402, a collection module 403, an acquisition module 404, a recognition module 405, a judgment module 406, a shooting module 407, a selection module 408, and a saving module 409.
The storage module 401 is connected with the setting module 402 and is used for storing expression images to the classifier.
In the embodiment of the present invention, the classifier may be trained by the user; the trained classifier can be used to recognize the facial expressions of many people, or only the expressions of the user. Specifically, expression images may be collected through the camera, or expression images may be recognized in existing pictures, and the expression images are then stored to the classifier.
The setting module 402 is connected with the collection module 403 and is used for setting the preset expression category based on the expression images, the preset expression category including one or more of happy, sad, afraid, surprised, angry, disgusted, disappointed, and neutral.
In the embodiment of the present invention, the setting module 402 sets the preset expression categories to be matched according to the expression images; during shooting, a facial expression that meets one of these preset expression categories triggers the automatic photographing function.
The collection module 403 is connected with the acquisition module 404 and is used for collecting an image to be captured through a camera.
The acquisition module 404 is connected with the recognition module 405 and is used for obtaining the facial image from the image to be captured if the image to be captured contains a face.
The collection module 403 and the acquisition module 404 are identical to the corresponding modules of the first embodiment of the mobile terminal of the present invention and are not repeated here.
The recognition module 405 is connected with the judgment module 406 and is used for recognizing the facial image with a classifier to obtain an expression category.
The recognition module 405 further includes the following units:
a second recognition unit 4051, connected with a counting unit 4052, for recognizing the images of the at least two faces with the classifier to obtain the expression category element corresponding to each face in the image to be captured.
In the embodiment of the present invention, the second recognition unit 4051 extracts features of each facial expression and compares them against the classifier to complete recognition of the facial expressions; according to the recognition results of the classifier, an expression category element is generated for each facial expression, such as happy, sad, afraid, surprised, angry, disgusted, disappointed, or neutral.
The counting unit 4052 is connected with a determining unit 4053 and is used for counting the expression category elements according to a query-by-committee algorithm.
In the embodiment of the present invention, the counting unit 4052 counts the number of votes for each expression category element according to the query-by-committee algorithm.
The determining unit 4053 is used for determining the expression category element with the most votes as the expression category.
In the embodiment of the present invention, since expressions are not easily coordinated in a scene with many people, performing automatic photographing based only on one person's expression result may be chaotic. Therefore, the expression category element with the most votes is determined as the expression category and serves as the basis for automatic photographing.
The judgment module 406 is connected with the shooting module 407 and is used for judging whether the expression category matches a preset expression category.
The shooting module 407 is connected with the selection module 408 and is used for shooting the image to be captured.
The judgment module 406 and the shooting module 407 are identical to the corresponding modules of the first embodiment of the mobile terminal of the present invention and are not repeated here.
The selection module 408 is connected with the saving module 409 and is used for choosing the shot image with the highest sharpness.
To further guarantee the shooting effect, the selection module 408 chooses the sharpest image among the several shot images. Specifically, several images may be shot for each facial expression, and one of them is chosen.
The saving module 409 is used for saving the chosen image.
In the embodiment of the present invention, the chosen shot image is saved.
In the mobile terminal provided by the embodiment of the present invention, expression images are stored to the classifier, and the preset expression category is set based on the expression images, the preset expression category including one or more of happy, sad, afraid, surprised, angry, disgusted, disappointed, and neutral; training of the classifier by the user is thus realized, adding interest and a human touch. The sharpest shot image is chosen and saved, so a clearer image can be selected and kept after shooting. The images of the at least two faces are recognized with the classifier, the expression category element corresponding to each face in the image to be captured is obtained, the expression category elements are counted according to the query-by-committee algorithm, and the element with the most votes is determined as the expression category; the shooting effect is thus guaranteed even in a many-person self-portrait, improving the user experience.
Fifth embodiment
Fig. 5 is a structural block diagram of the third embodiment of the mobile terminal of the present invention. The mobile terminal 800 shown in Fig. 5 includes: at least one processor 801, a memory 802, at least one network interface 804, a user interface 803, and other components 806, the other components 806 including an eyeball-tracking sensor and a front camera. The components of the mobile terminal 800 are coupled together through a bus system 805. It can be understood that the bus system 805 is used to realize connection and communication between these components. Besides a data bus, the bus system 805 also includes a power bus, a control bus, and a status signal bus. For clarity of explanation, however, the various buses are all designated as the bus system 805 in Fig. 5.
The user interface 803 may include a display, a keyboard, or a pointing device (for example a mouse, a trackball, a touch-sensitive pad, or a touch screen).
It can be appreciated that the memory 802 in the embodiment of the present invention may be volatile memory or non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory may be read-only memory (ROM), programmable ROM (PROM), erasable programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM), or flash memory. The volatile memory may be random access memory (RAM), which serves as an external cache. By way of exemplary but non-restrictive illustration, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct Rambus RAM (DRRAM). The memory 802 of the systems and methods described in the embodiments of the present invention is intended to include, without limitation, these and any other suitable types of memory.
In some embodiments, the memory 802 stores the following elements, executable modules or data structures, or a subset or superset of them: an operating system 8021 and application programs 8022.
The operating system 8021 contains various system programs, such as a framework layer, a core library layer, and a driver layer, for realizing various basic services and processing hardware-based tasks. The application programs 8022 contain various applications, such as a media player (Media Player) and a browser (Browser), for realizing various application services. A program realizing the method of the embodiment of the present invention may be contained in the application programs 8022.
In the embodiments of the present invention, by calling a program or instructions stored in the memory 802, specifically a program or instructions stored in the application programs 8022, the processor 801 is configured to: capture an image to be photographed through a camera; if the image to be photographed contains a face, obtain the facial image of the image to be photographed; recognize the facial image according to a classifier to obtain an expression class; determine whether the expression class matches a preset expression class; and, if so, photograph the image to be photographed.
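The control flow just described (detect a face, classify its expression, compare against the preset classes, then trigger the shutter) can be sketched as follows. This is a minimal illustration under assumed interfaces: `detect_faces`, `classify_expression`, and the dict-based frame are hypothetical stand-ins for the camera-side face detector and the trained classifier, not APIs from the source.

```python
# Sketch of the auto-capture decision flow: face -> expression -> preset match.
# The detector and classifier below are stand-ins with assumed interfaces.

PRESET_EXPRESSIONS = {"happiness", "surprise"}  # example preset classes

def detect_faces(frame):
    # Stand-in for a real face detector; returns cropped face regions.
    return frame.get("faces", [])

def classify_expression(face):
    # Stand-in for the trained expression classifier; returns a class label.
    return face["expression"]

def should_capture(frame, presets=PRESET_EXPRESSIONS):
    """Return True when a detected face shows one of the preset expressions."""
    faces = detect_faces(frame)
    if not faces:                 # no face: keep previewing, do not shoot
        return False
    expression = classify_expression(faces[0])
    return expression in presets  # match: trigger the shutter
```

A preview frame is represented here as a plain dict (e.g. `{"faces": [{"expression": "happiness"}]}`); in a real terminal it would be a camera buffer.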
The method disclosed in the above embodiments of the present invention may be applied to, or implemented by, the processor 801. The processor 801 may be an integrated circuit chip with signal processing capability. During implementation, the steps of the above method may be completed by integrated logic circuits in hardware or by instructions in software form in the processor 801. The processor 801 may be a general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application-specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field Programmable Gate Array, FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component, and may implement or execute the methods, steps, and logic block diagrams disclosed in the embodiments of the present invention. The general-purpose processor may be a microprocessor, or any conventional processor. The steps of the method disclosed in the embodiments of the present invention may be directly embodied as being executed by a hardware decoding processor, or executed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory, an electrically erasable programmable memory, or a register. The storage medium is located in the memory 802; the processor 801 reads the information in the memory 802 and completes the steps of the above method in combination with its hardware.
It can be understood that the embodiments described in the embodiments of the present invention may be implemented in hardware, software, firmware, middleware, microcode, or a combination thereof. For a hardware implementation, the processing unit may be implemented in one or more application-specific integrated circuits (Application Specific Integrated Circuits, ASIC), digital signal processors (Digital Signal Processing, DSP), digital signal processing devices (DSP Device, DSPD), programmable logic devices (Programmable Logic Device, PLD), field-programmable gate arrays (Field-Programmable Gate Array, FPGA), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for performing the functions described herein, or a combination thereof. For a software implementation, the techniques described in the embodiments of the present invention may be implemented by modules (such as procedures and functions) that perform the functions described in the embodiments of the present invention. The software code may be stored in a memory and executed by a processor. The memory may be implemented in the processor or external to the processor.
Optionally, the processor 801 is further configured to: obtain the facial image of the device owner in the image to be photographed; and recognize the facial image of the owner according to the classifier, to obtain the expression class corresponding to the facial image of the owner.
Optionally, the processor 801 is further configured to: recognize the images of the at least two faces according to the classifier, to obtain the expression class elements corresponding to the respective faces in the image to be photographed; tally the expression class elements according to a Query-by-Committee algorithm; and determine the expression class element with the most votes as the expression class.
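The tallying step above (each face contributes one expression class element, and the element with the most votes becomes the overall expression class) amounts to a majority vote. A sketch of that tally; the tie-breaking behavior is a choice of this example, not specified by the source:

```python
from collections import Counter

def vote_expression(per_face_labels):
    """Majority vote over the expression labels of all detected faces.

    Returns the label with the most votes; when counts tie, Counter keeps
    insertion order, so the label seen first wins (an assumption of this
    sketch, not specified by the source).
    """
    counts = Counter(per_face_labels)
    label, _ = counts.most_common(1)[0]
    return label
```

For example, if three of four detected faces are classified as "happiness" and one as "neutral", the group expression class is "happiness", and the match against the preset class proceeds with that single label.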
Optionally, the processor 801 is further configured to: store facial expression images to the classifier; and set the preset expression class based on the facial expression images, the preset expression class including one or more of happiness, sadness, fear, surprise, anger, disgust, disappointment, and neutral.
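The enrollment step just described (store expression images, then derive the preset classes from what was stored) might look like the following sketch. The class name, the storage layout, and the validation against the eight listed labels are assumptions of this illustration, not the source's implementation.

```python
class ExpressionPresets:
    """Sketch of preset-class enrollment: store labelled expression images
    and derive the preset expression classes from the stored labels."""

    # The eight expression classes named in the source.
    ALLOWED = {"happiness", "sadness", "fear", "surprise",
               "anger", "disgust", "disappointment", "neutral"}

    def __init__(self):
        self.samples = {}  # label -> list of stored expression images

    def store(self, label, image):
        """Store one expression image under its class label."""
        if label not in self.ALLOWED:
            raise ValueError(f"unknown expression class: {label}")
        self.samples.setdefault(label, []).append(image)

    def preset_classes(self):
        """The preset classes are exactly the labels that were enrolled."""
        return set(self.samples)
```

In this reading, "one or more" preset classes simply means the user enrolls images for whichever subset of the eight labels should trigger the shutter.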
Optionally, the processor 801 is further configured to: select the captured image with the highest definition; and save the selected captured image.
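The source does not say how "definition" is measured; one common proxy is the energy of the image Laplacian (second derivative), since a sharp image has strong local intensity changes while a blurred one does not. A pure-Python sketch under that assumption, with the image as a 2-D list of grayscale values:

```python
def sharpness(image):
    """Score definition as the mean squared Laplacian response.

    Laplacian energy is an assumed proxy for the patent's unspecified
    'definition' measure. `image` is a 2-D list of grayscale values.
    """
    h, w = len(image), len(image[0])
    total = 0.0
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # 4-neighbour discrete Laplacian at (y, x)
            lap = (image[y - 1][x] + image[y + 1][x]
                   + image[y][x - 1] + image[y][x + 1]
                   - 4 * image[y][x])
            total += lap * lap
    return total / ((h - 2) * (w - 2))

def pick_sharpest(images):
    """Select the captured image with the highest definition score."""
    return max(images, key=sharpness)
```

With several frames captured in a burst, `pick_sharpest` keeps the one with the strongest edge response, and the rest can be discarded before saving.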
The mobile terminal 800 can implement each process implemented by the mobile terminal in the foregoing embodiments; to avoid repetition, details are not described here again.
The mobile terminal 800 provided by the embodiment of the present invention captures an image to be photographed through a camera; if the image to be photographed contains a face, obtains the facial image of the image to be photographed; recognizes the facial image according to a classifier to obtain an expression class; determines whether the expression class matches a preset expression class; and, if so, photographs the image to be photographed. Automatic photographing based on expression recognition is thereby achieved, which on the one hand makes self-photographing more convenient and on the other hand better guarantees the photographing effect, resulting in a better user experience.
Those of ordinary skill in the art will appreciate that the units and algorithm steps of the examples described in the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are performed in hardware or software depends on the specific application and the design constraints of the technical solution. Skilled artisans may use different methods to implement the described functions for each particular application, but such implementations should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments; details are not described here again.
In the embodiments provided in this application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the apparatus embodiments described above are merely illustrative: the division of the units is only a division by logical function, and other divisions are possible in actual implementation; multiple units or components may be combined or integrated into another system, or some features may be ignored or not performed. In addition, the mutual coupling, direct coupling, or communication connection shown or discussed may be an indirect coupling or communication connection through some interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components shown as units may or may not be physical units; they may be located in one place, or may be distributed over multiple network elements. Some or all of the units may be selected according to actual needs to achieve the objectives of the solutions of the embodiments.
In addition, the functional units in the embodiments of the present invention may be integrated into one processing unit, or each unit may exist alone physically, or two or more units may be integrated into one unit.
If the functions are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on such an understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, or the like) to perform all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard disk, a ROM, a RAM, a magnetic disk, or an optical disc.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or substitutions within the technical scope disclosed by the present invention, and these should all be covered by the protection scope of the present invention. Therefore, the protection scope of the present invention shall be defined by the scope of the claims.
Claims (10)
1. An automatic photographing method, characterized by comprising:
capturing an image to be photographed through a camera;
if the image to be photographed contains a face, obtaining the facial image of the image to be photographed;
recognizing the facial image according to a classifier to obtain an expression class;
determining whether the expression class matches a preset expression class; and
if so, photographing the image to be photographed.
2. The method according to claim 1, characterized in that the image to be photographed contains at least two faces, and the step of obtaining the facial image of the image to be photographed comprises:
obtaining the facial image of the device owner in the image to be photographed;
and the step of recognizing the facial image according to the classifier to obtain the expression class comprises:
recognizing the facial image of the owner according to the classifier, to obtain the expression class corresponding to the facial image of the owner.
3. The method according to claim 1, characterized in that the image to be photographed contains at least two faces, and the step of recognizing the facial expression according to the classifier to obtain the expression class comprises:
recognizing the images of the at least two faces according to the classifier, to obtain the expression class elements corresponding to the respective faces in the image to be photographed;
tallying the expression class elements according to a Query-by-Committee algorithm; and
determining the expression class element with the most votes as the expression class.
4. The method according to claim 1, characterized in that, before the step of capturing the image to be photographed through the camera, the method further comprises:
storing facial expression images to the classifier; and
setting the preset expression class based on the facial expression images, the preset expression class including one or more of happiness, sadness, fear, surprise, anger, disgust, disappointment, and neutral.
5. The method according to claim 1, characterized in that, after the step of photographing the image to be photographed, the method further comprises:
selecting the captured image with the highest definition; and
saving the selected captured image.
6. A mobile terminal, characterized by comprising:
a capture module, configured to capture an image to be photographed through a camera;
an acquisition module, configured to obtain the facial image of the image to be photographed if the image to be photographed contains a face;
a recognition module, configured to recognize the facial image according to a classifier to obtain an expression class;
a judgment module, configured to determine whether the expression class matches a preset expression class; and
a photographing module, configured to photograph the image to be photographed.
7. The mobile terminal according to claim 6, characterized in that the image to be photographed contains at least two faces, and the acquisition module comprises:
an acquiring unit, configured to obtain the facial image of the device owner in the image to be photographed;
and the recognition module comprises:
a first recognition unit, configured to recognize the facial image of the owner according to the classifier, to obtain the expression class corresponding to the facial image of the owner.
8. The mobile terminal according to claim 6, characterized in that the image to be photographed contains at least two faces, and the recognition module comprises:
a second recognition unit, configured to recognize the images of the at least two faces according to the classifier, to obtain the expression class elements corresponding to the respective faces in the image to be photographed;
a statistics unit, configured to tally the expression class elements according to a Query-by-Committee algorithm; and
a determining unit, configured to determine the expression class element with the most votes as the expression class.
9. The mobile terminal according to claim 6, characterized by further comprising:
a storage module, configured to store facial expression images to the classifier; and
a setting module, configured to set the preset expression class based on the facial expression images, the preset expression class including one or more of happiness, sadness, fear, surprise, anger, disgust, disappointment, and neutral.
10. The mobile terminal according to claim 6, characterized by further comprising:
a selection module, configured to select the captured image with the highest definition; and
a saving module, configured to save the selected captured image.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610932153.9A CN106503658A (en) | 2016-10-31 | 2016-10-31 | automatic photographing method and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106503658A true CN106503658A (en) | 2017-03-15 |
Family
ID=58319625
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610932153.9A Pending CN106503658A (en) | 2016-10-31 | 2016-10-31 | automatic photographing method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106503658A (en) |
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101998038A (en) * | 2009-08-07 | 2011-03-30 | 三星电子株式会社 | Digital photographing apparatus, method of controlling the same |
CN103079034A (en) * | 2013-01-06 | 2013-05-01 | 北京百度网讯科技有限公司 | Perception shooting method and system |
CN103269415A (en) * | 2013-04-16 | 2013-08-28 | 广东欧珀移动通信有限公司 | Automatic photo taking method for face recognition and mobile terminal |
CN104735357A (en) * | 2015-03-27 | 2015-06-24 | 天脉聚源(北京)教育科技有限公司 | Automatic picture shooting method and device |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107592457B (en) * | 2017-09-08 | 2020-05-15 | 维沃移动通信有限公司 | Beautifying method and mobile terminal |
CN107592457A (en) * | 2017-09-08 | 2018-01-16 | 维沃移动通信有限公司 | A kind of U.S. face method and mobile terminal |
CN107786811B (en) * | 2017-10-20 | 2019-10-15 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN107786811A (en) * | 2017-10-20 | 2018-03-09 | 维沃移动通信有限公司 | A kind of photographic method and mobile terminal |
CN108366199A (en) * | 2018-02-01 | 2018-08-03 | 海尔优家智能科技(北京)有限公司 | A kind of image-pickup method, device, equipment and computer readable storage medium |
WO2019218111A1 (en) * | 2018-05-14 | 2019-11-21 | 合刃科技(武汉)有限公司 | Electronic device and photographing control method |
CN108769537A (en) * | 2018-07-25 | 2018-11-06 | 珠海格力电器股份有限公司 | A kind of photographic method, device, terminal and readable storage medium storing program for executing |
CN109831618A (en) * | 2018-12-10 | 2019-05-31 | 平安科技(深圳)有限公司 | Photographic method, computer readable storage medium and terminal device based on expression |
CN110598568A (en) * | 2019-08-19 | 2019-12-20 | 重庆特斯联智慧科技股份有限公司 | Scenic spot intelligent photographing system and method based on facial expression classification |
CN110493503A (en) * | 2019-08-27 | 2019-11-22 | Oppo广东移动通信有限公司 | A kind of image-pickup method, electronic equipment and storage medium |
CN113315904A (en) * | 2020-02-26 | 2021-08-27 | 北京小米移动软件有限公司 | Imaging method, imaging device, and storage medium |
CN113315904B (en) * | 2020-02-26 | 2023-09-26 | 北京小米移动软件有限公司 | Shooting method, shooting device and storage medium |
CN112637487A (en) * | 2020-12-17 | 2021-04-09 | 四川长虹电器股份有限公司 | Television intelligent photographing method based on time stack expression recognition |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106503658A (en) | automatic photographing method and mobile terminal | |
CN105847674B (en) | A kind of preview image processing method and mobile terminal based on mobile terminal | |
CN105933607B (en) | A kind of take pictures effect method of adjustment and the mobile terminal of mobile terminal | |
CN106210526A (en) | A kind of image pickup method and mobile terminal | |
CN108229369A (en) | Image capturing method, device, storage medium and electronic equipment | |
CN107172296A (en) | A kind of image capturing method and mobile terminal | |
CN107123081A (en) | image processing method, device and terminal | |
CN105915782A (en) | Picture obtaining method based on face identification, and mobile terminal | |
CN110139033A (en) | Camera control method and Related product | |
CN110113515B (en) | Photographing control method and related product | |
CN106937054B (en) | A kind of take pictures weakening method and the mobile terminal of mobile terminal | |
CN105827979B (en) | A kind of method and mobile terminal of shooting prompt | |
CN105376496A (en) | Photographing method and device | |
CN109413326A (en) | Camera control method and Related product | |
CN109639973A (en) | Shoot image methods of marking, scoring apparatus, electronic equipment and storage medium | |
CN108182714A (en) | Image processing method and device, storage medium | |
CN106341608A (en) | Emotion based shooting method and mobile terminal | |
CN105528078B (en) | The method and device of controlling electronic devices | |
CN106303234A (en) | Take pictures processing method and processing device | |
CN107222675A (en) | The photographic method and mobile terminal of a kind of mobile terminal | |
CN110245607B (en) | Eyeball tracking method and related product | |
CN112714257B (en) | Display control method, display control device, electronic device, and medium | |
CN206595991U (en) | A kind of double-camera mobile terminal | |
CN108197585A (en) | Recognition algorithms and device | |
CN106557755A (en) | Fingerprint template acquisition methods and device |
Legal Events
Date | Code | Title | Description
---|---|---|---
| C06 | Publication | |
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | |
Application publication date: 20170315