CN106341608A - Emotion based shooting method and mobile terminal - Google Patents
- Publication number
- CN106341608A CN106341608A CN201610967119.5A CN201610967119A CN106341608A CN 106341608 A CN106341608 A CN 106341608A CN 201610967119 A CN201610967119 A CN 201610967119A CN 106341608 A CN106341608 A CN 106341608A
- Authority
- CN
- China
- Prior art keywords
- data
- image
- facial emotions
- face
- mobile phone
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/60—Control of cameras or camera modules
- H04N23/61—Control of cameras or camera modules based on recognised objects
- H04N23/611—Control of cameras or camera modules based on recognised objects where the recognised objects include parts of the human body
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Telephone Function (AREA)
- Processing Or Creating Images (AREA)
Abstract
The invention provides an emotion-based shooting method. The method comprises: receiving a shooting instruction input by a mobile terminal user; detecting the facial emotion of the mobile terminal user; when the detected facial emotion matches an emotion feature in a preset emotion feature library, controlling a camera to shoot; obtaining first data, including the facial emotion, shot by the camera; and synthesizing pre-stored second data that matches the facial emotion with the first data to generate target data. With this method, based on the identified facial emotion, the user can add emotion content that matches his or her current facial emotion to a shot picture or video, forming an interesting picture or video and making shooting on the mobile terminal more entertaining.
Description
Technical field
The present invention relates to the field of communication technology, and more particularly to an emotion-based shooting method and a mobile terminal.
Background technology
With the development of intelligent mobile terminals and their frequent use in daily life, more and more traditional functions are being extended to meet users' varied demands. Mobile terminals can currently satisfy users' photographing needs; for example, photos can be beautified, color-tuned, or annotated with text according to user demand. These operations mostly depend on the user's mood at the time of shooting. As the functions of portable photographing devices increase, the choices offered to the user also grow, so whenever the user selects a shooting mode, he or she must browse numerous function modes to find the one that meets his or her needs. The shooting-mode selection process is therefore cumbersome and time-consuming.
Summary of the invention
Embodiments of the present invention provide an emotion-based shooting method, to solve the problem that the selection process of existing shooting modes is cumbersome and time-consuming.
In a first aspect, an emotion-based shooting method is provided. The method is applied to a mobile terminal and includes:
receiving a shooting instruction input by a mobile terminal user;
detecting the facial emotion of the mobile terminal user;
when the detected facial emotion of the mobile terminal user matches an emotion feature in a preset emotion feature library, controlling a camera to shoot;
obtaining first data, including the facial emotion, shot by the camera; and
extracting pre-stored second data that matches the facial emotion and synthesizing it with the first data to generate target data.
In a second aspect, a mobile terminal is also provided, including:
a first receiving module, configured to receive a shooting instruction input by a mobile terminal user;
a first detection module, configured to detect the facial emotion of the mobile terminal user;
a shooting module, configured to control a camera to shoot when the detected facial emotion of the mobile terminal user matches an emotion feature in a preset emotion feature library;
a first obtaining module, configured to obtain first data, including the facial emotion, shot by the camera; and
a synthesis module, configured to extract pre-stored second data that matches the facial emotion and synthesize it with the first data to generate target data.
Thus, in the embodiments of the present invention, a shooting instruction input by a mobile terminal user is received; the facial emotion of the mobile terminal user is detected; when the detected facial emotion matches an emotion feature in a preset emotion feature library, the camera is controlled to shoot; first data including the facial emotion, shot by the camera, is obtained; and pre-stored second data matching the facial emotion is extracted and synthesized with the first data to generate target data. The user can thus, according to his or her current emotion, quickly and directly add emotion content consistent with that emotion to the photo or video shot with the camera of the mobile terminal, and have it synthesized with the photo or video, forming an interesting shooting work. This increases the fun of the photographing function and the user's personalized experience, and spares the user the cumbersome selection of a shooting mode.
Brief description of the drawings
In order to explain the technical solutions of the embodiments of the present invention more clearly, the accompanying drawings used in the description of the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative labor.
Fig. 1 is a flowchart of an emotion-based shooting method according to a first embodiment of the present invention.
Fig. 2 is a first flowchart of an emotion-based shooting method according to a second embodiment of the present invention.
Fig. 2a is a second flowchart of the emotion-based shooting method according to the second embodiment of the present invention.
Fig. 2b is a third flowchart of the emotion-based shooting method according to the second embodiment of the present invention.
Fig. 3 is a structure diagram of a mobile terminal according to a third embodiment of the present invention.
Fig. 4 is a first structure diagram of a mobile terminal according to a fourth embodiment of the present invention.
Fig. 4a is a second structure diagram of the mobile terminal according to the fourth embodiment of the present invention.
Fig. 4b is a third structure diagram of the mobile terminal according to the fourth embodiment of the present invention.
Fig. 5 is a structure diagram of a mobile terminal according to a fifth embodiment of the present invention.
Fig. 6 is a structure diagram of a mobile terminal according to a sixth embodiment of the present invention.
Specific embodiments
The technical solutions in the embodiments of the present invention are described below clearly and completely with reference to the accompanying drawings. Obviously, the described embodiments are only a part of the embodiments of the present invention, not all of them. Based on the embodiments of the present invention, all other embodiments obtained by those of ordinary skill in the art without creative work fall within the protection scope of the present invention.
First embodiment
An emotion-based shooting method provided by the present invention, as shown in Fig. 1, includes:
Step 101: receiving a shooting instruction input by a mobile terminal user.
In this step, when an instruction indicating that the current user wants to shoot using the camera function of the mobile terminal is received, emotion shooting is entered. The shooting instruction includes a photographing instruction, a video-recording instruction, or a video-call instruction.
Step 102: detecting the facial emotion of the mobile terminal user.
In this step, before shooting, a preview interface is entered and the user's facial emotion is detected, so that the current user's emotion can be judged from the detection result. The facial emotion may include: a happy emotion, an angry emotion, a sad emotion, a crying emotion, a pensive emotion, a sleeping emotion, and so on.
Step 103: controlling the camera to shoot when the detected facial emotion of the mobile terminal user matches an emotion feature in the preset emotion feature library.
In this step, the facial emotion of the currently detected user is determined by searching the emotion feature library for an emotion feature that matches the currently obtained facial emotion. After the current user's facial emotion is determined, the camera is controlled to shoot. Optionally, if matching fails, the system prompts the user that no match exists or that matching was unsuccessful. This embodiment sets the number of matching attempts to three; if matching fails three consecutive times, the emotion-based shooting method is exited and the normal shooting mode is used instead.
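By way of illustration only, the following minimal Python sketch models this match-with-retry flow; detect_facial_emotion() is a hypothetical stand-in, since the embodiment does not prescribe a recognition algorithm, and the three-attempt fallback follows the behavior described above.

```python
# Sketch of step 103's matching with retry; detect_facial_emotion() is a
# hypothetical stand-in for a real recognizer (not specified by the patent).
EMOTION_LIBRARY = {"happy", "angry", "sad", "crying", "pensive", "sleeping"}
MAX_ATTEMPTS = 3  # this embodiment sets the number of matching attempts to three

def detect_facial_emotion(preview_frame) -> str:
    """Hypothetical detector; a real one would analyze the preview frame."""
    return "happy"

def shoot_with_emotion(preview_frames):
    for attempt, frame in enumerate(preview_frames[:MAX_ATTEMPTS], start=1):
        emotion = detect_facial_emotion(frame)
        if emotion in EMOTION_LIBRARY:        # emotion feature matched
            return ("emotion_shot", emotion)  # control the camera to shoot
        print(f"Match unsuccessful (attempt {attempt})")  # prompt the user
    return ("normal_shot", None)  # three failures: fall back to normal mode

print(shoot_with_emotion([object(), object(), object()]))
```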
Step 104: obtaining first data, including the facial emotion, shot by the camera.
In this step, by receiving the user's selection of the camera's shooting function, first data including the facial emotion is obtained. The first data contains facial emotion information. If the user selects the photographing function, the obtained first data is image data; if the user selects the video-recording function, the obtained first data is video data.
Step 105: extracting pre-stored second data that matches the facial emotion and synthesizing it with the first data to generate target data.
In this step, after the first data is obtained in step 104, pre-stored second data matching the facial emotion is extracted and synthesized with the first data to generate target data. The first data is the data obtained by the camera; the second data is pre-stored emotion data that includes the facial emotion. The second data corresponds to the first data: if the first data is image data, the second data is also image data; if the first data is video data, the second data is video data or animation data. The target data is the synthesized data.
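As a non-limiting sketch of step 105's pairing rule (image with image, video with video or animation), the lookup below keys pre-stored second data by first-data type and emotion; the table entries and the composite() placeholder are illustrative assumptions, not values from the patent.

```python
# Illustrative second-data lookup for step 105; entries are placeholders.
SECOND_DATA_STORE = {
    ("image", "happy"): "smiling_face.png",
    ("image", "angry"): "angry_bird.png",
    ("video", "happy"): "happy_animation.gif",
    ("video", "sleeping"): "sleeping_piglets.gif",
}

def synthesize(first_kind: str, emotion: str, first_data: str) -> str:
    second = SECOND_DATA_STORE.get((first_kind, emotion))
    if second is None:
        return first_data  # no matching second data: keep the shot as-is
    return f"composite({first_data}, {second})"  # stand-in for real synthesis

print(synthesize("image", "happy", "photo.jpg"))
```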
In the emotion-based shooting method of this embodiment of the present invention, a shooting instruction input by a mobile terminal user is received; the facial emotion of the mobile terminal user is detected; when the detected facial emotion matches an emotion feature in the preset emotion feature library, the camera is controlled to shoot; first data including the facial emotion, shot by the camera, is obtained; and pre-stored second data matching the facial emotion is extracted and synthesized with the first data to generate target data. The extracted emotion content and the photo or video data shot by the camera are organically combined, generating an interesting photo or video, increasing the fun of the photographing function, and effectively reducing the trouble of the user selecting a shooting mode according to his or her current emotion.
Second embodiment
An emotion-based shooting method of the present invention, as shown in Fig. 2, includes:
Step 201: obtaining at least one face image collected by the camera.
In this step, before emotion shooting is used, at least one face image collected by the camera is obtained, and facial expressions representing various emotions are recorded in advance, providing a basis for judging the user's facial emotion during emotion shooting. The facial emotion includes at least one of a happy emotion, an angry emotion, a sad emotion, a crying emotion, a pensive emotion, and a sleeping emotion. During acquisition, the system pops up a prompt box for the user to select at least one of the happy, angry, sad, crying, pensive, and sleeping emotions and record the corresponding face image. For example, following the system prompt, the user records a face image representing a happy emotion in the happy-emotion prompt box, and a face image representing an angry emotion in the angry-emotion dialog box.
Step 202: extracting the emotion features in the at least one face image.
In this step, the emotion features of the face image obtained in step 201 are extracted. The extracted emotion features include the eye contour feature, the eyebrow feature, and the lip feature under the corresponding emotion. During extraction, the emotion features of each emotion are compared with preset thresholds; if any feature fails to reach its threshold, the system prompts the user to re-record the image and displays the reason for the failure, thereby improving the accuracy of the subsequent facial-emotion judgment. For example, when extracting the emotion features from a face image representing a happy emotion, if the area of the open mouth does not reach the preset area threshold, the system prompts the user to re-record the happy-emotion face image.
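A minimal sketch of this threshold check follows, assuming hypothetical numeric measurements for the named features; the patent fixes neither the measurement method nor the threshold values.

```python
# Sketch of step 202's threshold validation; names and values are assumptions.
THRESHOLDS = {"mouth_open_area": 0.15, "eye_contour": 0.05, "eyebrow": 0.05}

def failed_features(features: dict) -> list:
    """Return the names of emotion features that miss their preset thresholds."""
    return [name for name, minimum in THRESHOLDS.items()
            if features.get(name, 0.0) < minimum]

measured = {"mouth_open_area": 0.08, "eye_contour": 0.20, "eyebrow": 0.10}
weak = failed_features(measured)
if weak:
    # System prompt: re-record the face image and show the reason for failure.
    print("Please re-record the face image; features below threshold:", weak)
```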
Step 203: for each face image in the at least one face image, establishing mapping relations between the face image and the emotion features.
In this step, mapping relations are established between each face image and the emotion features extracted from it; that is, a face image corresponds one-to-one with its eye contour feature, eyebrow feature, and lip feature. In use, the system determines the emotion of a face image by looking up its eye contour feature, eyebrow feature, and lip feature.
Step 204: establishing an emotion feature library based on the at least one face image, the emotion features, and the mapping relations.
In this step, the emotion feature library includes three elements: the at least one face image, the emotion features of each face image, and the mapping relations between them. In use, the system searches the emotion feature library for a matching emotion feature and then determines the facial emotion of the face image through the mapping relations.
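The sketch below models the three-element library (face images, emotion features, and their mapping relations) together with a nearest-feature lookup; the EmotionFeatures fields follow the eye-contour, eyebrow, and lip features named in step 202, while the numeric encoding, distance measure, and tolerance are assumptions of this illustration.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class EmotionFeatures:
    eye_contour: float  # features named in step 202; numeric encoding assumed
    eyebrow: float
    lip: float

class EmotionLibrary:
    """Three elements: face images, emotion features, and mapping relations."""
    def __init__(self):
        self.images = {}    # emotion name -> recorded face image
        self.features = {}  # emotion name -> EmotionFeatures

    def enroll(self, emotion: str, face_image, features: EmotionFeatures):
        self.images[emotion] = face_image
        self.features[emotion] = features  # mapping: image <-> features

    def match(self, probe: EmotionFeatures, tol: float = 0.1):
        """Return the enrolled emotion closest to the probe, within tol."""
        best, best_dist = None, tol
        for emotion, f in self.features.items():
            dist = max(abs(f.eye_contour - probe.eye_contour),
                       abs(f.eyebrow - probe.eyebrow),
                       abs(f.lip - probe.lip))
            if dist < best_dist:
                best, best_dist = emotion, dist
        return best

lib = EmotionLibrary()
lib.enroll("happy", "happy_face.jpg", EmotionFeatures(0.8, 0.6, 0.9))
print(lib.match(EmotionFeatures(0.82, 0.58, 0.88)))  # -> happy
```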
Step 205: receiving a shooting instruction input by the mobile terminal user.
In this step, when an instruction indicating that the current user wants to shoot using the camera function of the mobile terminal is received, emotion shooting is entered.
Step 206: detecting the facial emotion of the mobile terminal user.
In this step, before shooting, a preview interface is entered and the user's facial emotion is detected, so that the current user's emotion can be judged from the detection result. The emotion may include: a happy emotion, an angry emotion, a sad emotion, a crying emotion, a pensive emotion, a sleeping emotion, and so on.
Step 207: controlling the camera to shoot when the detected facial emotion of the mobile terminal user matches an emotion feature in the preset emotion feature library.
In this step, the facial emotion of the currently detected user is determined by searching the emotion feature library for an emotion feature that matches the currently obtained facial emotion. After the current user's facial emotion is determined, the camera is controlled to shoot. Optionally, if matching fails, the system prompts the user that no match exists or that matching was unsuccessful. This embodiment sets the number of matching attempts to three; if matching fails three consecutive times, the emotion-based shooting method is exited and the normal shooting mode is used instead.
Step 208: obtaining first data, including the facial emotion, shot by the camera.
In this step, by receiving the user's selection of the camera's shooting function, first data including the facial emotion is obtained. The first data contains facial emotion information. When the user's selection is the photographing function, the obtained first data is image data; when the user's selection is the video-recording function, the obtained first data is video data.
Step 209: extracting pre-stored second data that matches the facial emotion and synthesizing it with the first data to generate target data.
In this step, after the first data is obtained in step 208, pre-stored second data matching the facial emotion is extracted and synthesized with the first data to generate target data. The second data corresponds to the first data: if the first data is image data, the second data is also image data; if the first data is video data, the second data is video data or animation data. The target data is the synthesized data.
Optionally, when the shooting instruction is a photographing instruction, as shown in Fig. 2a:
Step 208 includes:
Step 2081.1: when the shooting instruction is a photographing instruction, obtaining a first image, including the facial emotion, captured by the camera of the mobile terminal user's face;
wherein the first image is the first data.
In this step, when the user performs a photographing operation with the camera, the data obtained is image data. This image data includes the user's facial emotion information.
Step 209 includes:
Step 2091.1: extracting a pre-stored second image that matches the facial emotion in the first image.
In this step, according to the facial emotion of the obtained first image, a second image matching that facial emotion is extracted. For example, when the facial emotion of the obtained first image is a happy emotion, the correspondingly extracted second image embodies content expressing happiness.
Step 2091.2: performing image synthesis on the first image and the second image to generate the target data;
wherein the second image is the second data.
In this step, the second image is the second data. The final target image is generated by performing image synthesis on the first image and the second image. Specifically, the first image may be set as the current (foreground) display and the second image as the background display. For example: for a happy emotion, the first image expressing happiness is displayed in the foreground while the watermark background shows a second image such as a smiling face or an idol picture; for an angry emotion, the first image expressing anger is displayed while the watermark background shows a second image such as an angry bird or an angry expression; for a sleeping emotion, the first image expressing sound sleep is displayed while the watermark background shows a second image such as a sleeping baby or piglets.
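As one possible rendering of this foreground-plus-watermark-background synthesis, the sketch below uses the Pillow imaging library; Pillow, the blend factor, and the file names are assumptions of this illustration, since the patent does not name an image-synthesis facility.

```python
from PIL import Image

def compose(first_path: str, second_path: str, out_path: str, alpha=0.3):
    """Overlay the emotion-matched second image as a translucent background
    watermark behind the shot (first image), per step 2091.2."""
    first = Image.open(first_path).convert("RGBA")
    second = Image.open(second_path).convert("RGBA").resize(first.size)
    target = Image.blend(first, second, alpha)  # the shot stays dominant
    target.convert("RGB").save(out_path)

# Example usage (placeholder file names):
# compose("happy_shot.jpg", "smiling_face.png", "target.jpg")
```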
Optionally, when the shooting instruction is a video-recording or video-call instruction, as shown in Fig. 2b:
Step 208 includes:
Step 2082.1: when the shooting instruction is a video-recording or video-call instruction, obtaining video data, including the facial emotion, captured by the camera of the mobile terminal user's face.
In this step, when it is detected that the shooting operation selected by the user is video recording or a video call, the data obtained is video data. This video data includes the user's facial emotion information.
Step 2082.2: separating the video data into m frames of images and storing them in a first storage unit;
wherein the m frames of images are the first data.
In this step, the obtained video data is separated into individual frame images, which are stored to facilitate synthesis.
Step 209 includes:
Step 2092.1: extracting pre-stored animation data that matches the facial emotion in the video data.
In this step, according to the obtained video data, animation data matching the facial emotion in the video data is extracted. For example, when the facial emotion of the obtained video data is a happy emotion, the correspondingly extracted animation data embodies animated content expressing happiness.
Step 2092.2: separating the animation data into n frames of images and storing them in a second storage unit;
wherein the n frames of image data are the second data.
In this step, the obtained animation data is separated into individual frame images, which are stored to facilitate synthesis.
Step 2092.3: synthesizing, in one-to-one correspondence, each frame image in the first storage unit with each frame image in the second storage unit, and storing each synthesized frame image in a third storage unit.
In this step, each of the m frame images of the obtained video data stored in the first storage unit is overlaid and synthesized with each of the n frame images of the extracted animation data stored in the second storage unit, and the resulting new images are stored in the third storage unit.
Step 2092.4: closing the audio input source.
In this step, by closing the audio input source, the audio of the video data and the animation data is turned off and replaced with the audio data selected by the user.
Step 2092.5: extracting pre-stored audio data that matches the facial emotion and synthesizing it with the image data in the third storage unit to generate the target data;
wherein the n frames of images are the second data.
In this step, emotion audio data matching the embodied emotion is extracted from the audio data, and this emotion audio is synthesized with the image data in the third storage unit to generate the final video data.
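The self-contained sketch below walks steps 2082.2 through 2092.5 end to end: separating video and animation into frames, compositing frame pairs into a third storage unit, and replacing the closed audio input with emotion-matched audio. Frames are stand-in strings, and looping the shorter animation is an assumption; a real implementation would decode media with a library such as OpenCV or ffmpeg.

```python
def separate(data: str, count: int) -> list:
    """Separate media into individual frames (a storage unit as a list)."""
    return [f"{data}_frame{i}" for i in range(count)]

def compose_frames(video_frames: list, anim_frames: list) -> list:
    third_unit = []
    for i, v in enumerate(video_frames):
        a = anim_frames[i % len(anim_frames)]  # loop shorter animation (assumed)
        third_unit.append(f"overlay({v}, {a})")  # step 2092.3: pairwise overlay
    return third_unit

m_frames = separate("video", 5)      # step 2082.2: first storage unit, m = 5
n_frames = separate("animation", 2)  # step 2092.2: second storage unit, n = 2
composited = compose_frames(m_frames, n_frames)  # third storage unit

# Steps 2092.4-2092.5: audio input closed, emotion-matched audio attached.
target_data = {"frames": composited, "audio": "happy_theme.mp3"}
print(target_data)
```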
Step 210: receiving a sharing instruction input by the mobile terminal user.
In this step, after the target data is generated, the user can choose to share it. By receiving the user's sharing instruction, the target data is shared to the corresponding platform.
Step 211: displaying at least one sharing path.
In this step, after the user's sharing instruction is received, the system displays the sharing paths available for the user to select. For example, the displayed paths may include sharing to platforms such as QQ, WeChat, or Weibo, or sending to other people.
Step 212: detecting the mobile terminal user's selection operation on the at least one sharing path.
In this step, by detecting the user's selection, the sharing path chosen by the user is determined.
Step 213: when the selection operation is detected, sharing the target data via the sharing path corresponding to the selection operation.
In this step, by receiving the sharing path selected by the user, the target data is shared via the path determined by that selection.
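A small sketch of steps 210 through 213 follows; the platform names echo those mentioned above, and the dispatch is a print stand-in for real platform APIs, which the patent does not specify.

```python
SHARE_PATHS = ["QQ", "WeChat", "Weibo", "Send to contact"]  # step 211

def share(target_data: str, selected_index: int):
    """Step 213: share via the path chosen by the detected selection."""
    path = SHARE_PATHS[selected_index]
    print(f"Sharing {target_data!r} via {path}")  # stand-in for platform API

for i, path in enumerate(SHARE_PATHS):  # display the sharing paths
    print(i, path)
share("target.jpg", 1)  # user selected WeChat (step 212 detection assumed)
```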
In the emotion-based shooting method of this embodiment of the present invention, face images representing various emotions are recorded in advance to judge the facial emotion of the current user when emotion shooting is performed with the camera of the mobile terminal. According to the determined facial emotion, emotion content consistent with that facial emotion is further extracted and synthesized with the data content shot by the user with the camera to generate target data. The extracted emotion content and the data content shot by the camera are organically combined, generating an interesting photo or video, adding interesting elements to the photo or video, and increasing the fun of the photographing function; it also effectively reduces the user's troublesome operation of selecting a shooting mode according to his or her current emotion. Furthermore, the user can share the generated interesting photo or video to share the fun with others.
Third embodiment
Referring to Fig. 3, which is a structure diagram of a mobile terminal according to an embodiment of the present invention, the mobile terminal can realize the details of the emotion-based shooting method in the first and second embodiments and achieve the same effect. The mobile terminal 300 includes:
a first receiving module 301, configured to receive a shooting instruction input by a mobile terminal user;
a first detection module 302, configured to detect the facial emotion of the mobile terminal user;
a shooting module 303, configured to control the camera to shoot when the detected facial emotion of the mobile terminal user matches an emotion feature in the preset emotion feature library;
a first obtaining module 304, configured to obtain first data, including the facial emotion, shot by the camera; and
a synthesis module 305, configured to extract pre-stored second data that matches the facial emotion and synthesize it with the first data to generate target data.
Through the above modules, the mobile terminal of this embodiment of the present invention receives a shooting instruction input by a mobile terminal user; detects the facial emotion of the mobile terminal user; controls the camera to shoot when the detected facial emotion matches an emotion feature in the preset emotion feature library; obtains first data including the facial emotion, shot by the camera; and extracts pre-stored second data matching the facial emotion and synthesizes it with the first data to generate target data. The extracted second data and the first data shot by the camera are organically combined, generating an interesting photo or video, increasing the fun of the photographing function, and effectively reducing the user's troublesome operation of selecting a shooting mode according to his or her current emotion.
Fourth embodiment
Referring to Fig. 4, which is a structure diagram of a mobile terminal according to an embodiment of the present invention, the mobile terminal can realize the details of the first to third embodiments and achieve the same effect. The mobile terminal 400 includes:
a second obtaining module 401, configured to obtain at least one face image collected by the camera;
an extraction module 402, configured to extract the emotion features in the at least one face image;
a first establishing module 403, configured to establish, for each face image in the at least one face image, mapping relations between the face image and the emotion features;
a second establishing module 404, configured to establish an emotion feature library based on the at least one face image, the emotion features, and the mapping relations;
a first receiving module 405, configured to receive a shooting instruction input by a mobile terminal user;
a first detection module 406, configured to detect the facial emotion of the mobile terminal user;
a shooting module 407, configured to control the camera to shoot when the detected facial emotion of the mobile terminal user matches an emotion feature in the preset emotion feature library;
a first obtaining module 408, configured to obtain first data, including the facial emotion, shot by the camera; and
a synthesis module 409, configured to extract pre-stored second data that matches the facial emotion and synthesize it with the first data to generate target data.
Optionally, when the shooting instruction is a photographing instruction, as shown in Fig. 4a:
The first obtaining module 408 includes:
a first obtaining unit 4081.1, configured to obtain a first image, including the facial emotion, captured by the camera of the mobile terminal user's face;
wherein the first image is the first data.
The synthesis module 409 includes:
a first extraction unit 4091.1, configured to extract a pre-stored second image that matches the facial emotion in the first image; and
a first synthesis unit 4091.2, configured to perform image synthesis on the first image and the second image to generate the target data;
wherein the second image is the second data.
Optionally, when the shooting instruction is a video-recording or video-call instruction, as shown in Fig. 4b:
The first obtaining module 408 includes:
a second obtaining unit 4082.1, configured to obtain video data, including the facial emotion, captured by the camera of the mobile terminal user's face; and
a first separation unit 4082.2, configured to separate the video data into m frames of images and store them in a first storage unit;
wherein the m frames of images are the first data.
The synthesis module 409 includes:
a second extraction unit 4092.1, configured to extract pre-stored animation data that matches the facial emotion in the video data;
a second separation unit 4092.2, configured to separate the animation data into n frames of images and store them in a second storage unit;
a second synthesis unit 4092.3, configured to synthesize, in one-to-one correspondence, each frame image in the first storage unit with each frame image in the second storage unit, and store each synthesized frame image in a third storage unit;
a switch unit 4092.4, configured to close the audio input source; and
a third synthesis unit 4092.5, configured to extract pre-stored audio data that matches the facial emotion and synthesize it with the image data in the third storage unit to generate the target data;
wherein the n frames of images are the second data.
The mobile terminal 400 also includes:
a second receiving module 410, configured to receive a sharing instruction input by the mobile terminal user;
a display module 411, configured to display at least one sharing path;
a second detection module 412, configured to detect the mobile terminal user's selection operation on the at least one sharing path; and
a sharing module 413, configured to share the target data, when the selection operation is detected, via the sharing path corresponding to the selection operation.
Through the above modules, the mobile terminal of this embodiment of the present invention records in advance face images representing various emotions to judge the facial emotion of the current user when emotion shooting is performed with the camera of the mobile terminal. According to the determined facial emotion, emotion content consistent with that facial emotion is further extracted and synthesized with the data content shot by the user with the camera to generate target data. The extracted emotion content and the data content shot by the camera are organically combined, generating an interesting photo or video, adding interesting elements to the photo or video, and increasing the fun of the photographing function; it also effectively reduces the user's troublesome operation of selecting a shooting mode according to his or her current emotion. Furthermore, the user can share the generated interesting photo or video to share the fun with others.
Fifth embodiment
Referring to Fig. 5, which is a structure diagram of a mobile terminal according to an embodiment of the present invention, the mobile terminal 500 includes: at least one processor 501, a memory 502, at least one network interface 504, and a user interface 503. The components of the mobile terminal 500 are coupled together by a bus system 505. It can be understood that the bus system 505 is used to realize connection and communication between these components. In addition to a data bus, the bus system 505 also includes a power bus, a control bus, and a status signal bus. However, for clarity of explanation, the various buses are all designated as the bus system 505 in Fig. 5.
The user interface 503 may include a display, a keyboard, or a pointing device (for example, a mouse, a trackball, a touch-sensitive pad, or a touch screen, etc.).
It can be appreciated that the memory 502 in this embodiment of the present invention can be a volatile memory or a non-volatile memory, or may include both volatile and non-volatile memory. The non-volatile memory can be a read-only memory (ROM), a programmable ROM (PROM), an erasable programmable ROM (EPROM), an electrically erasable programmable ROM (EEPROM), or a flash memory. The volatile memory can be a random access memory (RAM), which is used as an external high-speed cache. By way of exemplary but not restrictive illustration, many forms of RAM are available, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), synchlink DRAM (SLDRAM), and direct rambus RAM (DRRAM). The memory 502 of the systems and methods described in the embodiments of the present invention is intended to include, without being limited to, these and any other suitable types of memory.
In some embodiments, the memory 502 stores the following elements: executable modules or data structures, or a subset or superset of them: an operating system 5021 and application programs 5022.
The operating system 5021 contains various system programs, such as a framework layer, a core library layer, a driver layer, etc., used to realize various basic services and process hardware-based tasks. The application programs 5022 contain various application programs, such as a media player, a browser, etc., used to realize various application services. A program implementing the method of an embodiment of the present invention may be included in the application programs 5022.
In this embodiment of the present invention, by calling a program or instructions stored in the memory 502, specifically a program or instructions stored in the application programs 5022, the processor 501 is configured to: receive a shooting instruction input by a mobile terminal user; detect the facial emotion of the mobile terminal user; control the camera to shoot when the detected facial emotion matches an emotion feature in the preset emotion feature library; obtain first data, including the facial emotion, shot by the camera; and extract pre-stored second data that matches the facial emotion and synthesize it with the first data to generate target data.
The methods disclosed in the above embodiments of the present invention can be applied in the processor 501 or realized by the processor 501. The processor 501 may be an integrated circuit chip with signal processing capability. In implementation, each step of the above methods can be completed by an integrated logic circuit of hardware in the processor 501 or by instructions in the form of software. The processor 501 can be a general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components, and can realize or execute the methods, steps, and block diagrams disclosed in the embodiments of the present invention. The general-purpose processor can be a microprocessor, or any conventional processor. The steps of the methods disclosed in the embodiments of the present invention can be directly embodied as being executed and completed by a hardware decoding processor, or by a combination of hardware and software modules in a decoding processor. The software module can be located in a storage medium mature in this field, such as a random access memory, a flash memory, a read-only memory, a programmable read-only memory or an electrically erasable programmable memory, or a register. The storage medium is located in the memory 502; the processor 501 reads the information in the memory 502 and completes the steps of the above methods in combination with its hardware.
It can be understood that the embodiments described herein can be realized with hardware, software, firmware, middleware, microcode, or a combination thereof. For hardware implementation, the processing unit can be implemented in one or more application-specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field-programmable gate arrays (FPGAs), general-purpose processors, controllers, microcontrollers, microprocessors, other electronic units for executing the functions described herein, or a combination thereof.
For software implementation, the techniques described in the embodiments of the present invention can be realized by modules (such as processes, functions, etc.) that execute the functions described in the embodiments of the present invention. Software code can be stored in a memory and executed by a processor. The memory can be implemented in the processor or outside the processor.
When the shooting instruction is a photographing instruction:
Optionally, the processor 501 is configured to: obtain a first image, including the facial emotion, captured by the camera of the mobile terminal user's face; wherein the first image is the first data.
Optionally, the processor 501 is configured to: extract a pre-stored second image that matches the facial emotion in the first image; and perform image synthesis on the first image and the second image to generate the target data; wherein the second image is the second data.
When the shooting instruction is a video-recording or video-call instruction:
Optionally, the processor 501 is configured to: obtain video data, including the facial emotion, captured by the camera of the mobile terminal user's face; and separate the video data into m frames of images and store them in a first storage unit; wherein the m frames of images are the first data.
Optionally, the processor 501 is configured to: extract pre-stored animation data that matches the facial emotion in the video data; separate the animation data into n frames of images and store them in a second storage unit; synthesize, in one-to-one correspondence, each frame image in the first storage unit with each frame image in the second storage unit, and store each synthesized frame image in a third storage unit; close the audio input source; and extract pre-stored audio data that matches the facial emotion and synthesize it with the image data in the third storage unit to generate the target data; wherein the n frames of images are the second data.
Optionally, the processor 501 is configured to: receive a sharing instruction input by the mobile terminal user; display at least one sharing path; detect the mobile terminal user's selection operation on the at least one sharing path; and when the selection operation is detected, share the target data via the sharing path corresponding to the selection operation.
Optionally, the processor 501 is configured to: obtain at least one face image collected by the camera; extract the emotion features in the at least one face image; establish, for each face image in the at least one face image, mapping relations between the face image and the emotion features; and establish an emotion feature library based on the at least one face image, the emotion features, and the mapping relations.
The mobile terminal 500 can realize each process realized by the mobile terminal in the foregoing embodiments; to avoid repetition, the details are not described again here.
In the mobile terminal provided by this embodiment of the present invention, through the above modules, face images representing various emotions are recorded in advance to judge the facial emotion of the current user when emotion shooting is performed with the camera of the mobile terminal. According to the determined facial emotion, emotion content consistent with that facial emotion is further extracted and synthesized with the data content shot by the user with the camera to generate target data. The extracted emotion content and the data content shot by the camera are organically combined, generating an interesting photo or video, adding interesting elements to the photo or video, increasing the fun of the photographing function, and effectively reducing the user's troublesome operation of selecting a shooting mode according to his or her current emotion. Furthermore, the user can share the generated interesting photo or video to share the fun with others.
Sixth embodiment
Referring to Fig. 6, which is a structure diagram of a mobile terminal according to an embodiment of the present invention, the mobile terminal 600 in Fig. 6 can specifically be a mobile phone, a tablet computer, a personal digital assistant (PDA), a vehicle-mounted computer, or the like.
The mobile terminal 600 in Fig. 6 includes a radio frequency (RF) circuit 610, a memory 620, an input unit 630, a display unit 640, a processor 660, an audio circuit 670, a WiFi (wireless fidelity) module 680, and a power supply 690.
The input unit 630 can be used to receive numeric or character information input by the user, and to generate signal input related to user settings and function control of the mobile terminal 600. Specifically, in this embodiment of the present invention, the input unit 630 can include a touch panel 631. The touch panel 631, also referred to as a touch screen, can collect the user's touch operations on or near it (such as operations performed on the touch panel 631 by the user with a finger, a stylus, or any other suitable object or accessory), and drive the corresponding connecting device according to a preset program. Optionally, the touch panel 631 may include two parts: a touch detection device and a touch controller. The touch detection device detects the touch orientation of the user and the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, sends them to the processor 660, and can receive and execute commands sent by the processor 660. In addition, the touch panel 631 can be realized in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 631, the input unit 630 can also include other input devices 632, which can include but are not limited to one or more of a physical keyboard, function keys (such as volume control buttons, a switch button, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 640 can be used to display information input by the user or information provided to the user, as well as the various menu interfaces of the mobile terminal 600. The display unit 640 may include a display panel 641; optionally, the display panel 641 can be configured in the form of an LCD, an organic light-emitting diode (OLED), or the like.
It should be noted that the touch panel 631 can cover the display panel 641 to form a touch display screen. When the touch display screen detects a touch operation on or near it, the operation is transmitted to the processor 660 to determine the type of the touch event, and the processor 660 then provides a corresponding visual output on the touch display screen according to the type of the touch event.
The touch display screen includes an application interface display area and a common control display area. The arrangement of the application interface display area and the common control display area is not limited; they can be arranged one above the other, side by side, or in any other arrangement that distinguishes the two display areas. The application interface display area can be used to display the interface of an application program. Each interface can contain interface elements such as the icon of at least one application program and/or a widget desktop control. The application interface display area can also be an empty interface containing no content. The common control display area is used to display controls with a high utilization rate, for example, application icons such as a settings button, an interface number, a scroll bar, a phone book icon, and the like.
The processor 660 is the control center of the mobile terminal 600. It uses various interfaces and lines to connect the various parts of the whole mobile phone, and executes the various functions of the mobile terminal 600 and processes data by running or executing software programs and/or modules stored in a first memory 621 and calling data stored in a second memory 622, thereby monitoring the mobile terminal 600 as a whole. Optionally, the processor 660 may include one or more processing units.
In this embodiment of the present invention, by calling software programs and/or modules stored in the first memory 621 and/or data in the second memory 622, the processor 660 is configured to: receive a shooting instruction input by a mobile terminal user; detect the facial emotion of the mobile terminal user; control the camera to shoot when the detected facial emotion matches an emotion feature in the preset emotion feature library; obtain first data, including the facial emotion, shot by the camera; and extract pre-stored second data that matches the facial emotion and synthesize it with the first data to generate target data.
When the shooting instruction is a photographing instruction:
Optionally, the processor 660 is configured to: obtain a first image, including the facial emotion, captured by the camera of the mobile terminal user's face; wherein the first image is the first data.
Optionally, the processor 660 is configured to: extract a pre-stored second image that matches the facial emotion in the first image; and perform image synthesis on the first image and the second image to generate the target data; wherein the second image is the second data.
When the shooting instruction is a video-recording or video-call instruction:
Optionally, the processor 660 is configured to: obtain video data, including the facial emotion, captured by the camera of the mobile terminal user's face; and separate the video data into m frames of images and store them in a first storage unit; wherein the m frames of images are the first data.
Optionally, the processor 660 is configured to: extract pre-stored animation data that matches the facial emotion in the video data; separate the animation data into n frames of images and store them in a second storage unit; synthesize, in one-to-one correspondence, each frame image in the first storage unit with each frame image in the second storage unit, and store each synthesized frame image in a third storage unit; close the audio input source; and extract pre-stored audio data that matches the facial emotion and synthesize it with the image data in the third storage unit to generate the target data; wherein the n frames of images are the second data.
Optionally, the processor 660 is configured to: receive a sharing instruction input by the mobile terminal user; display at least one sharing path; detect the mobile terminal user's selection operation on the at least one sharing path; and when the selection operation is detected, share the target data via the sharing path corresponding to the selection operation.
It can be seen that the mobile terminal 600 can realize each process realized by the mobile terminal in the foregoing embodiments; to avoid repetition, the details are not described again here.
Those of ordinary skill in the art can appreciate that the units and algorithm steps of the examples described in connection with the embodiments disclosed herein can be implemented in electronic hardware, or in a combination of computer software and electronic hardware. Whether these functions are executed in hardware or software depends on the specific application and design constraints of the technical solution. Skilled artisans may use different methods to realize the described functions for each specific application, but such realization should not be considered beyond the scope of the present invention.
Those skilled in the art can clearly understand that, for convenience and brevity of description, for the specific working processes of the systems, devices, and units described above, reference may be made to the corresponding processes in the foregoing method embodiments, which are not described again here.
It should be understood that the devices and methods disclosed in the embodiments provided in this application can be realized in other ways. For example, the device embodiments described above are only schematic. For example, the division of the units is only a division of logical functions; in actual implementation there can be other division methods. For example, multiple units or components can be combined or integrated into another system, or some features can be ignored or not executed. In addition, the mutual coupling or direct coupling or communication connection shown or discussed can be an indirect coupling or communication connection of devices or units through some interfaces, and can be electrical, mechanical, or in other forms.
The units described as separate components may or may not be physically separate, and the components displayed as units may or may not be physical units; they can be located in one place, or they can be distributed over multiple network elements. Some or all of the units can be selected according to actual needs to realize the purpose of the solution of this embodiment.
In addition, the functional units in the embodiments of the present invention can be integrated into one processing unit, or each unit can physically exist separately, or two or more units can be integrated into one unit.
If the functions are realized in the form of software functional units and sold or used as independent products, they can be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present invention, in essence, or the part contributing to the prior art, or a part of the technical solution, can be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions to make a computer device (which can be a personal computer, a server, a network device, etc.) execute all or part of the steps of the methods described in the embodiments of the present invention. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a portable hard drive, a ROM, a RAM, a magnetic disk, or an optical disk.
In the mobile terminal provided by this embodiment of the present invention, through the above modules, face images representing various emotions are recorded in advance to judge the facial emotion of the current user when emotion shooting is performed with the camera of the mobile terminal. According to the determined facial emotion, emotion content consistent with that facial emotion is further extracted and synthesized with the data content shot by the user with the camera to generate target data. The extracted emotion content and the data content shot by the camera are organically combined, generating an interesting photo or video, adding interesting elements to the photo or video, increasing the fun of the photographing function, and effectively reducing the user's troublesome operation of selecting a shooting mode according to his or her current emotion. Furthermore, the user can share the generated interesting photo or video to share the fun with others.
The above are only specific embodiments of the present invention, but the protection scope of the present invention is not limited thereto. Any person familiar with the technical field can readily conceive of changes or replacements within the technical scope disclosed by the present invention, and all such changes or replacements should be covered within the protection scope of the present invention. Therefore, the protection scope of the present invention should be defined by the scope of the claims.
Claims (14)
1. An emotion-based shooting method, applied to a mobile terminal having a camera, characterized in that the method includes:
receiving a shooting instruction input by a mobile terminal user;
detecting the facial emotion of the mobile terminal user;
when the detected facial emotion of the mobile terminal user matches an emotion feature in a preset emotion feature library, controlling the camera to shoot;
obtaining first data, including the facial emotion, shot by the camera;
extracting pre-stored second data that matches the facial emotion and synthesizing it with the first data to generate target data.
2. The method according to claim 1, characterized in that when the shooting instruction is a photographing instruction, the step of obtaining first data, including the facial emotion, shot by the camera includes:
obtaining a first image, including the facial emotion, captured by the camera of the mobile terminal user's face;
wherein the first image is the first data.
3. The method according to claim 1, characterized in that the step of extracting pre-stored second data that matches the facial emotion and synthesizing it with the first data to generate target data includes:
extracting a pre-stored second image that matches the facial emotion in the first image;
performing image synthesis on the first image and the second image to generate the target data;
wherein the second image is the second data.
4. The method according to claim 1, characterized in that when the shooting instruction is a video-recording or video-call instruction, the step of obtaining first data, including the facial emotion, shot by the camera includes:
obtaining video data, including the facial emotion, captured by the camera of the mobile terminal user's face;
separating the video data into m frames of images and storing them in a first storage unit;
wherein the m frames of images are the first data.
5. The method according to claim 1, characterised in that the step of extracting the pre-stored second data that matches the facial emotion, synthesising it with the first data, and generating the target data comprises:
extracting pre-stored animation data that matches the facial emotion in the video data;
separating the animation data into n frames of images and storing them in a second storage unit;
synthesising each frame of image in the first storage unit with the corresponding frame in the second storage unit, and storing each synthesised frame in a third storage unit;
closing an audio input source;
extracting pre-stored audio data that matches the facial emotion and synthesising it with the image data in the third storage unit to generate the target data;
wherein the n frames of images are the second data.
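Claim 5's frame-wise synthesis can be sketched as pairing each captured frame with an animation frame and blending them. Blending with cv2.addWeighted and cycling the shorter animation over the longer video are illustrative choices, not requirements stated in the claim.

```python
import itertools
import cv2

def synthesize_frames(first_storage_unit, second_storage_unit):
    third_storage_unit = []
    # Cycle the n animation frames over the m video frames so every video
    # frame receives a correspondingly paired animation frame.
    animation = itertools.cycle(second_storage_unit)
    for video_frame, anim_frame in zip(first_storage_unit, animation):
        anim_frame = cv2.resize(
            anim_frame, (video_frame.shape[1], video_frame.shape[0]))
        merged = cv2.addWeighted(video_frame, 0.7, anim_frame, 0.3, 0)
        third_storage_unit.append(merged)
    return third_storage_unit
```

The claim's remaining steps, closing the audio input source and then synthesising the pre-stored emotion-matched audio with the frames in the third storage unit, would typically be delegated to a media tool such as ffmpeg and are omitted from the sketch.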
6. The method according to claim 1, characterised in that, after the step of extracting the pre-stored second data that matches the facial emotion, synthesising it with the first data, and generating the target data, the method further comprises:
receiving a sharing instruction input by the user;
displaying at least one sharing path;
detecting a selection operation of the user on the at least one sharing path;
when the selection operation is detected, sharing the target data via the sharing path corresponding to the selection operation.
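A toy version of claim 6's sharing flow, with hypothetical sharing paths and a stubbed-in selection read:

```python
SHARING_PATHS = ["WeChat", "Weibo", "SMS"]   # illustrative paths only

def share_target_data(target_data, read_selection):
    # Display at least one sharing path to the user.
    for index, path in enumerate(SHARING_PATHS):
        print(f"[{index}] {path}")
    # Detect the user's selection operation.
    choice = read_selection()
    if choice is not None and 0 <= choice < len(SHARING_PATHS):
        # Share the target data via the path corresponding to the selection.
        print(f"sharing {target_data!r} via {SHARING_PATHS[choice]}")

share_target_data("target.png", read_selection=lambda: 0)
```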
7. The method according to claim 1, characterised in that, before the step of receiving the shooting instruction input by the user of the mobile terminal, the method further comprises:
obtaining at least one face image collected by the camera;
extracting emotion features from the at least one face image;
for each face image of the at least one face image, establishing a mapping relation between the face image and the emotion features;
establishing the emotion feature library based on the at least one face image, the emotion features, and the mapping relations.
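Claim 7's preparation step, building the emotion feature library, reduces to mapping each collected face image to its extracted features. In this sketch the feature extractor is a named placeholder, not the patent's algorithm.

```python
def extract_emotion_features(face_image_path):
    # Placeholder: a real extractor would compute expression descriptors
    # (e.g. landmark geometry) from the image at this path.
    return {"mouth_curvature": 0.8, "eye_openness": 0.5}

def build_emotion_feature_library(face_image_paths):
    library = {}
    for path in face_image_paths:
        # Establish the mapping between the face image and its features.
        library[path] = extract_emotion_features(path)
    return library

library = build_emotion_feature_library(["sample_happy.png", "sample_sad.png"])
print(library)
```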
8. A mobile terminal, characterised by comprising:
a first receiving module, configured to receive a shooting instruction input by a user of the mobile terminal;
a first detection module, configured to detect a facial emotion of the user;
a shooting module, configured to control a camera to shoot when the detected facial emotion matches an emotion feature in a preset emotion feature library;
a first obtaining module, configured to obtain first data, captured by the camera, that includes the facial emotion;
a synthesis module, configured to extract pre-stored second data that matches the facial emotion and synthesise it with the first data to generate target data.
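One way to read claim 8's decomposition is as a terminal object that composes the five modules. The sketch below wires hypothetical stand-ins together to show the intended division of labour, mirroring the method of claim 1.

```python
import random

class MobileTerminal:
    def __init__(self, emotion_feature_library):
        self.library = emotion_feature_library

    def first_receiver_module(self):
        return "photograph"                         # the shooting instruction

    def first_detection_module(self):
        return random.choice(["happy", "neutral"])  # detected facial emotion

    def shooting_module(self, emotion):
        # Shoot only when the emotion matches the preset feature library.
        return "captured face photo" if emotion in self.library else None

    def first_obtaining_module(self, capture):
        return capture                              # the first data

    def synthesis_module(self, first_data, emotion):
        return f"{first_data} + {self.library[emotion]}"  # the target data

terminal = MobileTerminal({"happy": "sunshine sticker"})
terminal.first_receiver_module()
emotion = terminal.first_detection_module()
capture = terminal.shooting_module(emotion)
if capture is not None:
    first_data = terminal.first_obtaining_module(capture)
    print(terminal.synthesis_module(first_data, emotion))
```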
9. The mobile terminal according to claim 8, characterised in that, when the shooting instruction is a photographing instruction, the first obtaining module comprises:
a first obtaining unit, configured to obtain a first image, captured by the camera of the user's face, that includes the facial emotion;
wherein the first image is the first data.
10. The mobile terminal according to claim 8, characterised in that the synthesis module comprises:
a first extraction unit, configured to extract a pre-stored second image that matches the facial emotion in the first image;
a first synthesis unit, configured to perform image synthesis on the first image and the second image to generate the target data;
wherein the second image is the second data.
11. The mobile terminal according to claim 8, characterised in that, when the shooting instruction is a video recording or video call instruction, the first obtaining module further comprises:
a second obtaining unit, configured to obtain video data, captured by the camera of the user's face, that includes the facial emotion;
a first separation unit, configured to separate the video data into m frames of images and store them in a first storage unit;
wherein the m frames of images are the first data.
12. The mobile terminal according to claim 8, characterised in that the synthesis module comprises:
a second extraction unit, configured to extract pre-stored animation data that matches the facial emotion in the video data;
a second separation unit, configured to separate the animation data into n frames of images and store them in a second storage unit;
a second synthesis unit, configured to synthesise each frame of image in the first storage unit with the corresponding frame in the second storage unit, and to store each synthesised frame in a third storage unit;
a switch unit, configured to close an audio input source;
a third synthesis unit, configured to extract pre-stored audio data that matches the facial emotion and synthesise it with the image data in the third storage unit to generate the target data;
wherein the n frames of images are the second data.
13. The mobile terminal according to claim 8, characterised by further comprising:
a second receiving module, configured to receive a sharing instruction input by the user;
a display module, configured to display at least one sharing path;
a second detection module, configured to detect a selection operation of the user on the at least one sharing path;
a sharing module, configured to, when the selection operation is detected, share the target data via the sharing path corresponding to the selection operation.
14. The mobile terminal according to claim 8, characterised by further comprising:
a second obtaining module, configured to obtain at least one face image collected by the camera;
an extraction module, configured to extract emotion features from the at least one face image;
a first establishing module, configured to, for each face image of the at least one face image, establish a mapping relation between the face image and the emotion features;
a second establishing module, configured to establish the emotion feature library based on the at least one face image, the emotion features, and the mapping relations.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610967119.5A CN106341608A (en) | 2016-10-28 | 2016-10-28 | Emotion based shooting method and mobile terminal |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610967119.5A CN106341608A (en) | 2016-10-28 | 2016-10-28 | Emotion based shooting method and mobile terminal |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106341608A true CN106341608A (en) | 2017-01-18 |
Family
ID=57841039
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610967119.5A Pending CN106341608A (en) | 2016-10-28 | 2016-10-28 | Emotion based shooting method and mobile terminal |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106341608A (en) |
Patent Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101369307A (en) * | 2007-08-14 | 2009-02-18 | 索尼株式会社 | Image forming device, method and computer program |
CN105721752A (en) * | 2009-07-03 | 2016-06-29 | 奥林巴斯株式会社 | Digital Camera And Camera Shooting Method |
US20160180572A1 (en) * | 2014-12-22 | 2016-06-23 | Casio Computer Co., Ltd. | Image creation apparatus, image creation method, and computer-readable storage medium |
CN105721765A (en) * | 2014-12-22 | 2016-06-29 | 卡西欧计算机株式会社 | Image generation device and image generation method |
CN105791692A (en) * | 2016-03-14 | 2016-07-20 | 腾讯科技(深圳)有限公司 | Information processing method and terminal |
CN105872338A (en) * | 2016-05-31 | 2016-08-17 | 宇龙计算机通信科技(深圳)有限公司 | Photographing method and device |
Cited By (13)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN107320114A (en) * | 2017-06-29 | 2017-11-07 | 京东方科技集团股份有限公司 | Shooting processing method, system and its equipment detected based on brain wave |
US11806145B2 (en) | 2017-06-29 | 2023-11-07 | Boe Technology Group Co., Ltd. | Photographing processing method based on brain wave detection and wearable device |
CN107509021B (en) * | 2017-07-18 | 2020-08-07 | 咪咕文化科技有限公司 | Shooting method, shooting device and storage medium |
CN107509021A (en) * | 2017-07-18 | 2017-12-22 | 咪咕文化科技有限公司 | Shooting method, shooting device and storage medium |
CN107992824A (en) * | 2017-11-30 | 2018-05-04 | 努比亚技术有限公司 | Take pictures processing method, mobile terminal and computer-readable recording medium |
CN108200373A (en) * | 2017-12-29 | 2018-06-22 | 珠海市君天电子科技有限公司 | Image processing method, device, electronic equipment and medium |
CN108200373B (en) * | 2017-12-29 | 2021-03-26 | 北京乐蜜科技有限责任公司 | Image processing method, image processing apparatus, electronic device, and medium |
CN109525791A (en) * | 2018-09-21 | 2019-03-26 | 华为技术有限公司 | Information recording method and terminal |
CN109766771A (en) * | 2018-12-18 | 2019-05-17 | 深圳壹账通智能科技有限公司 | It can operation object control method, device, computer equipment and storage medium |
CN109684978A (en) * | 2018-12-18 | 2019-04-26 | 深圳壹账通智能科技有限公司 | Employees'Emotions monitoring method, device, computer equipment and storage medium |
CN110121026A (en) * | 2019-04-24 | 2019-08-13 | 深圳传音控股股份有限公司 | Intelligent capture apparatus and its scene generating method based on living things feature recognition |
CN110675674A (en) * | 2019-10-11 | 2020-01-10 | 广州千睿信息科技有限公司 | Online education method and online education platform based on big data analysis |
WO2023060720A1 (en) * | 2021-10-11 | 2023-04-20 | 北京工业大学 | Emotional state display method, apparatus and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106341608A (en) | Emotion based shooting method and mobile terminal | |
US20230230306A1 (en) | Animated emoticon generation method, computer-readable storage medium, and computer device | |
CN107257439B (en) | A kind of image pickup method and mobile terminal | |
CN106780685B (en) | A kind of generation method and terminal of dynamic picture | |
CN106210526A (en) | A kind of image pickup method and mobile terminal | |
CN105933538A (en) | Video finding method for mobile terminal and mobile terminal | |
TWI721466B (en) | Interactive method and device based on augmented reality | |
CN107370887A (en) | A kind of expression generation method and mobile terminal | |
CN106506962A (en) | A kind of image processing method and mobile terminal | |
CN108984707B (en) | Method, device, terminal equipment and storage medium for sharing personal information | |
CN106060386A (en) | Preview image generation method and mobile terminal | |
CN107024990B (en) | A kind of method and mobile terminal attracting children's self-timer | |
CN106101545A (en) | A kind of image processing method and mobile terminal | |
CN106341538A (en) | Lyrics poster push method and mobile terminal | |
CN108495032A (en) | Image processing method, device, storage medium and electronic equipment | |
CN106791438A (en) | A kind of photographic method and mobile terminal | |
CN106791437A (en) | A kind of panoramic picture image pickup method and mobile terminal | |
CN106503658A (en) | automatic photographing method and mobile terminal | |
CN106973237B (en) | A kind of image pickup method and mobile terminal | |
CN107087137A (en) | The method and apparatus and terminal device of video are presented | |
CN102177703A (en) | Method and apparatus for generating a sequence of a plurality of images to be displayed whilst accompanied by audio | |
CN106454086A (en) | Image processing method and mobile terminal | |
CN105933772A (en) | Interaction method, interaction apparatus and interaction system | |
CN106096043A (en) | A kind of photographic method and mobile terminal | |
CN107885823A (en) | Player method, device, storage medium and the electronic equipment of audio-frequency information |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| C06 | Publication | |
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170118 |