CN109902606A - A kind of operating method and terminal device - Google Patents
- Publication number
- CN109902606A (Application No. CN201910129052.1A)
- Authority
- CN
- China
- Prior art keywords
- user
- information
- expressive features
- event
- terminal device
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Landscapes
- User Interface Of Digital Computer (AREA)
Abstract
The present invention provides an operating method and a terminal device, to solve the problem that the prior art cannot perform personalized behavior prediction for individual user behavior. The operating method of the embodiment of the present invention includes: upon receiving a preset input instruction from a user, acquiring expressive feature information of the user's facial expression; and, in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, executing a target event, the target event being an event associated with the expressive feature information of the target user facial expression. The embodiment of the present invention can quickly execute the corresponding target event according to the expressive feature information of the user's facial expression, achieving personalized behavior prediction for individual user behavior without requiring further manual operation by the user, thereby saving operating time.
Description
Technical field
The present invention relates to the field of communications applications, and more particularly to an operating method and a terminal device.
Background technique
With the continuous development of new technologies, the functions of the intelligent terminals in people's hands are increasingly powerful, and the types and number of functions keep growing. People therefore need to spend more time manually selecting the corresponding functions when using intelligent terminals. Moreover, current artificial-intelligence techniques generally perform only simple behavior prediction based on big data, and cannot perform personalized behavior prediction for an individual user's behavior.
Summary of the invention
The purpose of the present invention is to provide an operating method and a terminal device, so as to solve the problem that the prior art cannot perform personalized behavior prediction for individual user behavior.
In order to solve the above-mentioned technical problem, the present invention is implemented as follows:
In a first aspect, an embodiment of the present invention provides an operating method applied to a terminal device, comprising:
upon receiving a preset input instruction from a user, acquiring expressive feature information of the user's facial expression;
in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, executing a target event, the target event being an event associated with the expressive feature information of the target user facial expression.
In a second aspect, an embodiment of the present invention further provides a terminal device, comprising:
an acquisition module, configured to acquire expressive feature information of a user's facial expression upon receiving a preset input instruction from the user;
a processing module, configured to execute a target event in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, the target event being an event associated with the expressive feature information of the target user facial expression.
In a third aspect, an embodiment of the present invention further provides a terminal device, including a processor, a memory, and a computer program stored on the memory and executable on the processor, the computer program, when executed by the processor, implementing the steps of the operating method described above.
In a fourth aspect, an embodiment of the present invention further provides a computer-readable storage medium having a computer program stored thereon, the computer program, when executed by a processor, implementing the steps of the operating method described above.
The embodiments of the present invention have the following beneficial effects: according to the above technical solution, upon receiving a preset input instruction from a user, the expressive feature information of the user's facial expression is acquired; in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, a target event associated with that expressive feature information is executed. The embodiments of the present invention can thus quickly execute the corresponding target event according to the expressive feature information of the user's facial expression, achieving personalized behavior prediction for individual user behavior without requiring further manual operation by the user, thereby saving operating time.
Brief description of the drawings
Fig. 1 is a schematic flowchart of the operating method of an embodiment of the present invention;
Fig. 2 is a first schematic module diagram of a terminal device provided by an embodiment of the present invention;
Fig. 3 is a second schematic module diagram of a terminal device provided by an embodiment of the present invention;
Fig. 4 is a structural block diagram of a terminal device provided by an embodiment of the present invention.
Detailed description of the embodiments
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the accompanying drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the present invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the present invention without creative effort shall fall within the protection scope of the present invention.
Fig. 1 is a schematic flowchart of the operating method provided by an embodiment of the present invention. The implementation of the method is described below with reference to this figure.
It should be noted that an embodiment of the present invention provides an operating method applied to a terminal device, the terminal device including a camera. The operating method may include the following steps:
Step 101: upon receiving a preset input instruction from a user, the terminal device acquires expressive feature information of the user's facial expression.
The preset input instruction may be a voice instruction input by the user, or a preset operation performed by the user on the terminal device, such as a click operation or a slide operation.
The expressive feature information of the user's facial expression may include the position and size of the face, as well as the shape and location information of each major facial organ, etc.
Upon receiving the preset input instruction from the user, the camera may be turned on to acquire the expressive feature information of the user's facial expression. Here, the expressive feature information is obtained based on face recognition technology.
In addition, before the camera is turned on to acquire the expressive feature information, the user's relevant authorization information may be obtained, such as the permission to invoke the camera and the permission to collect the user's facial expression; the camera is turned on to acquire the expressive feature information only when the relevant authorization information indicates that use is authorized.
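The authorization check described above can be sketched as follows. This is a minimal illustrative sketch: the permission names and the `open_camera` callback are assumptions for illustration, not identifiers from the patent.

```python
# Sketch: only acquire expression features when all required permissions
# have been granted by the user. Names here are illustrative assumptions.
REQUIRED_PERMISSIONS = ("camera_invoke", "expression_capture")

def acquire_expression_features(granted, open_camera):
    """granted: set of permission names the user has authorized.
    open_camera: callback that turns on the camera and returns features.
    Returns the acquired features, or None if authorization is missing."""
    if all(p in granted for p in REQUIRED_PERMISSIONS):
        return open_camera()
    return None
```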
Step 102: in a case where the expressive feature information matches the pre-recorded expressive feature information of a target user facial expression, the terminal device executes a target event, the target event being an event associated with the expressive feature information of the target user facial expression.
In the embodiment of the present invention, the association between the expressive feature information of a user facial expression and an event is recorded in advance. In a case where the acquired expressive feature information of the user's facial expression matches the pre-recorded expressive feature information of the target user facial expression, the target event associated with that expressive feature information is obtained according to the association, and the target event is executed.
Here, the event associated with the expressive feature information of a user facial expression refers to the event that, within a preset time period counted in advance, the terminal device has executed most frequently upon receiving the preset input instruction.
The above matching means that the expressive feature information is identical, or that the similarity of the expressive feature information is greater than a preset value.
The target event may be performing a target operation on a target object in the terminal device, classifying a multimedia file, or pushing a target message to the user.
Preferably, in a case where the expressive feature information matches the pre-recorded expressive feature information of the target user facial expression, the user is prompted whether to execute the target event; the target event is executed upon receiving an execution instruction input by the user.
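The Step 101/102 flow, including the preferred confirmation prompt, can be sketched as follows. This is a hedged illustration: the cosine-similarity measure, the threshold value, and all function names are assumptions, since the patent does not fix a particular similarity metric.

```python
# Sketch: match acquired expressive features against pre-recorded target
# expressions, then execute the associated event. Matching means either
# identical features or similarity above a preset value (cosine here,
# as an illustrative assumption).
import math

SIMILARITY_THRESHOLD = 0.9  # the patent's "preset value" (assumed)

def similarity(a, b):
    """Cosine similarity between two expressive-feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm if norm else 0.0

def handle_preset_input(features, target_features, associations, confirm=None):
    """Execute the event associated with the matched target expression.

    target_features: {label: recorded feature vector}
    associations:    {label: event callback} (the pre-recorded association)
    confirm:         optional callback asked before executing (the
                     "prompt the user" variant)."""
    for label, recorded in target_features.items():
        if recorded == features or similarity(features, recorded) > SIMILARITY_THRESHOLD:
            event = associations.get(label)
            if event is None:
                return None
            if confirm is None or confirm(label):
                return event()  # execute the target event
    return None
```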
According to the operating method of the embodiment of the present invention, upon receiving a preset input instruction from a user, the expressive feature information of the user's facial expression is acquired; in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, a target event associated with that expressive feature information is executed. The embodiment of the present invention can thus quickly execute the corresponding target event according to the expressive feature information of the user's facial expression, achieving personalized behavior prediction for individual user behavior without requiring further manual operation by the user, thereby saving operating time.
Further, the expressive feature information of the user's facial expression includes eye feature information of the user.
Executing the target event includes:
determining, according to the eye feature information, a target area and a target object in the terminal device for which the target event needs to be executed;
performing the target operation on the target object in the target area.
The eye feature information includes at least one of fixation time, fixation count, saccade distance, and pupil size.
The eye feature information may be obtained by eye-tracking technology. The saccade distance may specifically refer to the angle between two fixation points when the line of sight shifts from the previous fixation point to the next. The fixation count is calculated per specific study region, as the number of times the subject fixates on that unit region; it reflects the subject's familiarity with and interest in the region. The fixation time generally refers to the cognitive-processing time for a certain region or object. The pupil size may specifically refer to the pupil diameter.
For example, when the user's fixation time on a certain object or region in the terminal device exceeds a certain duration and the fixation count exceeds a preset count, the object is determined to be the target object, or the region is determined to be the target area.
The target object may be a picture or an application icon displayed in the terminal device, etc.
In the embodiment of the present invention, in a coherent sequence of operations — for example, where receiving the preset input instruction may be followed by a first operation, a second operation, or a third operation — the operation likely to be executed after the preset input instruction (for example, the first operation) is predicted in combination with the expressive feature information of the facial expression, and the object and region to be operated on in the terminal device are determined according to the eye feature information, without the user selecting them manually, thereby saving operating time.
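The gaze-based selection of the target object can be sketched as follows. The thresholds, record format, and names are illustrative assumptions; the patent only states that an object whose fixation time and fixation count exceed preset values becomes the target.

```python
# Sketch: pick the target object from per-object gaze statistics obtained
# by eye tracking. Threshold values are assumed for illustration.
from dataclasses import dataclass

@dataclass
class GazeRecord:
    object_id: str        # e.g. a displayed picture or application icon
    fixation_time: float  # seconds of cognitive processing on the object
    fixation_count: int   # number of fixations on the object's region

def pick_target(records, min_time=1.5, min_count=3):
    """Return the id of the first object exceeding both gaze thresholds,
    or None if no object qualifies as the target."""
    for r in records:
        if r.fixation_time > min_time and r.fixation_count > min_count:
            return r.object_id
    return None
```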
Further, before the expressive feature information of the user's facial expression is acquired upon receiving the preset input instruction, the method further includes:
recording, within a preset time period, the expressive feature information of the user's facial expression when the terminal device receives the preset input instruction, and the set of events executed after the preset input is received;
establishing an association between the most frequently executed event in the event set and the expressive feature information of the user's facial expression.
In the embodiment of the present invention, data collection is performed within a preset time period, which may specifically be one week. For example, when the user views photos in the album module, the preset input instruction may be a click operation on a picture-edit button; in general, operations such as picture cropping, picture editing, picture deletion, picture addition, or picture sharing may be executed after the picture-edit button is clicked. Within the preset time period, the front camera may record the expressive feature information of the user's facial expression when the user clicks the picture-edit button, for example a smiling facial expression, and record the follow-up operation executed after the button is clicked, such as sharing the picture. If picture sharing is executed most frequently within the preset time period, an association is established between the expressive feature information of the smiling facial expression and the picture-sharing operation.
Subsequently, when the user enters the album module again and selects a picture for an edit operation, if the front camera detects that the expressive feature information of the user's facial expression is that of a smiling facial expression, the picture-sharing operation is performed automatically.
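The pre-recording phase above can be sketched as a frequency count over the collection period: each preset input instruction is logged together with the observed expression and the follow-up event, and each expression is then associated with its most frequently executed event. Function and field names are assumptions for illustration.

```python
# Sketch: build the expression-to-event association from a week of logged
# (expression, follow-up event) pairs, keeping the most frequent event
# per expression.
from collections import Counter, defaultdict

def build_associations(log):
    """log: iterable of (expression_label, event_name) pairs recorded
    during the preset collection period.
    Returns {expression_label: most_frequently_executed_event}."""
    per_expression = defaultdict(Counter)
    for expression, event in log:
        per_expression[expression][event] += 1
    return {expr: counts.most_common(1)[0][0]
            for expr, counts in per_expression.items()}
```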
Optionally, when the preset input instruction is an input instruction for a multimedia file, executing the target event includes:
determining the type of the multimedia file, and classifying the multimedia file according to the determined type.
The multimedia file includes an audio file, a text file, etc.
In a specific embodiment of the present invention, the association between a user facial expression and a file type is recorded in advance. For example, when the preset input instruction is an open operation on a multimedia file, if the acquired expressive feature information of the user's facial expression is that of a smiling facial expression, and the multimedia file type associated with the expressive feature information of the smiling facial expression is "happy", the type of the multimedia file is determined to be "happy".
In this way, different music files or text files can be classified by recognizing the expressive feature information of the user's facial expression.
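The classification variant can be sketched as follows. The expression-to-type mapping and the type labels are illustrative assumptions (the patent only gives the smile-to-"happy" example).

```python
# Sketch: classify multimedia files by the expression detected when each
# file is opened, using a pre-recorded expression-to-type association.
EXPRESSION_TO_TYPE = {"smile": "happy", "frown": "sad"}  # assumed mapping

def classify_files(open_events, default="uncategorized"):
    """open_events: iterable of (filename, expression_label) pairs recorded
    at each open operation. Returns {type_label: [filenames]}."""
    buckets = {}
    for filename, expression in open_events:
        file_type = EXPRESSION_TO_TYPE.get(expression, default)
        buckets.setdefault(file_type, []).append(filename)
    return buckets
```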
Optionally, executing the target event includes:
pushing a target message to the user.
The target message includes at least one of an advertisement message and a multimedia message.
Here, the expressive feature information of the user's facial expression while browsing advertisement messages or multimedia messages may be recorded in advance, so as to determine which messages the user likes and which are target messages the user dislikes; the target messages the user likes can then be pushed.
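The message-push variant can be sketched as follows. The like/dislike scoring rule (smile means liked, frown means disliked, last observation wins) is an illustrative assumption; the patent only states that recorded expressions determine which messages are liked.

```python
# Sketch: record the expression observed while each message category is
# browsed, then push only candidates from liked categories.
LIKED = {"smile"}
DISLIKED = {"frown"}

def filter_push_candidates(browse_log, candidates):
    """browse_log: (message_category, expression) pairs recorded while
    browsing. candidates: (category, message) pairs awaiting push.
    Returns only the candidates whose category the user appeared to like."""
    liked_categories = set()
    for category, expression in browse_log:
        if expression in LIKED:
            liked_categories.add(category)
        elif expression in DISLIKED:
            liked_categories.discard(category)
    return [msg for msg in candidates if msg[0] in liked_categories]
```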
According to the operating method of the embodiment of the present invention, after the preset input instruction from the user is received, the target event to be executed is determined according to the expressive feature information of the user's facial expression, without manual operation by the user, which simplifies the operating steps between the user and the terminal device and saves operating time.
As shown in Fig. 2, an embodiment of the present invention further provides a terminal device 200, comprising:
an acquisition module 201, configured to acquire expressive feature information of a user's facial expression upon receiving a preset input instruction from the user;
a processing module 202, configured to execute a target event in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, the target event being an event associated with the expressive feature information of the target user facial expression.
In the terminal device of the embodiment of the present invention, the expressive feature information of the user's facial expression includes eye feature information of the user.
As shown in Fig. 3, the processing module 202 includes:
a determining submodule 2021, configured to determine, according to the eye feature information, the target area and the target object in the terminal device for which the target event needs to be executed;
an execution submodule 2022, configured to perform the target operation on the target object in the target area;
wherein the eye feature information includes at least one of fixation time information, fixation count information, saccade distance information, and pupil size information.
The terminal device of the embodiment of the present invention further includes:
a recording module 203, configured to record, within a preset time period, the expressive feature information of the user's facial expression when the terminal device receives the preset input instruction, and the set of events executed after the preset input is received;
an establishing module 204, configured to establish an association between the most frequently executed event in the event set and the expressive feature information of the user's facial expression.
In the terminal device of the embodiment of the present invention, when the preset input instruction is an input instruction for a multimedia file, the processing module 202 is configured to determine the type of the multimedia file.
In the terminal device of the embodiment of the present invention, the processing module 202 is configured to push a target message to the user.
The terminal device of the embodiment of the present invention acquires the expressive feature information of the user's facial expression upon receiving a preset input instruction from the user; in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, a target event associated with that expressive feature information is executed. The embodiment of the present invention can thus quickly execute the corresponding target event according to the expressive feature information of the user's facial expression, achieving personalized behavior prediction for individual user behavior without requiring further manual operation by the user, thereby saving operating time.
Fig. 4 is a schematic diagram of the hardware structure of a terminal implementing the various embodiments of the present invention. The terminal 400 includes, but is not limited to, a radio frequency unit 401, a network module 402, an audio output unit 403, an input unit 404, a sensor 405, a display unit 406, a user input unit 407, an interface unit 408, a memory 409, a processor 410, a power supply 411, and other components. Those skilled in the art will appreciate that the terminal structure shown in Fig. 4 does not constitute a limitation on the terminal; the terminal may include more or fewer components than shown, combine certain components, or adopt a different component arrangement. In the embodiments of the present invention, the terminal includes, but is not limited to, a mobile phone, a tablet computer, a laptop, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer, and the like.
The processor 410 is configured to acquire expressive feature information of a user's facial expression upon receiving a preset input instruction from the user; and, in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, to execute a target event, the target event being an event associated with the expressive feature information of the target user facial expression.
The processor can implement each process of the above operating method embodiment and achieve the same technical effects; to avoid repetition, details are not described here again.
According to the above technical solution of the embodiment of the present invention, the expressive feature information of the user's facial expression is acquired upon receiving a preset input instruction from the user; in a case where the expressive feature information matches pre-recorded expressive feature information of a target user facial expression, a target event associated with that expressive feature information is executed. The embodiment of the present invention can thus quickly execute the corresponding target event according to the expressive feature information of the user's facial expression, achieving personalized behavior prediction for individual user behavior without requiring further manual operation by the user, thereby saving operating time.
It should be understood that, in the embodiment of the present invention, the radio frequency unit 401 may be used for sending and receiving signals while sending and receiving information or during a call; specifically, after receiving downlink data from a base station, it delivers the data to the processor 410 for processing, and it sends uplink data to the base station. In general, the radio frequency unit 401 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 401 may also communicate with a network and other devices via a wireless communication system.
The terminal provides the user with wireless broadband Internet access through the network module 402, for example helping the user to send and receive e-mail, browse web pages, and access streaming media.
The audio output unit 403 may convert audio data received by the radio frequency unit 401 or the network module 402, or stored in the memory 409, into an audio signal and output it as sound. Moreover, the audio output unit 403 may also provide audio output related to a specific function performed by the terminal 400 (for example, a call signal reception sound or a message reception sound). The audio output unit 403 includes a loudspeaker, a buzzer, a receiver, and the like.
The input unit 404 is used to receive audio or video signals. The input unit 404 may include a graphics processing unit (Graphics Processing Unit, GPU) 4041 and a microphone 4042. The graphics processing unit 4041 processes image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 406, stored in the memory 409 (or another storage medium), or sent via the radio frequency unit 401 or the network module 402. The microphone 4042 can receive sound and process such sound into audio data; in a telephone call mode, the processed audio data may be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 401 and output.
The terminal 400 further includes at least one sensor 405, such as an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor; the ambient light sensor can adjust the brightness of the display panel 4061 according to the brightness of ambient light, and the proximity sensor can turn off the display panel 4061 and/or the backlight when the terminal 400 is moved to the ear. As a motion sensor, an accelerometer sensor can detect the magnitude of acceleration in all directions (generally three axes), can detect the magnitude and direction of gravity when stationary, and can be used for recognizing the terminal posture (such as landscape/portrait switching, related games, and magnetometer pose calibration), vibration-recognition related functions (such as a pedometer or tapping), and the like. The sensor 405 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, etc., which will not be described in detail here.
The display unit 406 is used to display information input by the user or information provided to the user. The display unit 406 may include a display panel 4061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
The user input unit 407 may be used to receive input numeric or character information and to generate key signal inputs related to user settings and function control of the terminal. Specifically, the user input unit 407 includes a touch panel 4071 and other input devices 4072. The touch panel 4071, also referred to as a touch screen, collects touch operations by the user on or near it (such as operations by the user on or near the touch panel 4071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 4071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the user's touch orientation, detects the signal brought by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 410, and receives and executes commands sent by the processor 410. Furthermore, the touch panel 4071 may be implemented in multiple types such as resistive, capacitive, infrared, and surface acoustic wave. In addition to the touch panel 4071, the user input unit 407 may also include other input devices 4072. Specifically, the other input devices 4072 may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, which will not be described in detail here.
Further, the touch panel 4071 may cover the display panel 4061. After detecting a touch operation on or near it, the touch panel 4071 transmits the operation to the processor 410 to determine the type of the touch event, and the processor 410 then provides a corresponding visual output on the display panel 4061 according to the type of the touch event. Although in Fig. 4 the touch panel 4071 and the display panel 4061 are two independent components implementing the input and output functions of the terminal, in some embodiments the touch panel 4071 and the display panel 4061 may be integrated to implement the input and output functions of the terminal, which is not specifically limited here.
The interface unit 408 is an interface through which an external device is connected to the terminal 400. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 408 may be used to receive input (for example, data information or power) from the external device and transmit the received input to one or more elements in the terminal 400, or may be used to transmit data between the terminal 400 and the external device.
The memory 409 may be used to store software programs and various data. The memory 409 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required for at least one function (such as a sound playback function or an image playback function), and the like; the data storage area may store data created according to the use of the mobile phone (such as audio data or a phone book). In addition, the memory 409 may include a high-speed random access memory, and may also include a non-volatile memory, for example at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 410 is the control center of the terminal; it connects the various parts of the entire terminal via various interfaces and lines, and executes the various functions of the terminal and processes data by running or executing the software programs and/or modules stored in the memory 409 and invoking the data stored in the memory 409, thereby performing overall monitoring of the terminal. The processor 410 may include one or more processing units; preferably, the processor 410 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, the user interface, the application programs, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 410.
The terminal 400 may further include a power supply 411 (such as a battery) for supplying power to the various components. Preferably, the power supply 411 may be logically connected to the processor 410 through a power management system, so that functions such as charging management, discharging management, and power consumption management are implemented through the power management system.
In addition, the terminal 400 includes some functional modules not shown, which will not be described in detail here.
Preferably, an embodiment of the present invention further provides a terminal, including a processor, a memory, and a computer program stored on the memory and executable on the processor; when executed by the processor, the computer program implements each process of the above operating method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements each process of the above operating method embodiment and can achieve the same technical effects; to avoid repetition, details are not described here again. The computer-readable storage medium is, for example, a read-only memory (Read-Only Memory, ROM), a random access memory (Random Access Memory, RAM), a magnetic disk, an optical disc, or the like.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or apparatus that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or apparatus. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or apparatus that includes that element.
Through the description of the foregoing embodiments, those skilled in the art can clearly understand that the methods of the foregoing embodiments may be implemented by means of software plus a necessary general hardware platform, or of course by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solutions of the present invention, in essence or the part contributing to the prior art, may be embodied in the form of a software product. The computer software product is stored in a storage medium (such as a ROM/RAM, a magnetic disk, or an optical disc) and includes several instructions for enabling a terminal (which may be a mobile phone, a computer, a server, an air conditioner, a network device, or the like) to perform the methods described in the embodiments of the present invention.
The embodiments of the present invention are described above with reference to the accompanying drawings, but the present invention is not limited to the foregoing specific embodiments, which are merely illustrative rather than restrictive. Under the inspiration of the present invention, those skilled in the art may derive many other forms without departing from the spirit of the present invention and the scope protected by the claims, all of which fall within the protection of the present invention.
Claims (11)
1. An operating method, applied to a terminal device, comprising:
upon receiving a preset input instruction from a user, obtaining expression feature information of the user's facial expression;
in a case where the expression feature information matches pre-recorded expression feature information of a target user facial expression, executing a target event, the target event being an event associated with the expression feature information of the target user facial expression.
2. The operating method according to claim 1, wherein the expression feature information of the user's facial expression comprises eye feature information of the user;
the executing a target event comprises:
determining, according to the eye feature information, a target area and a target object in the terminal device on which the target event needs to be executed;
performing a target operation on the target object in the target area;
wherein the eye feature information comprises at least one of fixation duration information, fixation count information, eye movement distance information, and pupil size information.
3. The operating method according to claim 1, wherein, before obtaining the expression feature information of the user's facial expression upon receiving the preset input instruction from the user, the method further comprises:
within a preset time period, recording the expression feature information of the user's facial expression when the terminal device receives the preset input instruction, and the set of events executed after the preset input is received;
establishing an association between the most frequently executed event in the event set and the expression feature information of the user's facial expression.
4. The operating method according to claim 1, wherein, in a case where the preset input instruction is an input instruction for a multimedia file,
the executing a target event comprises:
determining a type of the multimedia file.
5. The operating method according to claim 1, wherein the executing a target event comprises:
pushing a target message to the user.
6. A terminal device, comprising:
an obtaining module, configured to obtain expression feature information of a user's facial expression upon receiving a preset input instruction from the user;
a processing module, configured to execute a target event in a case where the expression feature information matches pre-recorded expression feature information of a target user facial expression, the target event being an event associated with the expression feature information of the target user facial expression.
7. The terminal device according to claim 6, wherein the expression feature information of the user's facial expression comprises eye feature information of the user;
the processing module comprises:
a determining submodule, configured to determine, according to the eye feature information, a target area and a target object in the terminal device on which the target event needs to be executed;
an execution submodule, configured to perform a target operation on the target object in the target area;
wherein the eye feature information comprises at least one of fixation duration information, fixation count information, eye movement distance information, and pupil size information.
8. The terminal device according to claim 6, further comprising:
a recording module, configured to record, within a preset time period, the expression feature information of the user's facial expression when the terminal device receives the preset input instruction, and the set of events executed after the preset input is received;
an establishing module, configured to establish an association between the most frequently executed event in the event set and the expression feature information of the user's facial expression.
9. The terminal device according to claim 6, wherein, in a case where the preset input instruction is an input instruction for a multimedia file, the processing module is configured to determine a type of the multimedia file.
10. The terminal device according to claim 6, wherein the processing module is configured to push a target message to the user.
11. A terminal device, comprising a processor, a memory, and a computer program stored on the memory and executable on the processor, wherein, when the computer program is executed by the processor, the steps of the operating method according to any one of claims 1 to 5 are implemented.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910129052.1A CN109902606B (en) | 2019-02-21 | 2019-02-21 | Operation method and terminal equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109902606A true CN109902606A (en) | 2019-06-18 |
CN109902606B CN109902606B (en) | 2021-03-12 |
Family
ID=66945108
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910129052.1A Active CN109902606B (en) | 2019-02-21 | 2019-02-21 | Operation method and terminal equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109902606B (en) |
Citations (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102754111A (en) * | 2009-08-13 | 2012-10-24 | 优福特有限公司 | System of automated management of event information |
CN103760761A (en) * | 2014-01-28 | 2014-04-30 | 北京百纳威尔科技有限公司 | Method and device for controlling alarm clock of terminal |
CN104216508A (en) * | 2013-05-31 | 2014-12-17 | 中国电信股份有限公司 | Method and device for operating function key through eye movement tracking technique |
CN104410911A (en) * | 2014-12-31 | 2015-03-11 | 合一网络技术(北京)有限公司 | Video emotion tagging-based method for assisting identification of facial expression |
CN104463231A (en) * | 2014-12-31 | 2015-03-25 | 合一网络技术(北京)有限公司 | Error correction method used after facial expression recognition content is labeled |
CN106126017A (en) * | 2016-06-20 | 2016-11-16 | 北京小米移动软件有限公司 | Intelligent identification Method, device and terminal unit |
CN106951100A (en) * | 2017-05-27 | 2017-07-14 | 珠海市魅族科技有限公司 | Color display, device, terminal and computer-readable recording medium |
CN107633098A (en) * | 2017-10-18 | 2018-01-26 | 维沃移动通信有限公司 | A kind of content recommendation method and mobile terminal |
CN107665334A (en) * | 2017-09-11 | 2018-02-06 | 广东欧珀移动通信有限公司 | Intelligent control method and device based on expression |
CN107809515A (en) * | 2017-11-23 | 2018-03-16 | 维沃移动通信有限公司 | A kind of display control method and mobile terminal |
CN108664288A (en) * | 2018-05-14 | 2018-10-16 | 维沃移动通信有限公司 | A kind of image interception method and mobile terminal |
CN108734320A (en) * | 2018-05-09 | 2018-11-02 | 北京邦邦共赢网络科技有限公司 | A kind of office procedure and device |
CN108762493A (en) * | 2018-05-15 | 2018-11-06 | 维沃移动通信有限公司 | A kind of method and mobile terminal of control application program |
CN109240759A (en) * | 2018-08-01 | 2019-01-18 | Oppo广东移动通信有限公司 | Application program launching method, device, terminal device and readable storage medium storing program for executing |
Also Published As
Publication number | Publication date |
---|---|
CN109902606B (en) | 2021-03-12 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110413364A (en) | A kind of information processing method and terminal | |
CN107864353B (en) | A kind of video recording method and mobile terminal | |
CN109491738A (en) | A kind of control method and terminal device of terminal device | |
CN109582475A (en) | A kind of sharing method and terminal | |
CN107635110A (en) | A kind of video interception method and terminal | |
CN109005336A (en) | A kind of image capturing method and terminal device | |
CN109062411A (en) | A kind of screen luminance adjustment method and mobile terminal | |
CN109862266A (en) | A kind of image sharing method and terminal | |
CN109327672A (en) | A kind of video call method and terminal | |
CN109522278A (en) | A kind of file memory method and terminal device | |
CN109085963A (en) | A kind of interface display method and terminal device | |
CN108123999A (en) | A kind of information push method and mobile terminal | |
CN110471589A (en) | Information display method and terminal device | |
CN110096203A (en) | A kind of screenshot method and mobile terminal | |
CN110442279A (en) | A kind of message method and mobile terminal | |
CN108984143A (en) | A kind of display control method and terminal device | |
CN109448069A (en) | A kind of template generation method and mobile terminal | |
CN109271262A (en) | A kind of display methods and terminal | |
CN108664288A (en) | A kind of image interception method and mobile terminal | |
CN110162707A (en) | A kind of information recommendation method, terminal and computer readable storage medium | |
CN110086998A (en) | A kind of image pickup method and terminal | |
CN109783722A (en) | A kind of content outputting method and terminal device | |
CN109164908A (en) | A kind of interface control method and mobile terminal | |
CN108959585A (en) | A kind of expression picture acquisition methods and terminal device | |
CN109166164A (en) | A kind of generation method and terminal of expression picture |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||