CN109189953A - Method and device for selecting a multimedia file - Google Patents
- Publication number
- CN109189953A (application CN201810981573.5A)
- Authority
- CN
- China
- Prior art keywords
- user
- emotion
- type
- facial image
- image
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/174—Facial expression recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
- G06V40/16—Human faces, e.g. facial parts, sketches or expressions
- G06V40/168—Feature extraction; Face representation
Landscapes
- Engineering & Computer Science (AREA)
- Health & Medical Sciences (AREA)
- Oral & Maxillofacial Surgery (AREA)
- General Health & Medical Sciences (AREA)
- Human Computer Interaction (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- User Interface Of Digital Computer (AREA)
Abstract
Embodiments of the invention disclose a multimedia file selection method and device. The method comprises: obtaining a facial image of a user; identifying the emotion type of the user based on the expression features of the facial image; and taking the multimedia file matching the emotion type of the user as the target multimedia file. Embodiments of the invention solve the problem that existing song recommendation methods cannot satisfy a user's song recommendation needs under different moods, which degrades the user experience.
Description
Technical field
This application relates to the field of multimedia file selection, and in particular to a multimedia file selection method and device.
Background technique
With the development of communication technology, terminal devices can do more and more, for example taking photos, chatting, and playing music. Music has become an essential part of daily life, and people can release their emotions by listening to it. A music playback function satisfies people's need to enjoy music anytime and anywhere; at the same time, the types of songs a user wishes to hear differ under different moods.
Existing song recommendation methods typically recommend through user tags, for example labels configured according to age, gender, region, and so on. Once the labels are set, a server can select songs to recommend to the user from the corresponding label library. However, these labels usually reflect external characteristics of the user and remain fixed, so the types of songs recommended also remain largely unchanged; they therefore cannot satisfy the user's song recommendation needs under different moods, which degrades the user experience.
Summary of the invention
Embodiments of the invention provide a multimedia file selection method and device, to solve the problem that existing song recommendation methods cannot satisfy a user's song recommendation needs under different moods and thus degrade the user experience.
In order to solve the above technical problems, the present invention is implemented as follows:
In a first aspect, a multimedia file selection method is provided, comprising:
obtaining a facial image of a user;
identifying the emotion type of the user based on the expression features of the facial image; and
taking the multimedia file matching the emotion type of the user as the target multimedia file.
In a second aspect, a multimedia file selection device is provided, comprising:
a first image acquisition device, configured to obtain a facial image of a user;
an emotion type confirmation unit, configured to identify the emotion type of the user according to the expression features of the facial image; and
a multimedia file confirmation unit, configured to determine the multimedia file matching the emotion type of the user as the target multimedia file.
In a third aspect, a terminal device is provided, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method described in the first aspect.
In a fourth aspect, a computer-readable medium is provided, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method described in the first aspect.
In embodiments of the invention, the multimedia file selection method identifies the expression features of an acquired facial image to obtain the user's emotion type, and then determines the multimedia file matching that emotion type as the target multimedia file. In this way, the multimedia file selection method of the embodiment of the invention can select music that matches the user's current mood, so that songs consistent with the current mood can be recommended whenever the user's mood differs, thereby improving the user experience. That is, it solves the problem that song recommendation needs under different moods cannot be satisfied, degrading the user experience.
Detailed description of the invention
The drawings described herein are provided for a further understanding of the invention and constitute a part of the invention; the illustrative embodiments of the invention and their description are used to explain the invention and do not constitute an improper limitation of the invention. In the drawings:
Fig. 1 is a schematic flowchart of a multimedia file selection method according to an embodiment of the invention;
Fig. 2 is a schematic flowchart of a multimedia file selection method according to another embodiment of the invention;
Fig. 3 is a schematic flowchart of a multimedia file selection method according to a further embodiment of the invention;
Fig. 4 is a schematic flowchart of a multimedia file selection method according to a further embodiment of the invention;
Fig. 5 is a schematic flowchart of a multimedia file selection method according to a specific embodiment of the invention;
Fig. 6 is a schematic flowchart of a multimedia file selection method according to another specific embodiment of the invention;
Fig. 7 is a schematic illustration of a terminal device interface according to an embodiment of the invention;
Fig. 8 is a schematic block diagram of a multimedia file selection device according to an embodiment of the invention;
Fig. 9 is a schematic block diagram of a multimedia file selection device according to another embodiment of the invention;
Fig. 10 is a schematic structural diagram of a terminal device according to an embodiment of the invention.
Specific embodiment
To make the objectives, technical solutions, and advantages of the embodiments of the invention clearer, the technical solutions of the embodiments of the invention are described clearly and completely below in conjunction with the specific embodiments of the invention and the corresponding drawings. Obviously, the described embodiments are only some, rather than all, of the embodiments of the invention. All other embodiments obtained by those of ordinary skill in the art based on the embodiments of the invention without creative effort fall within the scope of protection of the invention.
The technical solutions provided by each embodiment of the invention are described in detail below in conjunction with the drawings.
Fig. 1 is a schematic flowchart of a multimedia file selection method according to an embodiment of the invention, intended to solve the problem that existing song recommendation methods cannot satisfy a user's song recommendation needs under different moods and thus degrade the user experience. The multimedia file selection method of the embodiment of the invention may include:
S102. Obtain a facial image of the user. The facial image may be collected by the image acquisition device of the terminal device in response to the start of the terminal device's multimedia playback module. That is, the user may start the multimedia playback module by clicking or touching it, or the module may be started automatically by other processing operations; at that point, the image acquisition device (e.g., a camera) collects the user's facial image.
S104. Identify the emotion type of the user based on the expression features of the facial image.
It should be noted that identifying the user's emotion type based on the expression features of the facial image may include inputting the image features of the facial image at the input end of an expression-emotion model and obtaining the user's emotion type at the output end of the model, wherein the expression-emotion model is trained on facial image features under a variety of expressions and the emotion types corresponding to those expressions. That is, the user's current mood (i.e., the user's emotion type) is identified from the acquired facial image by facial expression recognition technology, in preparation for selecting the corresponding multimedia file according to the user's mood.
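As a concrete illustration, the classification of step S104 can be sketched as follows. This is a minimal rule-based stand-in for the trained expression-emotion model described above; the feature names, thresholds, and emotion labels are illustrative assumptions, not part of the disclosed model.

```python
# Minimal stand-in for the expression-emotion model of step S104.
# A real implementation would use a classifier trained on facial image
# features under a variety of expressions; the feature names, thresholds,
# and labels below are illustrative assumptions only.

def classify_emotion(features):
    """Map extracted facial-expression features to an emotion type.

    Returns None when the features are too ambiguous to classify,
    modelling the case where the expression-emotion model does not
    output an emotion type and a reacquisition is triggered.
    """
    confidence = features.get("detection_confidence", 0.0)
    if confidence < 0.5:
        return None                                 # no usable face detected
    mouth = features.get("mouth_curvature", 0.0)    # +1 smile, -1 frown
    brow = features.get("brow_lowering", 0.0)       # 0 relaxed, 1 furrowed
    if mouth > 0.3:
        return "happy"
    if mouth < -0.3:
        return "angry" if brow > 0.5 else "sad"
    return "calm"
```

The `None` return value gives the caller a hook for the fallback behaviour the method describes: reacquire the image, and ultimately ask the user directly.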
S106. Take the multimedia file matching the emotion type of the user as the target multimedia file.
It follows that once the user's emotion type has been obtained by identifying the expression features of the acquired facial image, the multimedia file matching that emotion type can be determined as the target multimedia file. In this way, the multimedia file selection method of the embodiment of the invention can select music that matches the user's current mood, and can recommend songs consistent with the current mood whenever the user's mood differs, thereby improving the user experience. That is, it solves the problem that song recommendation needs under different moods cannot be satisfied, degrading the user experience.
Moreover, the prior art may also include recommendation methods that suggest songs based on the user's mood, but these typically infer the mood from social posts the user has published. Such methods generally lag behind the user's actual mood, are prone to misjudging the user's current mood, and are ineffective for users who do not use social media. By contrast, the multimedia selection method of the embodiment of the invention identifies the user's current emotion type by facial image recognition technology, which is not only real-time but also works even when the user does not use social media; it can still select multimedia files that better match the user's current mood for the user to enjoy.
In the above embodiment, as shown in Fig. 2, the multimedia file selection method further includes:
S105. If the expression-emotion model does not output an emotion type corresponding to the image features of the facial image, reacquire the facial image of the user and use the image features of the reacquired facial image as the input of the expression-emotion model, so as to obtain the emotion type of the user.
It should be understood that a single recognition attempt on the acquired facial image may fail to identify the user's emotion type. In that case, the facial image of the user can be reacquired, and the emotion type obtained in the same manner (i.e., by using the image features of the reacquired facial image as the input of the expression-emotion model). The operation of reacquiring the facial image may include: responding to the facial image unlock operation of the terminal device and reacquiring the facial image of the user. Of course, the reacquisition here may also use other image acquisition methods; any method that can obtain the image features of the facial image is acceptable, and this embodiment is not limited to a particular acquisition method.
Of course, identifying the user's emotion type from the reacquired facial image may still fail. In that case, as shown in Fig. 3, the multimedia selection method of the embodiment of the invention may further include:
S115. If the expression-emotion model does not output an emotion type corresponding to the image features of the reacquired facial image, obtain the emotion type of the user in response to user input. The user input may be a parameter entered manually by the user, for example via a mood change button that the user clicks or touches on the terminal device.
That is, when recognition of the user's emotion type from the acquired facial image fails, the user can input the current emotion type manually in order to improve the user experience. Alternatively, the terminal device may generally be provided with a mood change button with which the user selects or changes the current emotion type; the user's emotion type can then be obtained directly.
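The fallback chain of steps S104, S105, and S115 — classify from the first capture, re-capture on failure, finally fall back to manual input — can be sketched as follows. The function is parameterized over capture, feature-extraction, classification, and manual-input callbacks; these interfaces are assumptions for illustration, not the patent's API.

```python
def obtain_emotion_type(capture_image, extract_features, classify,
                        manual_input, max_recaptures=1):
    """Fallback chain of S104/S105/S115: run the expression-emotion model
    on each capture; if no capture yields an emotion type, fall back to
    the user's manual input (e.g., the mood change button)."""
    for _ in range(1 + max_recaptures):
        emotion = classify(extract_features(capture_image()))
        if emotion is not None:
            return emotion
    # The model produced no emotion type for any capture: ask the user.
    return manual_input()
```

The second capture could be wired to the face-unlock camera path described above, since that path already produces a facial image.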
In the above embodiment, the target multimedia file may include a music file or a video file. As shown in Fig. 4, the multimedia selection method of the embodiment of the invention may further include:
S108. Play the target multimedia file.
It should be understood that after the user's current emotion type is obtained by facial image recognition technology and the multimedia file matching that emotion type is determined as the target multimedia file, the target multimedia file can be played. In this way, the multimedia file selection method of the embodiment of the invention can select music that matches the user's current mood, and can recommend and play songs consistent with the current mood whenever the user's mood differs, thereby improving the user experience. That is, it solves the problem that song recommendation needs under different moods cannot be satisfied, degrading the user experience.
It should be noted that the steps of the method provided by the above embodiment may all be executed by the same device, or the method may be executed by different devices. For example, steps S102 and S104 may be executed by one entity while step S106 is executed by another (e.g., a control unit); alternatively, steps S102, S104, and S106 may all be executed by the same entity; and so on.
In a specific embodiment, illustrated in conjunction with Figs. 5 to 7, the multimedia file selection method of the embodiment of the invention may proceed as follows:
First, open the music module of the terminal device.
Second, the terminal device automatically starts the camera in the background and photographs the user; the capture may take place without the user noticing it.
Third, identify the user's mood from the acquired image by facial expression recognition technology.
Fourth, if expression recognition succeeds, save the user's current mood. If recognition fails — since the image acquired when the music module is opened is not necessarily accurate — facial recognition performed at unlock can also be used to acquire an image; the current mood is saved once recognition succeeds.
Alternatively, the user can select the current mood with the mood change button in Fig. 7, and the currently recommended song list is refreshed according to the mood selected by the user.
Fifth, after obtaining the user's current mood, report the current mood to the server.
Sixth, the server selects a song list from the mood-classified library according to the mood data.
Seventh, deliver the selected song list to the client for the user to choose from or enjoy.
It can be seen that the multimedia file selection method of the embodiment of the invention uses facial expression recognition technology to identify the current user's mood from the user's expression, reports the mood data to the server, and recommends a corresponding song list to the user from the mood-classified library in the server, soothing the user's current mood and thereby compensating well for the shortcomings of recommending songs according to user tags.
In addition, the multimedia file selection method of the embodiment of the invention — recommending according to the user's mood through facial expression recognition technology — can also be applied to fields such as text, images, and video, for example news feed recommendation and short-video content recommendation.
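The first through seventh steps above can be sketched end to end as follows, with the server modelled as a local function. The mood-library layout and the report/deliver interface are assumptions for illustration only.

```python
# End-to-end sketch of the specific embodiment's steps, with the server
# modelled as a local function. The mood-library layout and the
# report/deliver interface are assumptions for illustration only.

MOOD_LIBRARY = {            # server-side song library classified by mood
    "happy": ["upbeat song A", "upbeat song B"],
    "sad": ["soothing song C"],
}

def server_select_song_list(mood):
    """Sixth step: the server selects a song list from the mood library."""
    return list(MOOD_LIBRARY.get(mood, []))

def on_music_module_opened(recognize_mood):
    """First to seventh steps on the client: recognize the mood in the
    background, report it to the server, and receive the song list."""
    mood = recognize_mood()   # background camera + expression recognition
    if mood is None:
        return None, []       # wait for the unlock capture or manual choice
    return mood, server_select_song_list(mood)
```

In a deployment, `server_select_song_list` would sit behind a network call (the fifth and seventh steps), and the `None` branch would hand over to the unlock-time capture or the mood change button.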
An embodiment of the invention may also provide a multimedia file selection device. As shown in Fig. 8, the selection device may include: a first image acquisition device 802 for obtaining a facial image of the user; an emotion type confirmation unit 804 for identifying the emotion type of the user according to the expression features of the facial image; and a multimedia file confirmation unit 806 for determining the multimedia file matching the emotion type of the user as the target multimedia file. The emotion type confirmation unit 804 may be configured to input the image features of the facial image at the input end of the expression-emotion model and obtain the user's emotion type at the output end of the model, wherein the expression-emotion model is trained on facial image features under a variety of expressions and the corresponding emotion types.
The first image acquisition device 802 may collect the facial image in response to the start of the multimedia playback module of the terminal device. Here, the user may start the multimedia playback module by clicking or touching it, or the module may start automatically through other processing operations; at that point, the first image acquisition device 802 (e.g., a camera) collects the facial image of the user.
Since the multimedia file selection device of the embodiment of the invention identifies the expression features of the facial image acquired by the first image acquisition device 802 through the emotion type confirmation unit 804 to obtain the user's emotion type, the multimedia file confirmation unit 806 can then determine the multimedia file matching that emotion type as the target multimedia file. In this way, the device can select music that matches the user's current mood, and can recommend songs consistent with the current mood whenever the user's mood differs, thereby improving the user experience. That is, it solves the problem that song recommendation needs under different moods cannot be satisfied, degrading the user experience.
In addition, the multimedia selection device of the embodiment of the invention identifies the user's current emotion type by facial image recognition technology, which is not only real-time but also works even when the user does not use social media; it can still select multimedia files that better match the user's current mood for the user to enjoy.
In the above embodiment, the emotion type confirmation unit 804 may be configured to: if the expression-emotion model does not output an emotion type corresponding to the image features of the facial image, reacquire the facial image of the user and use the image features of the reacquired facial image as the input of the expression-emotion model, so as to obtain the emotion type of the user.
That is, a single recognition attempt on the acquired facial image may fail to identify the user's emotion type. In that case, the facial image of the user can be reacquired and the emotion type obtained in the same manner (i.e., by using the image features of the reacquired facial image as the input of the expression-emotion model). The reacquisition may be performed by a second image acquisition device 808.
That is, the multimedia file selection device may further include: a second image acquisition device 808 for responding to the facial image unlock operation of the terminal device and reacquiring the facial image of the user. Of course, the reacquisition here may also use other image acquisition methods; any method that can obtain the image features of the facial image is acceptable, and this embodiment is not limited to a particular acquisition method.
If identifying the user's emotion type from the reacquired facial image still fails, the emotion type confirmation unit 804 of the embodiment of the invention may be further configured to: if the expression-emotion model does not output an emotion type corresponding to the image features of the reacquired facial image either, obtain the emotion type of the user in response to user input. The user input may be a parameter entered manually by the user, or a parameter input via a mood change button that the user clicks or touches on the terminal device.
That is, when recognition of the user's emotion type from the acquired facial image fails, the user can input the current emotion type manually in order to improve the user experience. Alternatively, the terminal device may generally be provided with a mood change button with which the user selects or changes the current emotion type; the user's emotion type can then be obtained directly.
In any of the above embodiments, the multimedia file selection device may further include a playback unit 810 for playing the target multimedia file, wherein the target multimedia file may include a music file or a video file. The playback unit may play a music file or a video file alone, or play a music file and a video file simultaneously.
It should be understood that after the user's current emotion type is obtained by facial image recognition technology and the multimedia file matching that emotion type is determined as the target multimedia file, the target multimedia file can be played by the playback unit 810. In this way, the multimedia file selection device of the embodiment of the invention can select music that matches the user's current mood, and can recommend and play songs consistent with the current mood whenever the user's mood differs, thereby improving the user experience. That is, it solves the problem that song recommendation needs under different moods cannot be satisfied, degrading the user experience.
The multimedia selection device or method of any of the above embodiments identifies the user's mood in real time from the user's expression through facial recognition technology: when the user opens the phone and uses the music module, the background starts the camera module, the expression of the current user is identified by facial recognition technology, and the user's current mood is judged against an expression library (i.e., the current mood is identified in real time by facial recognition and the expression library); finally, the server recommends songs for the corresponding mood (i.e., songs are recommended online from the corresponding mood library according to the user's mood), thereby improving the user experience.
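The refresh behaviour described above — the recommended list following the current mood, whether recognized from the face or selected via the mood change button of Fig. 7 — can be sketched as a small session object. The class and method names are illustrative assumptions.

```python
class MoodPlaylistSession:
    """Tracks the user's current mood and the recommended song list,
    refreshing the list whenever the mood changes (either recognized
    from the facial expression or selected via the mood change button)."""

    def __init__(self, mood_library):
        self.mood_library = mood_library  # mood -> list of songs
        self.mood = None
        self.playlist = []

    def set_mood(self, mood):
        """Save the current mood and refresh the recommended song list."""
        self.mood = mood
        self.playlist = list(self.mood_library.get(mood, []))
        return self.playlist
```

Calling `set_mood` from both the recognition path and the button handler keeps the displayed list consistent with whichever source last reported a mood.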
Figure 10 is a schematic diagram of the hardware structure of a terminal device implementing embodiments of the invention. As shown in Fig. 10, the terminal device 1000 includes, but is not limited to: a radio frequency unit 1001, a network module 1002, an audio output unit 1003, an input unit 1004, a sensor 1005, a display unit 1006, a user input unit 1007, an interface unit 1008, a memory 1009, a processor 1010, a power supply 1011, and other components. Those skilled in the art will understand that the terminal device structure shown in Fig. 10 does not limit the terminal device; the terminal device may include more or fewer components than shown, combine certain components, or arrange the components differently. In embodiments of the invention, the terminal device includes, but is not limited to, a mobile phone, a tablet computer, a laptop computer, a palmtop computer, an in-vehicle terminal, a wearable device, a pedometer, and the like.
The processor 1010 is configured to execute the following method:
obtaining a facial image of the user;
identifying the emotion type of the user based on the expression features of the facial image; and
taking the multimedia file matching the emotion type of the user as the target multimedia file.
Since the multimedia file selection method in the embodiment of the invention identifies the expression features of the acquired facial image to obtain the user's emotion type, the multimedia file matching that emotion type can be determined as the target multimedia file. In this way, the method can select music that matches the user's current mood, and can recommend songs consistent with the current mood whenever the user's mood differs, thereby improving the user experience. That is, it solves the problem that song recommendation needs under different moods cannot be satisfied, degrading the user experience.
It should be understood that, in embodiments of the invention, the radio frequency unit 1001 may be used to receive and send signals during messaging or a call; specifically, it receives downlink data from a base station and passes it to the processor 1010 for processing, and sends uplink data to the base station. In general, the radio frequency unit 1001 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a low-noise amplifier, a duplexer, and the like. In addition, the radio frequency unit 1001 may also communicate with networks and other devices through a wireless communication system.
The terminal device provides the user with wireless broadband Internet access through the network module 1002, for example helping the user send and receive e-mail, browse web pages, and access streaming video.
The audio output unit 1003 may convert audio data received by the radio frequency unit 1001 or the network module 1002, or stored in the memory 1009, into an audio signal and output it as sound. Moreover, the audio output unit 1003 may also provide audio output related to a specific function performed by the terminal device 1000 (for example, a call signal reception sound or a message reception sound). The audio output unit 1003 includes a speaker, a buzzer, a receiver, and the like.
The input unit 1004 is used to receive audio or video signals. The input unit 1004 may include a graphics processing unit (Graphics Processing Unit, GPU) 10041 and a microphone 10042. The graphics processing unit 10041 processes the image data of still pictures or video obtained by an image capture apparatus (such as a camera) in video capture mode or image capture mode. The processed image frames may be displayed on the display unit 1006, stored in the memory 1009 (or another storage medium), or transmitted via the radio frequency unit 1001 or the network module 1002. The microphone 10042 can receive sound and process it into audio data; in telephone call mode, the processed audio data can be converted into a format that can be sent to a mobile communication base station via the radio frequency unit 1001 and output.
The terminal device 1000 further includes at least one sensor 1005, for example an optical sensor, a motion sensor, and other sensors. Specifically, the optical sensor includes an ambient light sensor and a proximity sensor: the ambient light sensor can adjust the brightness of the display panel 10061 according to the ambient light, and the proximity sensor can turn off the display panel 10061 and/or the backlight when the terminal device 1000 is moved to the ear. As a kind of motion sensor, an accelerometer can detect the magnitude of acceleration in all directions (generally three axes), and can detect the magnitude and direction of gravity when static; it can be used to identify the posture of the terminal device (such as portrait/landscape switching, related games, and magnetometer posture calibration) and for vibration-related functions (such as a pedometer or tap detection). The sensor 1005 may also include a fingerprint sensor, a pressure sensor, an iris sensor, a molecular sensor, a gyroscope, a barometer, a hygrometer, a thermometer, an infrared sensor, and the like; the infrared sensor can measure the distance between an object and the terminal device by emitting and receiving infrared light, which is not described further here.
The display unit 1006 is used to display information input by the user or information provided to the user. The display unit 1006 may include a display panel 10061, which may be configured in the form of a liquid crystal display (Liquid Crystal Display, LCD), an organic light-emitting diode (Organic Light-Emitting Diode, OLED) display, or the like.
The user input unit 1007 may be used to receive input numeric or character information and to generate key signal inputs related to the user settings and function control of the terminal device. Specifically, the user input unit 1007 includes a touch panel 10071 and other input devices 10072. The touch panel 10071, also called a touch screen, collects touch operations by the user on or near it (for example, operations by the user on or near the touch panel 10071 using a finger, a stylus, or any other suitable object or accessory). The touch panel 10071 may include two parts: a touch detection apparatus and a touch controller. The touch detection apparatus detects the touch orientation of the user, detects the signal produced by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection apparatus, converts it into contact coordinates, sends them to the processor 1010, and receives and executes the commands sent by the processor 1010. Furthermore, the touch panel 10071 may be implemented in multiple types, such as resistive, capacitive, infrared, and surface acoustic wave. Besides the touch panel 10071, the user input unit 1007 may also include other input devices 10072, which may include, but are not limited to, a physical keyboard, function keys (such as volume control keys and a switch key), a trackball, a mouse, and a joystick, which are not described further here.
Further, the touch panel 10071 may cover the display panel 10061. After detecting a touch operation on or near it, the touch panel 10071 transmits the operation to the processor 1010 to determine the type of the touch event, and the processor 1010 then provides a corresponding visual output on the display panel 10061 according to the type of the touch event. Although in Figure 10 the touch panel 10071 and the display panel 10061 are shown as two independent components implementing the input and output functions of the terminal device, in some embodiments the touch panel 10071 and the display panel 10061 may be integrated to implement the input and output functions of the terminal device; this is not specifically limited herein.
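The signal flow described above (touch detection apparatus → touch controller → processor → display panel) can be sketched as follows; this is a hypothetical illustration, and the event kinds, field names, and responses are assumptions not specified by this disclosure:

```python
def touch_controller(raw_signal):
    # Touch controller: convert the detection apparatus's raw signal
    # into contact coordinates before handing it to the processor.
    return {"x": raw_signal["px"], "y": raw_signal["py"],
            "kind": raw_signal["kind"]}

def processor_dispatch(event):
    # Processor: choose a visual output on the display panel
    # according to the type of the touch event.
    if event["kind"] == "tap":
        return f"highlight at ({event['x']}, {event['y']})"
    if event["kind"] == "long_press":
        return f"context menu at ({event['x']}, {event['y']})"
    return "ignore"

raw = {"px": 120, "py": 340, "kind": "tap"}
print(processor_dispatch(touch_controller(raw)))  # prints "highlight at (120, 340)"
```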
The interface unit 1008 is an interface through which an external device is connected to the terminal device 1000. For example, the external device may include a wired or wireless headset port, an external power supply (or battery charger) port, a wired or wireless data port, a memory card port, a port for connecting a device having an identification module, an audio input/output (I/O) port, a video I/O port, an earphone port, and the like. The interface unit 1008 may be configured to receive input (for example, data information or electric power) from an external device and transmit the received input to one or more elements within the terminal device 1000, or may be used to transmit data between the terminal device 1000 and an external device.
The memory 1009 may be configured to store software programs and various data. The memory 1009 may mainly include a program storage area and a data storage area. The program storage area may store an operating system and at least one application required by a function (such as a sound-playing function or an image-playing function); the data storage area may store data created according to the use of the mobile phone (such as audio data and a phone book). In addition, the memory 1009 may include a high-speed random access memory, and may further include a non-volatile memory, for example, at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The processor 1010 is the control center of the terminal device. It connects all parts of the entire terminal device through various interfaces and lines, and performs the various functions of the terminal device and processes data by running or executing the software programs and/or modules stored in the memory 1009 and invoking the data stored in the memory 1009, thereby monitoring the terminal device as a whole. The processor 1010 may include one or more processing units; preferably, the processor 1010 may integrate an application processor and a modem processor, where the application processor mainly handles the operating system, user interfaces, applications, and the like, and the modem processor mainly handles wireless communication. It can be understood that the modem processor may alternatively not be integrated into the processor 1010.
The terminal device 1000 may further include a power supply 1011 (such as a battery) for supplying power to the components. Preferably, the power supply 1011 may be logically connected to the processor 1010 through a power management system, so that functions such as charge management, discharge management, and power consumption management are implemented through the power management system.
In addition, the terminal device 1000 includes some functional modules that are not shown, which are not described herein.
Preferably, an embodiment of the present invention further provides a terminal device, which may include a processor 1010, a memory 1009, and a computer program that is stored in the memory 1009 and executable on the processor 1010. When executed by the processor 1010, the computer program implements each process of the method embodiment shown in FIG. 1 and can achieve the same technical effects; to avoid repetition, details are not described herein again.
Further, the terminal device may also include a charge switch. The charge switch may be configured so that, when the channel interference is greater than or equal to a preset threshold, it operates with the charge cycle that the mapping relationship associates with a channel interference of the current communication channel below the preset threshold. Of course, when the channel interference is less than the preset threshold, the charge switch keeps the current charge cycle unchanged.
An embodiment of the present invention further provides a computer-readable storage medium on which a computer program is stored. When executed by a processor, the computer program implements each process of the method shown in FIG. 1 and can achieve the same technical effects; to avoid repetition, details are not described herein again. The computer-readable storage medium may be, for example, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, or an optical disc.
Those skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware aspects. Moreover, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including but not limited to magnetic disk storage, CD-ROM, and optical memory) containing computer-usable program code.
It should also be noted that the terms "include", "comprise", or any other variant thereof are intended to cover a non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further limitation, an element defined by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes the element.
The above descriptions are merely embodiments of the present invention and are not intended to limit the present invention. For those skilled in the art, the present invention may have various modifications and variations. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present invention shall fall within the scope of the claims of the present invention.
Claims (12)
1. A method for selecting a multimedia file, comprising:
obtaining a facial image of a user;
identifying an emotion type of the user based on expression features of the facial image; and
taking a multimedia file that matches the emotion type of the user as a target multimedia file.
2. the method according to claim 1, wherein the expressive features based on the facial image identify the use
The type of emotion at family, comprising:
The characteristics of image of the facial image is inputted in expression-mood model input terminal, in the expression-mood model
Output end obtains the type of emotion of the user, wherein the expression-mood model is based on the facial image under a variety of expressions
Feature and the corresponding type of emotion training of expression obtain.
3. the method according to claim 1, wherein further include:
If the corresponding type of emotion of characteristics of image that the expression-mood model does not export the facial image, reacquires
The facial image of user, and using the characteristics of image of the facial image of reacquisition as the input of expression-mood model, to obtain
The type of emotion of the user.
4. The method according to claim 3, further comprising:
if the expression-emotion model does not output an emotion type corresponding to the image features of the reacquired facial image, responding to an input of the user to obtain the emotion type of the user.
5. The method according to claim 3, wherein reacquiring the facial image comprises:
responding to a facial-image unlock operation of the terminal device to reacquire the facial image of the user.
6. An apparatus for selecting a multimedia file, comprising:
a first image acquisition device, configured to obtain a facial image of a user;
an emotion type confirmation unit, configured to identify the emotion type of the user according to expression features of the facial image; and
a multimedia file confirmation unit, configured to determine a multimedia file that matches the emotion type of the user as a target multimedia file.
7. The apparatus according to claim 6, wherein the emotion type confirmation unit is configured to:
input image features of the facial image into an input end of an expression-emotion model, and obtain the emotion type of the user at an output end of the expression-emotion model, wherein the expression-emotion model is trained based on facial image features under a variety of expressions and the emotion types corresponding to those expressions.
8. The apparatus according to claim 6, wherein the emotion type confirmation unit is configured to: if the expression-emotion model does not output an emotion type corresponding to the image features of the facial image, reacquire the facial image of the user, and use the image features of the reacquired facial image as the input of the expression-emotion model to obtain the emotion type of the user.
9. The apparatus according to claim 8, wherein the emotion type confirmation unit is configured to:
if the expression-emotion model still does not output an emotion type corresponding to the image features of the reacquired facial image, respond to an input of the user to obtain the emotion type of the user.
10. The apparatus according to claim 8, further comprising:
a second image acquisition device, configured to respond to a facial-image unlock operation of the terminal device and reacquire the facial image of the user.
11. A terminal device, comprising: a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the computer program, when executed by the processor, implements the steps of the method according to any one of claims 1 to 5.
12. A computer-readable medium on which a computer program is stored, wherein the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 5.
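The selection flow of claims 1 to 4 (capture a facial image, identify the emotion type, retry on model failure, then fall back to user input, and finally return matching multimedia) can be sketched as follows. This is an illustrative sketch only: the emotion labels, the media library, and the `extract_features`/`predict_emotion` stand-ins are assumptions, not the claimed expression-emotion model.

```python
# Hypothetical emotion-to-media mapping; the claims do not specify one.
MEDIA_BY_EMOTION = {
    "happy": ["upbeat_song.mp3"],
    "sad": ["comfort_playlist.mp3"],
}

def extract_features(face_image):
    # Placeholder feature extractor; a real system would compute
    # expression features from the captured facial image.
    return face_image

def predict_emotion(features):
    # Stand-in for the expression-emotion model: returns an emotion
    # label, or None when it cannot classify (claim 3's failure case).
    return features.get("emotion")

def select_media(capture_face, max_retries=1, ask_user=None):
    """Claims 1-4: capture a facial image, identify the user's emotion
    type, and return multimedia files matching that emotion."""
    for _ in range(max_retries + 1):
        emotion = predict_emotion(extract_features(capture_face()))
        if emotion is not None:
            # Claim 1: the matched file becomes the target multimedia file.
            return MEDIA_BY_EMOTION.get(emotion, [])
    # Claim 4: when the model still yields nothing, ask the user directly.
    if ask_user is not None:
        return MEDIA_BY_EMOTION.get(ask_user(), [])
    return []
```

Claim 5's variant would simply wire `capture_face` to the image captured during a face-unlock operation of the terminal device.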
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201810981573.5A CN109189953A (en) | 2018-08-27 | 2018-08-27 | A kind of selection method and device of multimedia file |
Publications (1)
Publication Number | Publication Date |
---|---|
CN109189953A (en) | 2019-01-11 |
Family
ID=64916271
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201810981573.5A Pending CN109189953A (en) | 2018-08-27 | 2018-08-27 | A kind of selection method and device of multimedia file |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109189953A (en) |
- 2018-08-27: CN application CN201810981573.5A filed; patent CN109189953A (en), status Pending
Patent Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102300163A (en) * | 2011-09-22 | 2011-12-28 | 宇龙计算机通信科技(深圳)有限公司 | Information pushing method, mobile terminal and system |
CN105956059A (en) * | 2016-04-27 | 2016-09-21 | 乐视控股(北京)有限公司 | Emotion recognition-based information recommendation method and apparatus |
CN106357927A (en) * | 2016-10-31 | 2017-01-25 | 维沃移动通信有限公司 | Playing control method and mobile terminal |
CN108733209A (en) * | 2018-03-21 | 2018-11-02 | 北京猎户星空科技有限公司 | Man-machine interaction method, device, robot and storage medium |
Cited By (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109640119A (en) * | 2019-02-21 | 2019-04-16 | 百度在线网络技术(北京)有限公司 | Method and apparatus for pushed information |
CN110175245A (en) * | 2019-06-05 | 2019-08-27 | 腾讯科技(深圳)有限公司 | Multimedia recommendation method, device, equipment and storage medium |
CN110334669A (en) * | 2019-07-10 | 2019-10-15 | 深圳市华腾物联科技有限公司 | A kind of method and apparatus of morphological feature identification |
CN110334669B (en) * | 2019-07-10 | 2021-06-08 | 深圳市华腾物联科技有限公司 | Morphological feature recognition method and equipment |
CN112883209A (en) * | 2019-11-29 | 2021-06-01 | 阿里巴巴集团控股有限公司 | Recommendation method and processing method, device, equipment and readable medium for multimedia data |
CN111326235A (en) * | 2020-01-21 | 2020-06-23 | 京东方科技集团股份有限公司 | Emotion adjusting method, device and system |
CN111326235B (en) * | 2020-01-21 | 2023-10-27 | 京东方科技集团股份有限公司 | Emotion adjustment method, equipment and system |
CN111414883A (en) * | 2020-03-27 | 2020-07-14 | 深圳创维-Rgb电子有限公司 | Program recommendation method, terminal and storage medium based on face emotion |
CN114564604A (en) * | 2022-03-01 | 2022-05-31 | 北京字节跳动网络技术有限公司 | Media collection generation method and device, electronic equipment and storage medium |
CN114564604B (en) * | 2022-03-01 | 2023-08-08 | 抖音视界有限公司 | Media collection generation method and device, electronic equipment and storage medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN106878809B (en) | A kind of video collection method, playback method, device, terminal and system | |
CN109189953A (en) | A kind of selection method and device of multimedia file | |
CN109683847A (en) | A kind of volume adjusting method and terminal | |
CN107633098A (en) | A kind of content recommendation method and mobile terminal | |
CN108833262B (en) | Session processing method, device, terminal and storage medium | |
CN108920059A (en) | Message treatment method and mobile terminal | |
CN109446775A (en) | A kind of acoustic-controlled method and electronic equipment | |
CN108897473A (en) | A kind of interface display method and terminal | |
CN110162254A (en) | A kind of display methods and terminal device | |
CN108984066A (en) | A kind of application icon display methods and mobile terminal | |
CN109151176A (en) | A kind of information acquisition method and terminal | |
CN109634438A (en) | A kind of control method and terminal device of input method | |
CN109857297A (en) | Information processing method and terminal device | |
CN107862059A (en) | A kind of song recommendations method and mobile terminal | |
CN109085963A (en) | A kind of interface display method and terminal device | |
CN110069675A (en) | A kind of search method and mobile terminal | |
CN109816679A (en) | A kind of image processing method and terminal device | |
CN109495638A (en) | A kind of information display method and terminal | |
CN109726303A (en) | A kind of image recommendation method and terminal | |
CN108763475A (en) | A kind of method for recording, record device and terminal device | |
CN108196781A (en) | The display methods and mobile terminal at interface | |
CN107728920A (en) | A kind of clone method and mobile terminal | |
CN110188252A (en) | A kind of searching method and terminal | |
CN109783722A (en) | A kind of content outputting method and terminal device | |
CN109743448A (en) | Reminding method and terminal device |
Legal Events
Date | Code | Title | Description
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20190111 |