WO2020015657A1 - Mobile terminal, and method and apparatus for pushing video - Google Patents

Mobile terminal, and method and apparatus for pushing video

Info

Publication number
WO2020015657A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
mood
mood state
user
data
Application number
PCT/CN2019/096212
Other languages
French (fr)
Chinese (zh)
Inventor
高杰
Original Assignee
奇酷互联网络科技(深圳)有限公司
Application filed by 奇酷互联网络科技(深圳)有限公司
Publication of WO2020015657A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442 Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213 Monitoring of end-user related data
    • H04N21/44218 Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20 Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25 Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667 Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45 Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466 Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668 Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies

Definitions

  • the present invention relates to the field of computers, and in particular, to a mobile terminal and a method and device for pushing video.
  • Existing mobile phones offer not only calling functions but also wallet features and entertainment functions such as playing games and watching movies.
  • Mobile phones have become inseparable daily necessities, accompanying users like close friends, yet existing phones still cannot identify the user's current mood state or provide a suitable form of companionship accordingly; for example, when the user is in a bad mood, the phone cannot help the user adjust it.
  • The way mobile phones accompany users is therefore not intelligent enough and needs to be improved.
  • the main purpose of the present invention is to provide a method for pushing video, which aims to solve the problem that the existing mobile phone cannot identify the user's mood state and cannot help the user to adjust the mood.
  • To achieve this object, the present invention proposes a method for pushing video, applied to a mobile terminal and including the steps of: acquiring facial feature data when the screen of the mobile terminal is unlocked; analyzing the user's current mood state according to the facial feature data; determining whether the mood state is a preset unhappy mood state; and, if so, pushing a specified video to the user according to a preset rule to adjust the user's mood.
  • Further, the step of analyzing the user's current mood state according to the facial feature data includes: sending the facial feature data to a server; and receiving the mood analysis data obtained after the server inputs the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  • the mood state recognition model is obtained by inputting the collected expression data into a training model as a training sample.
  • Further, the step of determining whether the mood state is a preset unhappy mood state includes: inputting the preset unhappy mood state into the mood recognition model to obtain a first vector; inputting the facial feature data into the mood state recognition model to obtain a second vector; calculating the Euclidean distance between the second vector and the first vector; and, if the Euclidean distance is less than a preset value, determining that the mood state is the preset unhappy mood state.
  • Further, the step of pushing a specified video to the user according to a preset rule includes: obtaining the level corresponding to the user's current unhappy mood state; and selecting a video of a preset amusement level for the user according to the level corresponding to the unhappy mood state.
  • the preset amusement level is related to the comedy index of the video.
  • Further, after the step of selecting a video of the corresponding amusement level according to the level of the unhappy mood state, the method includes: collecting facial feature change data in real time while the user watches the video and sending it to a server; receiving, in real time, the mood state change data obtained by the server from analyzing the facial feature change data; judging whether the pushed video is suitable according to the change trend of the mood state change data; and, if not, re-screening higher-rated videos pushed by the server.
  • Further, after the step of pushing a specified video to the user according to a preset rule, the method includes: collecting statistics on the user's rating data for each recommended video; and updating each recommended video, in one-to-one correspondence with its rating data, into the video level folder corresponding to that rating data.
  • This application also provides a device for pushing video, which is integrated in a mobile terminal and includes:
  • An obtaining module configured to obtain facial feature data when the screen of the mobile terminal is unlocked
  • An analysis module configured to analyze a user's current mood state according to the facial feature data
  • a judging module configured to judge whether the mood state is a preset unhappy mood state
  • a push module, configured to push a specified video to the user according to a preset rule to adjust the user's mood if the mood state is the preset unhappy mood state.
  • Further, the analysis module includes:
  • a sending unit configured to send the facial feature data to a server
  • a first receiving unit is configured to receive mood analysis data obtained by the server after inputting the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  • the mood state recognition model is obtained by inputting the collected expression data into a training model as a training sample.
  • Further, the first receiving unit is also configured to: receive a first vector obtained by inputting the preset unhappy mood state into the mood recognition model; and receive a second vector obtained by inputting the facial feature data into the mood state recognition model.
  • The judging module is also configured to: calculate the Euclidean distance between the second vector and the first vector; and, if the Euclidean distance is less than a preset value, determine that the mood state is the preset unhappy mood state.
  • the push module includes:
  • An obtaining unit configured to obtain a level corresponding to the current unhappy mood state of the user
  • a selecting unit, configured to select a video of the preset amusement level for the user according to the level corresponding to the unhappy mood state.
  • the preset amusement level is related to the comedy index of the video.
  • the push module further includes:
  • an acquisition unit, configured to collect facial feature change data in real time while the user watches the video and send it to a server;
  • a second receiving unit, configured to receive, in real time, the mood state change data obtained by the server from analyzing the facial feature change data;
  • a judging unit, configured to judge whether the pushed video is suitable according to the change trend of the mood state change data;
  • a filtering unit, configured to re-screen higher-rated videos pushed by the server if the pushed videos are not suitable.
  • the device for pushing video further includes:
  • a statistics module which is used to collect statistics on user ratings of each recommended video
  • an update module, configured to update each recommended video, according to its rating data, into the video level folder corresponding to that rating data.
  • This application also provides a mobile terminal, including a processor and a memory,
  • the memory is configured to store a program for the device for pushing a video to execute any one of the methods for pushing a video;
  • the processor is configured to execute a program stored in the memory.
  • In the present application, during face recognition screen unlocking, the identified facial feature data is input into a mood state analysis model to obtain the current user's mood state; when the current mood is determined to be unhappy, a corresponding video is pushed to help the user adjust it.
  • The application also grades the videos to be pushed so that videos better matching the user's needs are pushed, quickly and effectively adjusting the user's mood and making the way mobile phones accompany users more intelligent.
  • FIG. 1 is a schematic flowchart of a method for pushing a video according to an embodiment of the present application
  • FIG. 2 is a schematic block diagram of a structure of a device for pushing video according to an embodiment of the present application
  • FIG. 3 is a schematic block diagram of a mobile terminal according to an embodiment of the present application.
  • Referring to FIG. 1, an embodiment of the present application provides a method for pushing video, applied to a mobile terminal and including the steps of: S1, acquiring facial feature data when the screen of the mobile terminal is unlocked; S2, analyzing the user's current mood state according to the facial feature data; S3, determining whether the mood state is a preset unhappy mood state; and S4, if so, pushing a specified video to the user according to a preset rule to adjust the user's mood.
  • the mobile terminal in this embodiment includes, but is not limited to, a mobile phone.
  • The facial feature data in this embodiment is the facial feature data obtained when the phone is unlocked through face recognition; the same data thus serves two purposes, unlocking and mood recognition, which is convenient and fast.
  • This embodiment obtains the user's current mood state from the facial feature data through a mood recognition model.
  • The mood recognition model of this embodiment is obtained by collecting facial expression data of a large number of people in various moods and inputting the collected expression data as training samples into a training model such as a CNN.
  • The preset unhappy mood state in this embodiment is represented by a corresponding first vector output by the mood recognition model. After the user's current facial feature data passes through the mood recognition model, a second vector is output, and the Euclidean distance between the second vector and the first vector is calculated. If the Euclidean distance is less than a preset value, indicating that the second vector is close to the first vector, the user's current mood state is judged to be unhappy; otherwise it is judged to be happy. A minimal sketch of this check follows.
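  • In the sketch below, the feature dimension, the reference vector, and the distance threshold are illustrative assumptions rather than values given in the application.

```python
import numpy as np

# Hypothetical model output for the preset unhappy mood state
# (the "first vector" in the description); dimension chosen for illustration.
UNHAPPY_REFERENCE = np.array([0.9, 0.1, 0.8, 0.2])
DISTANCE_THRESHOLD = 0.5  # assumed preset value

def is_unhappy(second_vector: np.ndarray) -> bool:
    """Judge the current mood unhappy when the model output for the
    current face is close to the reference unhappy vector."""
    distance = np.linalg.norm(second_vector - UNHAPPY_REFERENCE)
    return distance < DISTANCE_THRESHOLD

# A model output near the reference is judged unhappy, a distant one happy.
print(is_unhappy(np.array([0.85, 0.15, 0.75, 0.25])))  # True
print(is_unhappy(np.array([0.1, 0.9, 0.2, 0.8])))      # False
```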
  • In this embodiment, after it is determined that the user's current mood state is unhappy, a corresponding video can be pushed to the user to adjust it.
  • The preset rule in this embodiment is a push or video selection strategy that can better adjust the user's mood, such as pushing videos according to the user's preferences, pushing highly rated videos based on praise data from a large number of users in a large database, pushing preset videos according to different mood state levels, or a combination of these methods.
  • Further, step S2 in this embodiment includes:
  • S20: Send the facial feature data to a server.
  • The training process of the mood recognition model in this embodiment is completed on the server side; it requires a large amount of data and computation, and completing it on the server reduces the operating load of the phone.
  • The mood recognition process of this embodiment is also completed on the server side, to ensure the consistency of the trained mood recognition model.
  • In this embodiment, the acquired facial feature data is sent to the server and then input into the mood recognition model on the server to complete the recognition and analysis of the user's mood state, as illustrated in the sketch below.
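  • The sketch posts the extracted facial feature vector to a server endpoint and reads back the mood analysis data; the URL, request fields, and response format are hypothetical, since the application does not define a transfer protocol.

```python
import requests

# Hypothetical endpoint on the server that hosts the mood recognition model.
SERVER_URL = "https://example.com/api/mood-analysis"

def request_mood_analysis(facial_features: list) -> dict:
    """Upload facial feature data and return the mood analysis data
    (mood state plus its preset level) computed on the server side."""
    response = requests.post(SERVER_URL, json={"features": facial_features}, timeout=5)
    response.raise_for_status()
    return response.json()  # e.g. {"mood": "unhappy", "level": 2}
```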
  • S21: Receive the mood analysis data obtained after the server inputs the facial feature data into the mood state recognition model, where the mood analysis data includes the mood state and the preset level corresponding to the mood state.
  • The data output by the mood recognition model in this embodiment carries the preset level corresponding to the mood state.
  • For example, the unhappy mood state includes preset levels such as dejected, relatively sad, very distressed, or grief-stricken.
  • Refining the unhappy mood state into levels allows videos suited to the user's current mood state to be pushed more accurately.
  • For example, when the unhappy mood state in this embodiment is at the dejected level, the facial expression shows melancholy eyes and downturned corners of the mouth; inputting facial feature data carrying these representative features into the mood recognition model outputs the first classification vector within the first vector.
  • Similarly, facial feature data expressing relative sadness outputs the second classification vector within the first vector, and facial feature data expressing severe distress or grief outputs the third classification vector.
  • The first vector corresponding to the preset unhappy mood state is thus subdivided into several small partitions so that the corresponding video can be selected and pushed more accurately, as sketched below.
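  • In the sketch below, the three classification vectors are treated as assumed centroids, and the level is taken to be the one whose centroid lies nearest to the model output; the vectors themselves are placeholders.

```python
import numpy as np

# Assumed centroids for the three sub-partitions of the unhappy "first vector":
# 1 = dejected, 2 = relatively sad, 3 = severely distressed.
CLASSIFICATION_VECTORS = {
    1: np.array([0.6, 0.3, 0.5, 0.4]),
    2: np.array([0.8, 0.2, 0.7, 0.3]),
    3: np.array([0.95, 0.05, 0.9, 0.1]),
}

def unhappy_level(second_vector: np.ndarray) -> int:
    """Return the preset level whose classification vector is closest
    (by Euclidean distance) to the model output for the current face."""
    return min(CLASSIFICATION_VECTORS,
               key=lambda level: np.linalg.norm(second_vector - CLASSIFICATION_VECTORS[level]))
```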
  • Further, step S4 in this embodiment includes:
  • S40: Obtain the level corresponding to the user's current unhappy mood state.
  • In this embodiment, the vector obtained by inputting the facial feature data of the user's current unhappy mood state into the mood recognition model is compared with the small partitions into which the first vector of the preset unhappy mood state is subdivided; whichever partition it is closest to indicates the corresponding preset level.
  • S41: Select a video of the preset amusement level for the user according to the level corresponding to the unhappy mood state.
  • The preset amusement level in this embodiment is related to the comedy index of the video and includes first-level, second-level, and third-level amusement videos, corresponding respectively to the first, second, and third classification vectors of the first vector for the unhappy mood state.
  • For example, if the current facial expression is a tearful, about-to-cry state corresponding to relative sadness, the second classification vector in the first vector is output; pushing a second-level amusement video can then turn tears into laughter.
  • Pushing a first-level amusement video might fail to adjust the mood, or fail to adjust it quickly, while pushing a third-level amusement video would over-adjust and easily feel frivolous; in this embodiment, adjusting the unhappy mood state to a calm, peaceful mood is considered optimal. A minimal selection sketch follows.
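  • The sketch below maps the unhappy level to an amusement tier and picks a video from that tier; the catalog, its tiers, and the level-to-tier mapping are illustrative placeholders.

```python
import random

# Hypothetical catalog, each entry tagged with an amusement tier derived
# from its comedy index (1 = mild, 2 = moderate, 3 = strong).
VIDEO_CATALOG = [
    {"title": "Gentle sketch", "amusement_level": 1},
    {"title": "Sitcom episode", "amusement_level": 2},
    {"title": "Slapstick special", "amusement_level": 3},
]

# Assumed mapping from unhappy level to amusement tier (deeper sadness,
# stronger comedy), matching the pairing described in this embodiment.
LEVEL_TO_AMUSEMENT = {1: 1, 2: 2, 3: 3}

def pick_video(level: int) -> dict:
    """Pick one video whose amusement tier matches the user's unhappy level."""
    tier = LEVEL_TO_AMUSEMENT[level]
    candidates = [v for v in VIDEO_CATALOG if v["amusement_level"] == tier]
    return random.choice(candidates)
```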
  • step S4 in this embodiment further includes:
  • S42: Collect facial feature change data in real time while the user watches the video and send it to the server.
  • In this embodiment, after the video is pushed, facial feature change data is collected in real time while the user watches it, for example by sampling facial feature data at fixed intervals to form the facial feature change data over the viewing period; the data is then fed into the mood recognition model in sequence for analysis.
  • S43: Receive, in real time, the mood state change data obtained by the server from analyzing the facial feature change data.
  • The mood state change data in this embodiment may be formed from the output vector values corresponding to the facial feature change data; the rise and fall of these values over time constitutes the change trend of the mood state change data.
  • S44: Determine whether the pushed video is suitable according to the change trend of the mood state change data.
  • In this embodiment, a gentle change trend in which the mood state gradually improves is considered best, and this is used to judge whether the pushed video is suitable: if the trend meets this criterion, the video is suitable; otherwise it is not. One concrete reading of this test is sketched below.
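  • In the sketch below, the mood state change data is assumed to arrive as a time series of scores where higher means a better mood; the slope and step thresholds are illustrative assumptions.

```python
def video_is_suitable(mood_scores, min_improvement=0.05, max_step=0.3):
    """Judge the pushed video suitable if the mood scores improve overall
    (positive average step) while changing gently (no abrupt jumps)."""
    if len(mood_scores) < 2:
        return False
    steps = [b - a for a, b in zip(mood_scores, mood_scores[1:])]
    gentle = all(abs(s) <= max_step for s in steps)
    improving = sum(steps) / len(steps) >= min_improvement
    return gentle and improving

# A slowly rising series counts as suitable; an erratic one does not.
print(video_is_suitable([0.2, 0.3, 0.35, 0.45, 0.5]))  # True
print(video_is_suitable([0.2, 0.8, 0.1, 0.9]))         # False
```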
  • Here, "not suitable" means that the pushed video has not had the effect of improving the mood state.
  • In that case, higher-rated videos are re-screened and pushed; the higher-rated videos in this embodiment include videos rated highly in the user's historical data as well as videos rated highly in the large database.
  • Further, after step S4, another embodiment of the present application includes: collecting statistics on the user's rating data for each recommended video; and updating each recommended video, according to its rating data, into the video level folder corresponding to that rating data.
  • In this embodiment, the pushed video can be exited after the user gives a rating upon finishing watching, so that the user's rating data for each recommended video can be collected and counted; the user's own database is then formed from this rating data so that targeted videos can be pushed.
  • In this embodiment, each video is stored in a different folder according to the user's rating of it, so that the user's historically viewed videos are classified by folder as they are stored.
  • For example, with a total score of 10 points, a rating of 0 to 5 corresponds to a negatively reviewed video, a rating of 6 to 8 corresponds to a positively reviewed video, and a rating of 9 to 10 corresponds to a highly rated video.
  • Both the positively reviewed videos and the highly rated videos in this embodiment include videos of different amusement levels; a minimal bucketing sketch follows.
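  • The sketch below implements the folder classification as a simple bucketing rule over the 10-point scale; the folder names are assumptions.

```python
def rating_folder(score: int) -> str:
    """Map a 0-10 user rating to its video level folder, following the
    ranges given in this embodiment."""
    if 0 <= score <= 5:
        return "negative_review"
    if 6 <= score <= 8:
        return "positive_review"
    if 9 <= score <= 10:
        return "highly_rated"
    raise ValueError("rating must be between 0 and 10")

# Group the user's recommended videos by rating folder.
ratings = {"video_a": 4, "video_b": 7, "video_c": 10}
print({title: rating_folder(score) for title, score in ratings.items()})
```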
  • Thus, in the method for pushing video of the present application, during face recognition screen unlocking the identified facial feature data is input into a mood state analysis model to obtain the current user's mood state; when the current mood is unhappy, a corresponding video is pushed to help the user adjust it.
  • The application also grades the videos to be pushed so that videos better matching the user's needs are pushed, quickly and effectively adjusting the user's mood and making the way mobile phones accompany users more intelligent.
  • an embodiment of the present application further provides a device for pushing video, which is integrated in a mobile terminal and includes:
  • the obtaining module 1 is configured to obtain facial feature data when the screen of the mobile terminal is unlocked.
  • the mobile terminal in this embodiment includes, but is not limited to, a mobile phone.
  • The facial feature data in this embodiment is the facial feature data obtained when the phone is unlocked through face recognition; the same data thus serves two purposes, unlocking and mood recognition, which is convenient and fast.
  • the analysis module 2 is configured to analyze a current mood state of the user according to the facial feature data.
  • This embodiment obtains a user's current mood state through a mood recognition model based on face feature data.
  • the mood recognition model of this embodiment is obtained by collecting mood facial expression data of a large number of people, and using the collected expression data as a training sample and inputting it into a training model such as CNN.
  • the determining module 3 is configured to determine whether the mood state is a preset unhappy mood state.
  • The preset unhappy mood state in this embodiment is represented by a corresponding first vector output by the mood recognition model. After the user's current facial feature data passes through the mood recognition model, a second vector is output, and the Euclidean distance between the second vector and the first vector is calculated. If the Euclidean distance is less than a preset value, indicating that the second vector is close to the first vector, the user's current mood state is judged to be unhappy; otherwise it is judged to be happy.
  • the pushing module 4 is configured to push a specified video to the user according to a preset rule to adjust the user's mood if it is a preset unhappy mood state.
  • a corresponding video may be pushed to the user to adjust the mood state of the user.
  • The preset rule in this embodiment is a push or video selection strategy that can better adjust the user's mood, such as pushing videos according to the user's preferences, pushing highly rated videos based on praise data from a large number of users in a large database, pushing preset videos according to different mood state levels, or a combination of these methods.
  • analysis module 2 includes:
  • a sending unit configured to send the facial feature data to a server.
  • the training process of the mood recognition model in this embodiment is completed on the server side.
  • the training process of the mood recognition model requires a large amount of data and calculation processes, and is completed on the server side to reduce the operating load of the mobile phone.
  • the mood recognition process of this embodiment is also completed on the server side to ensure the consistency of the mood recognition model after training.
  • In this embodiment, the acquired facial feature data is sent to the server and then input into the mood recognition model on the server to complete the recognition and analysis of the user's mood state.
  • the first receiving unit is configured to receive mood analysis data obtained by the server after inputting the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  • the data output by the mood recognition model in this embodiment carries a preset level corresponding to a mood state.
  • For example, the unhappy mood state includes preset levels such as dejected, relatively sad, very distressed, or grief-stricken.
  • Refining the unhappy mood state into levels allows videos suited to the user's current mood state to be pushed more accurately.
  • For example, when the unhappy mood state in this embodiment is at the dejected level, the facial expression shows melancholy eyes and downturned corners of the mouth; inputting facial feature data carrying these representative features into the mood recognition model outputs the first classification vector within the first vector.
  • Similarly, facial feature data expressing relative sadness outputs the second classification vector within the first vector, and facial feature data expressing severe distress or grief outputs the third classification vector.
  • The first vector corresponding to the preset unhappy mood state is thus subdivided into several small partitions so that the corresponding video can be selected and pushed more accurately.
  • the push module 4 includes:
  • the obtaining unit is configured to obtain a level corresponding to the current unhappy mood state of the user.
  • In this embodiment, the vector obtained by inputting the facial feature data of the user's current unhappy mood state into the mood recognition model is compared with the small partitions into which the first vector of the preset unhappy mood state is subdivided; whichever partition it is closest to indicates the corresponding preset level.
  • The selecting unit is configured to select a video of the preset amusement level for the user according to the level corresponding to the unhappy mood state.
  • The preset amusement level in this embodiment is related to the comedy index of the video and includes first-level, second-level, and third-level amusement videos, corresponding respectively to the first, second, and third classification vectors of the first vector for the unhappy mood state.
  • For example, if the current facial expression is a tearful, about-to-cry state corresponding to relative sadness, the second classification vector in the first vector is output; pushing a second-level amusement video can then turn tears into laughter.
  • Pushing a first-level amusement video might fail to adjust the mood, or fail to adjust it quickly, while pushing a third-level amusement video would over-adjust and easily feel frivolous.
  • In this embodiment, adjusting the unhappy mood state to a calm, peaceful mood is considered optimal.
  • the push module 4 further includes:
  • The acquisition unit is configured to collect facial feature change data in real time while the user watches the video and send it to the server.
  • In this embodiment, after the video is pushed, facial feature change data is collected in real time while the user watches it, for example by sampling facial feature data at fixed intervals to form the facial feature change data over the viewing period; the data is then fed into the mood recognition model in sequence for analysis.
  • The second receiving unit is configured to receive, in real time, the mood state change data obtained by the server from analyzing the facial feature change data.
  • The mood state change data in this embodiment may be formed from the output vector values corresponding to the facial feature change data; the rise and fall of these values over time constitutes the change trend of the mood state change data.
  • the judging unit is configured to judge whether the pushed video is appropriate according to a change trend of the mood state change data.
  • In this embodiment, a gentle change trend in which the mood state gradually improves is considered best, and this is used to judge whether the pushed video is suitable: if the trend meets this criterion, the video is suitable; otherwise it is not.
  • the filtering unit is configured to re-screen videos with higher ratings pushed by the server if the videos pushed are not suitable.
  • Here, "not suitable" means that the pushed video has not had the effect of improving the mood state.
  • In that case, higher-rated videos are re-screened and pushed; the higher-rated videos in this embodiment include videos rated highly in the user's historical data as well as videos rated highly in the large database.
  • the device for pushing video further includes:
  • the statistics module is used to collect statistics on user ratings of each recommended video.
  • In this embodiment, the pushed video can be exited after the user gives a rating upon finishing watching, so that the user's rating data for each recommended video can be collected and counted; the user's own database is then formed from this rating data so that targeted videos can be pushed.
  • The update module is configured to update each recommended video, according to its rating data, into the video level folder corresponding to that rating data.
  • In this embodiment, each video is stored in a different folder according to the user's rating of it, so that the user's historically viewed videos are classified by folder as they are stored.
  • For example, with a total score of 10 points, a rating of 0 to 5 corresponds to a negatively reviewed video, a rating of 6 to 8 corresponds to a positively reviewed video, and a rating of 9 to 10 corresponds to a highly rated video.
  • Both the positively reviewed videos and the highly rated videos in this embodiment include videos of different amusement levels.
  • Likewise, the device for pushing video of the present application inputs the identified facial feature data into a mood state analysis model during face recognition screen unlocking and obtains the current user's mood state; when the current mood is unhappy, a corresponding video is pushed to help the user adjust it.
  • The application also grades the videos to be pushed so that videos better matching the user's needs are pushed, quickly and effectively adjusting the user's mood and making the way mobile phones accompany users more intelligent.
  • An embodiment of the present invention further provides a mobile terminal including a processor 1080 and a memory 1020, where the memory 1020 is configured to store a program with which the device for pushing video executes the foregoing method for pushing video, and the processor 1080 is configured to execute the program stored in the memory.
  • the mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), and a vehicle-mounted computer.
  • FIG. 3 is a block diagram showing a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present invention.
  • The mobile phone includes components such as a radio frequency (RF) circuit 1010, a memory 1020, an input unit 1030, a display unit 1040, a sensor 1050, an audio circuit 1060, a wireless fidelity (WiFi) module 1070, a processor 1080, and a power supply 1090.
  • the RF circuit 1010 can be used for receiving and transmitting signals during information transmission and reception or during a call.
  • Specifically, downlink information from the base station is received and handed to the processor 1080 for processing; in addition, relevant uplink data is sent to the base station.
  • the RF circuit 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • the RF circuit 1010 can also communicate with a network and other devices through wireless communication.
  • The above wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile communication (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
  • the memory 1020 may be used to store software programs and modules.
  • the processor 1080 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020.
  • The memory 1020 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system and application programs required by at least one function (such as a sound playback function or an image playback function), and the storage data area may store data created through the use of the mobile phone (such as audio data and the phone book).
  • The memory 1020 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
  • the input unit 1030 can be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the mobile phone.
  • the input unit 1030 may include a touch panel 1031 and other input devices 1032.
  • The touch panel 1031, also known as a touch screen, can collect the user's touch operations on or near it (such as operations performed on or near the touch panel 1031 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connection device according to a preset program.
  • the touch panel 1031 may include two parts, a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position and the signal caused by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 1080, and can also receive and execute commands sent by the processor 1080.
  • various types such as resistive, capacitive, infrared, and surface acoustic wave can be used to implement the touch panel 1031.
  • the input unit 1030 may include other input devices 1032.
  • other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, an operation lever, and the like.
  • the display unit 1040 may be used to display information input by the user or information provided to the user and various menus of the mobile phone.
  • the display unit 1040 may include a display panel 1041, and optionally, the display panel 1041 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • Further, the touch panel 1031 may cover the display panel 1041. When the touch panel 1031 detects a touch operation on or near it, it transmits the operation to the processor 1080 to determine the type of the touch event, and the processor 1080 then provides corresponding visual output on the display panel 1041 according to that type.
  • Although in FIG. 3 the touch panel 1031 and the display panel 1041 are implemented as two independent components providing the input and output functions of the mobile phone, in some embodiments the touch panel 1031 and the display panel 1041 may be integrated to implement the input and output functions.
  • the mobile phone may further include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor.
  • the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of the ambient light.
  • The proximity sensor may turn off the display panel 1041 and/or the backlight.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), and can detect the magnitude and direction of gravity when it is stationary.
  • It can be used for applications that recognize the attitude of the mobile phone (such as switching between landscape and portrait, related games, and magnetometer attitude calibration) and for vibration-recognition functions (such as a pedometer or tap detection).
  • The mobile phone may also be equipped with other sensors such as a gyroscope, barometer, hygrometer, thermometer, and infrared sensor, which are not described here again.
  • the audio circuit 1060, the speaker 1061, and the microphone 1062 can provide an audio interface between the user and the mobile phone.
  • The audio circuit 1060 can convert received audio data into an electrical signal and transmit it to the speaker 1061, which converts it into a sound signal for output.
  • Conversely, the microphone 1062 converts collected sound signals into electrical signals, which the audio circuit 1060 receives and converts into audio data; the audio data is then output to the processor 1080 for processing and sent, for example, to another mobile phone via the RF circuit 1010, or output to the memory 1020 for further processing.
  • WiFi is a short-range wireless transmission technology.
  • the mobile phone can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 1070. It provides users with wireless broadband Internet access.
  • Although FIG. 3 shows the WiFi module 1070, it can be understood that it is not an essential part of the mobile phone and can be omitted as needed without changing the essence of the invention.
  • The processor 1080 is the control center of the mobile phone; it connects the various parts of the entire phone through various interfaces and lines, and performs the phone's various functions and processes data by running or executing the software programs and/or modules stored in the memory 1020 and calling the data stored in the memory 1020, thereby monitoring the mobile phone as a whole.
  • the processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, and an application program, etc.
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 1080.
  • the mobile phone also includes a power supply 1090 (such as a battery) for supplying power to various components.
  • The power supply can be logically connected to the processor 1080 through a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
  • the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • the processor 1080 included in the mobile terminal also has the following functions:
  • acquiring facial feature data when the screen of the mobile terminal is unlocked; analyzing the user's current mood state according to the facial feature data; determining whether the mood state is a preset unhappy mood state; and, if so, pushing a specified video to the user according to a preset rule to adjust the user's mood.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • The device embodiments described above are only schematic; for example, the division of units is only a logical functional division, and other divisions are possible in actual implementation: multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • the program may be stored in a computer-readable storage medium.
  • the medium may be a read-only memory, a magnetic disk, or an optical disk.

Abstract

Disclosed are a mobile terminal, and a method and apparatus for pushing a video, wherein the method and apparatus are applied to the mobile terminal, and the method comprises the following steps: acquiring facial feature data when the screen of a mobile terminal is unlocked; according to the facial feature data, analyzing the current mood of a user; determining whether the mood state is a pre-set unhappy mood; and if so, pushing a specific video to the user according to a pre-set rule, so as to adjust the mood of the user. According to the present application, in the process of unlocking a screen through facial recognition, the recognized facial feature data is brought into a mood analysis model for analysis to obtain the mood of the current user; and when it is determined that the current mood of the user is an unhappy state, a corresponding video is pushed to help the user adjust their mood. In the present application, the grading of videos to be pushed is also carried out, so that a video that better conforms to requirements of the user is pushed, so as to achieve the purpose of rapidly and effectively adjusting the mood of the user, and enabling a mobile phone to more intelligently accompany the user.

Description

Mobile terminal and method and device for pushing video
[Technical Field]
The present invention relates to the field of computers, and in particular to a mobile terminal and a method and device for pushing video.
[Background]
Existing mobile phones offer not only calling functions but also wallet features and entertainment functions such as playing games and watching movies. Mobile phones have become inseparable daily necessities, accompanying users like close friends, yet existing phones still cannot identify the user's current mood state or provide a suitable form of companionship accordingly; for example, when the user is in a bad mood, the phone cannot help the user adjust it. The way mobile phones accompany users is therefore not intelligent enough and needs to be improved.
[Summary of the Invention]
The main purpose of the present invention is to provide a method for pushing video, aiming to solve the problem that existing mobile phones cannot identify the user's mood state and cannot help the user adjust their mood.
To achieve the above object, the present invention proposes a method for pushing video, applied to a mobile terminal and comprising the steps of:
acquiring facial feature data when the screen of the mobile terminal is unlocked;
analyzing the user's current mood state according to the facial feature data;
determining whether the mood state is a preset unhappy mood state;
if so, pushing a specified video to the user according to a preset rule to adjust the user's mood.
Further, the step of analyzing the user's current mood state according to the facial feature data includes:
sending the facial feature data to a server;
receiving the mood analysis data obtained after the server inputs the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
Further, the mood state recognition model is obtained by inputting collected expression data as training samples into a training model.
Further, the step of determining whether the mood state is a preset unhappy mood state includes:
inputting the preset unhappy mood state into the mood recognition model to obtain a first vector;
inputting the facial feature data into the mood state recognition model to obtain a second vector;
calculating the Euclidean distance between the second vector and the first vector;
if the Euclidean distance is less than a preset value, determining that the mood state is the preset unhappy mood state.
Further, the step of pushing a specified video to the user according to a preset rule includes:
obtaining the level corresponding to the user's current unhappy mood state;
selecting a video of the preset amusement level for the user according to the level corresponding to the unhappy mood state.
Further, the preset amusement level is related to the comedy index of the video.
Further, after the step of selecting a video of the corresponding amusement level according to the level of the unhappy mood state, the method includes:
collecting facial feature change data in real time while the user watches the video and sending it to the server;
receiving, in real time, the mood state change data obtained by the server from analyzing the facial feature change data;
judging whether the pushed video is suitable according to the change trend of the mood state change data;
if not, re-screening higher-rated videos pushed by the server.
Further, after the step of pushing a specified video to the user according to a preset rule, the method includes:
collecting statistics on the user's rating data for each recommended video;
updating each recommended video, in one-to-one correspondence with its rating data, into the video level folder corresponding to that rating data.
The present application also provides a device for pushing video, integrated in a mobile terminal and comprising:
an obtaining module, configured to obtain facial feature data when the screen of the mobile terminal is unlocked;
an analysis module, configured to analyze the user's current mood state according to the facial feature data;
a judging module, configured to judge whether the mood state is a preset unhappy mood state;
a push module, configured to push a specified video to the user according to a preset rule to adjust the user's mood if the mood state is the preset unhappy mood state.
Further, the analysis module includes:
a sending unit, configured to send the facial feature data to a server;
a first receiving unit, configured to receive the mood analysis data obtained after the server inputs the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
Further, the mood state recognition model is obtained by inputting collected expression data as training samples into a training model.
Further, the first receiving unit is also configured to:
receive a first vector obtained by inputting the preset unhappy mood state into the mood recognition model;
receive a second vector obtained by inputting the facial feature data into the mood state recognition model;
and the judging module is also configured to:
calculate the Euclidean distance between the second vector and the first vector;
if the Euclidean distance is less than a preset value, determine that the mood state is the preset unhappy mood state.
Further, the push module includes:
an obtaining unit, configured to obtain the level corresponding to the user's current unhappy mood state;
a selecting unit, configured to select a video of the preset amusement level for the user according to the level corresponding to the unhappy mood state.
Further, the preset amusement level is related to the comedy index of the video.
Further, the push module also includes:
an acquisition unit, configured to collect facial feature change data in real time while the user watches the video and send it to the server;
a second receiving unit, configured to receive, in real time, the mood state change data obtained by the server from analyzing the facial feature change data;
a judging unit, configured to judge whether the pushed video is suitable according to the change trend of the mood state change data;
a filtering unit, configured to re-screen higher-rated videos pushed by the server if the pushed videos are not suitable.
Further, the device for pushing video also includes:
a statistics module, configured to collect statistics on the user's rating data for each recommended video;
an update module, configured to update each recommended video, in one-to-one correspondence with its rating data, into the video level folder corresponding to that rating data.
The present application also provides a mobile terminal, including a processor and a memory, where
the memory is configured to store a program with which the device for pushing video executes any one of the above methods for pushing video;
and the processor is configured to execute the program stored in the memory.
In the present application, during face recognition screen unlocking, the identified facial feature data is input into a mood state analysis model to obtain the current user's mood state; when the user's current mood is determined to be unhappy, a corresponding video is pushed to help the user adjust it. The application also grades the videos to be pushed so that videos better matching the user's needs are pushed, quickly and effectively adjusting the user's mood and making the way mobile phones accompany users more intelligent.
[Brief Description of the Drawings]
FIG. 1 is a schematic flowchart of a method for pushing video according to an embodiment of the present application;
FIG. 2 is a schematic structural block diagram of a device for pushing video according to an embodiment of the present application;
FIG. 3 is a schematic structural block diagram of a mobile terminal according to an embodiment of the present application.
The realization of the purpose, functional characteristics, and advantages of the present invention will be further explained with reference to the embodiments and the accompanying drawings.
[Detailed Description]
It should be understood that the specific embodiments described herein are only used to explain the present invention and are not intended to limit the present invention.
Referring to FIG. 1, an embodiment of the present application provides a method for pushing video, applied to a mobile terminal and comprising the steps of:
S1: Acquire facial feature data when the screen of the mobile terminal is unlocked.
The mobile terminal in this embodiment includes, but is not limited to, a mobile phone. The facial feature data in this embodiment is the facial feature data obtained when the phone is unlocked through face recognition; the same data thus serves two purposes, unlocking and mood recognition, which is convenient and fast.
S2: Analyze the user's current mood state according to the facial feature data.
This embodiment obtains the user's current mood state from the facial feature data through a mood recognition model. The mood recognition model of this embodiment is obtained by collecting facial expression data of a large number of people in various moods and inputting the collected expression data as training samples into a training model such as a CNN.
S3: Determine whether the mood state is a preset unhappy mood state.
The preset unhappy mood state in this embodiment is represented by a corresponding first vector output by the mood recognition model. After the user's current facial feature data passes through the mood recognition model, a second vector is output, and the Euclidean distance between the second vector and the first vector is calculated. If the Euclidean distance is less than a preset value, indicating that the second vector is close to the first vector, the user's current mood state is judged to be unhappy; otherwise it is judged to be happy.
S4: If so, push a specified video to the user according to a preset rule to adjust the user's mood.
In this embodiment, after it is determined that the user's current mood state is unhappy, a corresponding video can be pushed to the user to adjust it. The preset rule in this embodiment is a push or video selection strategy that can better adjust the user's mood, such as pushing videos according to the user's preferences, pushing highly rated videos based on praise data from a large number of users in a large database, pushing preset videos according to different mood state levels, or a combination of these methods.
进一步地,本实施例的步骤S2,包括:Further, step S2 in this embodiment includes:
S20:将所述人脸特征数据发送至服务器。S20: Send the facial feature data to a server.
本实施例的心情识别模型的训练过程在服务器端完成,心情识别模型的训练过程需要大量的数据以及计算过程,在服务器端完成以降低手机的运行负荷。本实施例的心情识别过程也在服务器端完成,以保证训练后的心情识别模型的一致性,本实施例通过将获取的人脸特征数据发送至服务器,然后通过服务器输入到心情识别模型中,以完成用户的心情状态识别分析。The training process of the mood recognition model in this embodiment is completed on the server side. The training process of the mood recognition model requires a large amount of data and calculation processes, and is completed on the server side to reduce the operating load of the mobile phone. The mood recognition process of this embodiment is also completed on the server side to ensure the consistency of the mood recognition model after training. In this embodiment, the obtained facial feature data is sent to the server, and then input into the mood recognition model through the server. In order to complete the user's mood state recognition analysis.
S21:接收服务器将所述人脸特征数据输入到心情状态识别模型后,得到的心情分析数据,其中所述心情分析数据包括心情状态,以及心情状态对应的预设等级。S21: The mood analysis data obtained by the receiving server after inputting the facial feature data into a mood state recognition model, wherein the mood analysis data includes a mood state and a preset level corresponding to the mood state.
本实施例的心情识别模型输出的数据中,携带心情状态对应的预设等级。比如不开心心情状态包括失落、比较难过、非常痛苦或悲痛欲绝等预设等级,本实施例通过将不开心心情状态进行细化等级以便更精准地推送适合用户当前的心情状态的视频。举例地,本实施例的不开心心情状态为失落等级时,人脸的表情表现为眼神忧郁,嘴角下拉,将带有上述失落等级代表特征的人脸特征数据输入到心情识别模型后,输出第 一向量中的第一分类向量;同理比较难过的人脸特征数据输入到心情识别模型后,输出第一向量中的第二分类向量;非常痛苦或悲痛欲绝的人脸特征数据输入到心情识别模型后,输出第一向量中的第三分类向量;预设不开心心情状态对应的第一向量被细分成几个小分区,以便更精准地选择相应的视频推送。The data output by the mood recognition model in this embodiment carries a preset level corresponding to a mood state. For example, the unhappy mood state includes preset levels such as lost, relatively sad, very distressed, or distressed. In this embodiment, the unhappy mood state is refined to more accurately push videos suitable for the current mood state of the user. For example, when the unhappy mood state of this embodiment is a loss level, the facial expression of the face is expressed as melancholy, and the corners of the mouth are pulled down. After inputting the facial feature data with the representative characteristics of the loss level into the mood recognition model, the first The first classification vector in a vector; similarly sad face feature data is input to the mood recognition model, and the second classification vector in the first vector is output; very painful or distressed face feature data is input into the mood After identifying the model, the third classification vector in the first vector is output; the first vector corresponding to the preset unhappy mood state is subdivided into several small partitions in order to more accurately select the corresponding video push.
进一步地,本实施例的步骤S4,包括:Further, step S4 in this embodiment includes:
S40:获取用户当前的不开心心情状态所对应的等级。S40: Acquire the level corresponding to the current unhappy mood state of the user.
本实施例通过将用户当前的不开心心情状态对应的人脸特征数据，输入到心情识别模型后得到的向量值，与预设不开心心情状态对应的第一向量被细分成的几个小分区中的哪个分区更接近，则说明对应哪个分区对应的预设等级。In this embodiment, the facial feature data corresponding to the user's current unhappy mood state is input into the mood recognition model to obtain a vector value; whichever of the small partitions into which the first vector of the preset unhappy mood state has been subdivided is closest to this vector value indicates the corresponding preset level.
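One possible reading of this nearest-partition check is sketched below; the partition centres are invented example values standing in for the subdivided regions of the first vector, not values disclosed in this application.

import math

PARTITION_CENTRES = {      # preset level -> representative centre of its sub-partition
    1: [0.9, 0.1, 0.0],    # feeling low
    2: [0.1, 0.9, 0.1],    # relatively sad
    3: [0.0, 0.1, 0.9],    # very distressed / grief-stricken
}

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def preset_level(output_vector):
    """Return the preset level whose sub-partition centre is closest to the
    vector obtained for the user's current facial feature data."""
    return min(PARTITION_CENTRES, key=lambda lvl: euclidean(output_vector, PARTITION_CENTRES[lvl]))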
S41:根据所述不开心心情状态所对应的等级,选择用户预设逗乐等级的视频。S41: Select a video with a preset amusement level by the user according to the level corresponding to the unhappy mood state.
本实施例的预设逗乐等级的视频，预设逗乐等级与视频的喜剧指数相关，包括一级逗乐视频、二级逗乐视频和三级逗乐视频，分别对应不开心心情状态对应的第一向量的第一分类向量、第二分类向量和第三分类向量。比如，当前人脸表情为眼中含泪的欲哭状态，对应比较难过，输出第一向量中的第二分类向量，此时选择推送二级逗乐视频能起到破涕为笑的效果，当推选一级逗乐视频时，可能起不到调节心情的效果，或不能快速调节心情状态，而选择推送三级逗乐视频时，会调节过度，易让人产生轻浮的感觉，本实施例以将不开心心情状态调整为心情平和为最佳。In this embodiment, the preset amusement level of a video is related to the comedy index of the video and includes level-one, level-two and level-three amusement videos, which correspond respectively to the first, second and third classification vectors of the first vector associated with the unhappy mood state. For example, if the current facial expression is a tearful, about-to-cry state, corresponding to the relatively sad level, the second classification vector of the first vector is output; pushing a level-two amusement video at this point can turn tears into laughter, whereas pushing a level-one amusement video may fail to adjust the mood, or fail to adjust it quickly, and pushing a level-three amusement video may over-adjust and easily feel frivolous. In this embodiment, adjusting the unhappy mood state to a calm mood is considered optimal.
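Step S41 could, for example, be realized as in the following sketch, assuming each candidate video already carries an amusement level attribute of 1, 2 or 3; the sample data is illustrative only.

def select_by_amusement_level(videos, unhappy_level):
    """Pick the best-rated video whose amusement level equals the level that
    corresponds to the user's unhappy mood state (level 2 -> level-two video, etc.)."""
    matching = [v for v in videos if v["amusement_level"] == unhappy_level]
    return max(matching, key=lambda v: v["avg_rating"]) if matching else None

videos = [
    {"id": "a", "amusement_level": 1, "avg_rating": 8.1},
    {"id": "b", "amusement_level": 2, "avg_rating": 9.0},
    {"id": "c", "amusement_level": 3, "avg_rating": 7.4},
]
print(select_by_amusement_level(videos, 2))  # -> the level-two video "b"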
进一步地,本实施例的步骤S4,还包括:Further, step S4 in this embodiment further includes:
S42:实时采集用户观看视频过程中的人脸特征变化数据并发送至服务器。S42: Collect real-time facial feature change data during the user watching the video and send it to the server.
本实施例在推送视频后，用户观看视频的过程中会实时采集人脸特征变化数据，比如，通过每隔固定时间段采集人脸特征数据，形成用户观看视频的过程中人脸特征变化数据，然后将人脸特征变化数据依次输入至心情识别模型进行分析。In this embodiment, after a video is pushed, facial feature change data is collected in real time while the user watches the video; for example, facial feature data is collected at fixed time intervals to form the facial feature change data during viewing, and this data is then input into the mood recognition model in turn for analysis.
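A simple sketch of this fixed-interval sampling is shown below; the capture function, the playback check, and the 5-second interval are assumptions made for illustration.

import time

def collect_feature_changes(capture_features, playing, interval_s=5.0):
    """Call capture_features() every interval_s seconds while playing() returns
    True, and return the samples as the facial feature change data."""
    samples = []
    while playing():
        samples.append(capture_features())  # one facial feature snapshot
        time.sleep(interval_s)
    return samples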
S43:实时接收服务器发送的分析所述人脸特征变化数据，得到的心情状态变化数据。S43: Receive, in real time, the mood state change data obtained by the server from analyzing the facial feature change data.
本实施例的心情状态变化数据可根据人脸特征变化数据对应的输出向量值形成。输出向量值的数值大小起伏变化趋势，构成心情状态变化数据的起伏变化趋势。The mood state change data in this embodiment may be formed from the output vector values corresponding to the facial feature change data; the rise and fall of these output vector values over time forms the variation trend of the mood state change data.
S44:根据所述心情状态变化数据的变化趋势,判断推送的视频是否合适。S44: Determine whether the pushed video is appropriate according to a change trend of the mood state change data.
本实施例以心情状态变化数据的变化趋势，以平缓变化趋势中心情状态变好为最佳，并以此作为判断推送的视频是否合适，满足上述的平缓变化趋势中心情状态变好则合适，否则不合适。In this embodiment, the ideal change trend of the mood state change data is a gentle trend in which the mood state improves, and this serves as the criterion for judging whether the pushed video is suitable: if the above gentle improving trend is satisfied, the video is suitable; otherwise it is not.
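One way to operationalize this criterion is sketched below, interpreting "improving" as the final mood score exceeding the initial one and "gentle" as no single step exceeding a threshold; both the scoring convention (higher is better) and the threshold are assumptions.

def video_is_suitable(mood_scores, max_step=0.3):
    """mood_scores: chronological mood values derived from the output vectors,
    where a higher value stands for a better mood."""
    if len(mood_scores) < 2:
        return False
    improving = mood_scores[-1] > mood_scores[0]
    gentle = all(abs(b - a) <= max_step for a, b in zip(mood_scores, mood_scores[1:]))
    return improving and gentle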
S45:若不合适,则重新筛选服务器推送的评分较高的视频。S45: If it is not suitable, re-screen the videos with higher ratings pushed by the server.
本实施例的不合适，指推送视频未起到调节心情状态变好的效果。则推送评分较高的视频。本实施例的评分较高的视频包括用户历史数据中评分较高的视频，也包括大数据库中评分较高的视频。In this embodiment, "not suitable" means that the pushed video has not had the effect of improving the mood state; in that case a video with a higher rating is pushed instead. The higher-rated videos in this embodiment include videos rated highly in the user's historical data as well as videos rated highly in the big database.
进一步地,本申请另一实施例的步骤S4之后,包括:Further, after step S4 in another embodiment of the present application, the method includes:
S5:统计用户对各推荐视频的评分数据。S5: Count the user's rating data for each recommended video.
本实施例的推送视频，在用户观看后会给予相应的打分才可退出，以便收集并统计用户对各推荐视频的评分数据，以便根据上述评分数据形成用户自己的数据库，以便更有针对性地推送视频。In this embodiment, after watching a pushed video the user must give it a corresponding score before exiting, so that the user's rating data for each recommended video can be collected and counted; the rating data is then used to build the user's own database so that videos can be pushed in a more targeted manner.
S6:根据各所述评分数据将各推荐视频一一对应更新至各所述评分数据对应的视频等级文件夹中。S6: Update each of the recommended videos to the video level folder corresponding to each of the rating data according to the rating data.
本实施例根据用户对视频的不同评分，将各视频对应存储于不同的文件夹内，以便根据文件夹实现在存储的过程中对用户的历史浏览视频进行等级划分。举例地，总分为10分，评分0至5对应差评视频，评分6至8对应好评视频，评分9至10对应特好评视频，以便向用户推送视频时，优选从好评视频和特好评视频中进行优先推送。本实施例的好评视频和特好评视频中均会包括不同逗乐等级的视频。In this embodiment, each video is stored in a different folder according to the user's rating of the video, so that the user's historically viewed videos are ranked during storage according to the folders. For example, with a total score of 10 points, ratings of 0 to 5 correspond to poorly rated videos, ratings of 6 to 8 correspond to well-rated videos, and ratings of 9 to 10 correspond to highly rated videos, so that when videos are pushed to the user they are preferably selected first from the well-rated and highly rated videos. Both the well-rated and highly rated videos in this embodiment include videos of different amusement levels.
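The folder assignment described above could be implemented as in the following sketch; the folder names are placeholders chosen here for illustration.

def rating_folder(score):
    """Map a user rating on the 10-point scale to its video level folder."""
    if score <= 5:
        return "poorly_rated"
    if score <= 8:
        return "well_rated"
    return "highly_rated"

def archive_videos(ratings):
    """ratings: {video_id: score}. Returns {folder: [video_ids]} for storage,
    so later pushes can prefer the well-rated and highly rated folders."""
    folders = {"poorly_rated": [], "well_rated": [], "highly_rated": []}
    for video_id, score in ratings.items():
        folders[rating_folder(score)].append(video_id)
    return folders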
本申请的推送视频的方法，通过在人脸识别解屏的过程中，将识别到的人脸特征数据，带入到心情状态分析模型中，分析获得当前用户的心情状态，当判断用户的当前心情为不开心状态时，则通过推送相应的视频，帮用户调节心情。本申请同时对推送的视频进行等级划分，以便推送到更符合用户需求的视频，达到快速有效的调整用户心情的目的，使手机陪伴用户的方式更加智能化。In the method for pushing video of the present application, during face-recognition screen unlocking the recognized facial feature data is fed into the mood state analysis model to obtain the current user's mood state; when the user's current mood is determined to be an unhappy state, a corresponding video is pushed to help the user adjust the mood. The present application also ranks the pushed videos so that videos better matching the user's needs can be pushed, achieving the purpose of quickly and effectively adjusting the user's mood and making the way the mobile phone accompanies the user more intelligent.
参照图2,本申请实施例还提供一种推送视频的装置,集成于移动终端内,包括:Referring to FIG. 2, an embodiment of the present application further provides a device for pushing video, which is integrated in a mobile terminal and includes:
获取模块1,用于获取解锁所述移动终端屏幕时的人脸特征数据。The obtaining module 1 is configured to obtain facial feature data when the screen of the mobile terminal is unlocked.
本实施例的移动终端包括但不限于手机，本实施例的人脸特征数据为通过人脸识别进行解锁时，获得的人脸特征数据，同一数据起到两种功用，即用于解锁，又用于心情识别，方便快捷。The mobile terminal in this embodiment includes but is not limited to a mobile phone. The facial feature data in this embodiment is the facial feature data obtained when unlocking is performed through face recognition; the same data thus serves two functions, unlocking and mood recognition, which is convenient and quick.
分析模块2,用于根据所述人脸特征数据分析用户当前的心情状态。The analysis module 2 is configured to analyze a current mood state of the user according to the facial feature data.
本实施例基于人脸特征数据通过心情识别模型获得用户当前的心情状态。本实施例的心情识别模型通过采集大量人物的心情脸部表情数据,并将上述采集的表情数据作为训练样本,输入到CNN等训练模型中获得。This embodiment obtains a user's current mood state through a mood recognition model based on face feature data. The mood recognition model of this embodiment is obtained by collecting mood facial expression data of a large number of people, and using the collected expression data as a training sample and inputting it into a training model such as CNN.
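A very small illustration of such CNN-based training is given below using the Keras API; the framework choice, input size, number of mood classes, and hyper-parameters are assumptions, and x_train / y_train stand for the collected, labelled facial expression samples.

import tensorflow as tf

def build_mood_model(num_classes=4, input_shape=(48, 48, 1)):
    """Toy convolutional classifier over grayscale face crops; its softmax
    output plays the role of the mood recognition model's output vector."""
    return tf.keras.Sequential([
        tf.keras.layers.Conv2D(32, 3, activation="relu", input_shape=input_shape),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),  # mood classes
    ])

# model = build_mood_model()
# model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)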
判断模块3,用于判断所述心情状态是否为预设的不开心心情状态。The determining module 3 is configured to determine whether the mood state is a preset unhappy mood state.
本实施例的预设的不开心心情状态，由心情识别模型输出的相应第一向量表示。用户当前的心情状态通过心情识别模型后，输出第二向量，将第二向量与第一向量进行欧式距离计算，若欧式距离小于预设值，表明第二向量与第一向量相近，则判定用户当前的心情状态为不开心状态，否则为开心状态。The preset unhappy mood state in this embodiment is represented by a corresponding first vector output by the mood recognition model. After the user's current mood state passes through the mood recognition model, a second vector is output, and the Euclidean distance between the second vector and the first vector is calculated; if the Euclidean distance is less than a preset value, indicating that the second vector is close to the first vector, the user's current mood state is determined to be unhappy, otherwise it is determined to be happy.
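The Euclidean-distance check performed by the judging module might look like the following sketch; the threshold value is an assumption, as the application only requires it to be a preset value.

import math

def is_unhappy(second_vector, first_vector, threshold=0.5):
    """Compare the model output for the current face (second vector) with the
    reference output for the preset unhappy state (first vector)."""
    distance = math.sqrt(sum((a - b) ** 2 for a, b in zip(second_vector, first_vector)))
    return distance < threshold  # close to the unhappy reference -> unhappy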
推送模块4,用于若是预设的不开心心情状态,则按照预设规则向用户推送指定的视频,以调节用户心情。The pushing module 4 is configured to push a specified video to the user according to a preset rule to adjust the user's mood if it is a preset unhappy mood state.
本实施例通过分析到用户当前的心情状态为不开心状态后，可向用户推送相应的视频以调节用户的心情状态。本实施例的预设规则为能较好调节用户心情状态的推送方式或选择视频的方式，包括根据用户的喜好推送视频，或根据大数据库中大量用户的好评数据，推送评分高的，或根据不同心情状态等级推送预设等级的视频，或则与其他推送方式综合使用，进行推送。In this embodiment, after the user's current mood state is analyzed to be an unhappy state, a corresponding video may be pushed to the user to adjust the user's mood state. The preset rule in this embodiment is a push method or video selection method that can better adjust the user's mood state, including pushing videos according to the user's preferences, pushing highly rated videos based on the praise data of a large number of users in the big database, pushing videos of a preset level according to different mood state levels, or a combination of these with other push methods.
进一步地,所述分析模块2,包括:Further, the analysis module 2 includes:
发送单元,用于将所述人脸特征数据发送至服务器。A sending unit, configured to send the facial feature data to a server.
本实施例的心情识别模型的训练过程在服务器端完成,心情识别模型的训练过程需要大量的数据以及计算过程,在服务器端完成以降低手机的运行负荷。本实施例的心情识别过程也在服务器端完成,以保证训练后的心情识别模型的一致性,本实施例通过将获取的人脸特征数据发送至服务器,然后通过服务器输入到心情识别模型中,以完成用户的心情状态识别分析。The training process of the mood recognition model in this embodiment is completed on the server side. The training process of the mood recognition model requires a large amount of data and calculation processes, and is completed on the server side to reduce the operating load of the mobile phone. The mood recognition process of this embodiment is also completed on the server side to ensure the consistency of the mood recognition model after training. In this embodiment, the obtained facial feature data is sent to the server, and then input into the mood recognition model through the server. In order to complete the user's mood state recognition analysis.
第一接收单元,用于接收服务器将所述人脸特征数据输入到心情状态识别模型后,得到的心情分析数据,其中所述心情分析数据包括心情状态,以及心情状态对应的预设等级。The first receiving unit is configured to receive mood analysis data obtained by the server after inputting the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
本实施例的心情识别模型输出的数据中，携带心情状态对应的预设等级。比如不开心心情状态包括失落、比较难过、非常痛苦或悲痛欲绝等预设等级，本实施例通过将不开心心情状态进行细化等级以便更精准地推送适合用户当前的心情状态的视频。举例地，本实施例的不开心心情状态为失落等级时，人脸的表情表现为眼神忧郁，嘴角下拉，将带有上述失落等级代表特征的人脸特征数据输入到心情识别模型后，输出第一向量中的第一分类向量；同理比较难过的人脸特征数据输入到心情识别模型后，输出第一向量中的第二分类向量；非常痛苦或悲痛欲绝的人脸特征数据输入到心情识别模型后，输出第一向量中的第三分类向量；预设不开心心情状态对应的第一向量被细分成几个小分区，以便更精准地选择相应的视频推送。The data output by the mood recognition model in this embodiment carries the preset level corresponding to the mood state. For example, the unhappy mood state includes preset levels such as feeling low, relatively sad, and very distressed or grief-stricken; in this embodiment the unhappy mood state is divided into these finer levels so that videos suited to the user's current mood state can be pushed more precisely. For example, when the unhappy mood state in this embodiment is at the feeling-low level, the facial expression shows melancholy eyes and downturned corners of the mouth; after facial feature data carrying these representative features of the feeling-low level is input into the mood recognition model, the first classification vector of the first vector is output. Similarly, facial feature data for the relatively sad level yields the second classification vector of the first vector after being input into the mood recognition model, and facial feature data for the very distressed or grief-stricken level yields the third classification vector of the first vector. The first vector corresponding to the preset unhappy mood state is thus subdivided into several small partitions so that the corresponding video to push can be selected more precisely.
进一步地,所述推送模块4,包括:Further, the push module 4 includes:
获取单元,用于获取用户当前的不开心心情状态所对应的等级。The obtaining unit is configured to obtain a level corresponding to the current unhappy mood state of the user.
本实施例通过将用户当前的不开心心情状态对应的人脸特征数据，输入到心情识别模型后得到的向量值，与预设不开心心情状态对应的第一向量被细分成的几个小分区中的哪个分区更接近，则说明对应哪个分区对应的预设等级。In this embodiment, the facial feature data corresponding to the user's current unhappy mood state is input into the mood recognition model to obtain a vector value; whichever of the small partitions into which the first vector of the preset unhappy mood state has been subdivided is closest to this vector value indicates the corresponding preset level.
选择单元,用于根据所述不开心心情状态所对应的等级,选择用户预设逗乐等级的视频。The selecting unit is configured to select a video with a preset amusement level by the user according to the level corresponding to the unhappy mood state.
本实施例的预设逗乐等级的视频，预设逗乐等级与视频的喜剧指数相关，包括一级逗乐视频、二级逗乐视频和三级逗乐视频，分别对应不开心心情状态对应的第一向量的第一分类向量、第二分类向量和第三分类向量。比如，当前人脸表情为眼中含泪的欲哭状态，对应比较难过，输出第一向量中的第二分类向量，此时选择推送二级逗乐视频能起到破涕为笑的效果，当推选一级逗乐视频时，可能起不到调节心情的效果，或不能快速调节心情状态，而选择推送三级逗乐视频时，会调节过度，易让人产生轻浮的感觉，本实施例以将不开心心情状态调整为心情平和为最佳。In this embodiment, the preset amusement level of a video is related to the comedy index of the video and includes level-one, level-two and level-three amusement videos, which correspond respectively to the first, second and third classification vectors of the first vector associated with the unhappy mood state. For example, if the current facial expression is a tearful, about-to-cry state, corresponding to the relatively sad level, the second classification vector of the first vector is output; pushing a level-two amusement video at this point can turn tears into laughter, whereas pushing a level-one amusement video may fail to adjust the mood, or fail to adjust it quickly, and pushing a level-three amusement video may over-adjust and easily feel frivolous. In this embodiment, adjusting the unhappy mood state to a calm mood is considered optimal.
进一步地,所述推送模块4,还包括:Further, the push module 4 further includes:
采集单元,用于实时采集用户观看视频过程中的人脸特征变化数据并发送至服务器。An acquisition unit is configured to collect real-time facial feature change data during a user watching a video and send the data to a server.
本实施例在推送视频后，用户观看视频的过程中会实时采集人脸特征变化数据，比如，通过每隔固定时间段采集人脸特征数据，形成用户观看视频的过程中人脸特征变化数据，然后将人脸特征变化数据依次输入至心情识别模型进行分析。In this embodiment, after a video is pushed, facial feature change data is collected in real time while the user watches the video; for example, facial feature data is collected at fixed time intervals to form the facial feature change data during viewing, and this data is then input into the mood recognition model in turn for analysis.
第二接收单元,用于实时接收服务器发送的分析所述人脸特征变化数据,得到的心情状态变化数据。The second receiving unit is configured to receive, in real time, the mood state change data obtained by analyzing the facial feature change data sent by the server.
本实施例的心情状态变化数据可根据人脸特征变化数据对应的输出向量值形成。输出向量值的数值大小起伏变化趋势，构成心情状态变化数据的起伏变化趋势。The mood state change data in this embodiment may be formed from the output vector values corresponding to the facial feature change data; the rise and fall of these output vector values over time forms the variation trend of the mood state change data.
判断单元,用于根据所述心情状态变化数据的变化趋势,判断推送的视频是否合适。The judging unit is configured to judge whether the pushed video is appropriate according to a change trend of the mood state change data.
本实施例以心情状态变化数据的变化趋势，以平缓变化趋势中心情状态变好为最佳，并以此作为判断推送的视频是否合适，满足上述的平缓变化趋势中心情状态变好则合适，否则不合适。In this embodiment, the ideal change trend of the mood state change data is a gentle trend in which the mood state improves, and this serves as the criterion for judging whether the pushed video is suitable: if the above gentle improving trend is satisfied, the video is suitable; otherwise it is not.
筛选单元,用于若推送的视频不合适,则重新筛选服务器推送的评分较高的视频。The filtering unit is configured to re-screen videos with higher ratings pushed by the server if the videos pushed are not suitable.
本实施例的不合适，指推送视频未起到调节心情状态变好的效果。则推送评分较高的视频。本实施例的评分较高的视频包括用户历史数据中评分较高的视频，也包括大数据库中评分较高的视频。In this embodiment, "not suitable" means that the pushed video has not had the effect of improving the mood state; in that case a video with a higher rating is pushed instead. The higher-rated videos in this embodiment include videos rated highly in the user's historical data as well as videos rated highly in the big database.
进一步地,所述推送视频的装置,还包括:Further, the device for pushing video further includes:
统计模块,用于统计用户对各推荐视频的评分数据。The statistics module is used to collect statistics on user ratings of each recommended video.
本实施例的推送视频，在用户观看后会给予相应的打分才可退出，以便收集并统计用户对各推荐视频的评分数据，以便根据上述评分数据形成用户自己的数据库，以便更有针对性地推送视频。In this embodiment, after watching a pushed video the user must give it a corresponding score before exiting, so that the user's rating data for each recommended video can be collected and counted; the rating data is then used to build the user's own database so that videos can be pushed in a more targeted manner.
更新模块,用于根据各所述评分数据将各推荐视频一一对应更新至各所述评分数据对应的视频等级文件夹中。An update module is configured to update each of the recommended videos in a one-to-one correspondence to the video rating folder corresponding to each of the rating data according to the rating data.
本实施例根据用户对视频的不同评分，将各视频对应存储于不同的文件夹内，以便根据文件夹实现在存储的过程中对用户的历史浏览视频进行等级划分。举例地，总分为10分，评分0至5对应差评视频，评分6至8对应好评视频，评分9至10对应特好评视频，以便向用户推送视频时，优选从好评视频和特好评视频中进行优先推送。本实施例的好评视频和特好评视频中均会包括不同逗乐等级的视频。In this embodiment, each video is stored in a different folder according to the user's rating of the video, so that the user's historically viewed videos are ranked during storage according to the folders. For example, with a total score of 10 points, ratings of 0 to 5 correspond to poorly rated videos, ratings of 6 to 8 correspond to well-rated videos, and ratings of 9 to 10 correspond to highly rated videos, so that when videos are pushed to the user they are preferably selected first from the well-rated and highly rated videos. Both the well-rated and highly rated videos in this embodiment include videos of different amusement levels.
本申请的推送视频的装置，通过在人脸识别解屏的过程中，将识别到的人脸特征数据，带入到心情状态分析模型中，分析获得当前用户的心情状态，当判断用户的当前心情为不开心状态时，则通过推送相应的视频，帮用户调节心情。本申请同时对推送的视频进行等级划分，以便推送到更符合用户需求的视频，达到快速有效的调整用户心情的目的，使手机陪伴用户的方式更加智能化。In the device for pushing video of the present application, during face-recognition screen unlocking the recognized facial feature data is fed into the mood state analysis model to obtain the current user's mood state; when the user's current mood is determined to be an unhappy state, a corresponding video is pushed to help the user adjust the mood. The present application also ranks the pushed videos so that videos better matching the user's needs can be pushed, achieving the purpose of quickly and effectively adjusting the user's mood and making the way the mobile phone accompanies the user more intelligent.
参照图3，本发明实施例还提供一种移动终端，包括处理器1080和存储器1020，所述存储器1020用于存储推送视频的装置执行上述的推送视频的方法的程序；所述处理器1080被配置为用于执行所述存储器中存储的程序。Referring to FIG. 3, an embodiment of the present invention further provides a mobile terminal including a processor 1080 and a memory 1020, where the memory 1020 is configured to store a program by which the device for pushing video executes the above method for pushing video, and the processor 1080 is configured to execute the program stored in the memory.
为了便于说明,仅示出了与本发明实施例相关的部分,具体技术细节未揭示的,请参照本发明实施例方法部分。该移动终端可以为包括手机、平板电脑、PDA(Personal Digital Assistant,个人数字助理)、POS(Point of Sales,销售终端)、车载电脑等任意终端设备,以移动终端为手机为例:For ease of description, only the parts related to the embodiment of the present invention are shown, and specific technical details are not disclosed, please refer to the method part of the embodiment of the present invention. The mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), and a vehicle-mounted computer. Taking the mobile terminal as a mobile phone as an example:
图3示出的是与本发明实施例提供的移动终端相关的手机的部分结构的框图。参考图3,手机包括:射频(Radio Frequency,RF)电路1010、 存储器1020、输入单元1030、显示单元1040、传感器1050、音频电路1060、无线保真(wireless fidelity,WiFi)模块1070、处理器1080、以及电源1090等部件。本领域技术人员可以理解,图3中示出的手机结构并不构成对手机的限定,可以包括比图示更多或更少的部件,或者组合某些部件,或者不同的部件布置。FIG. 3 is a block diagram showing a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present invention. Referring to FIG. 3, the mobile phone includes: a radio frequency (RF) circuit 1010, a memory 1020, an input unit 1030, a display unit 1040, a sensor 1050, an audio circuit 1060, a wireless fidelity (WiFi) module 1070, and a processor 1080 , And power supply 1090 and other components. Those skilled in the art can understand that the structure of the mobile phone shown in FIG. 3 does not constitute a limitation on the mobile phone, and may include more or fewer parts than those shown in the figure, or combine certain parts, or arrange different parts.
下面结合图3对手机的各个构成部件进行具体的介绍:The following describes each component of the mobile phone in detail with reference to FIG. 3:
RF电路1010可用于收发信息或通话过程中,信号的接收和发送,特别地,将基站的下行信息接收后,给处理器1080处理;另外,将设计上行的数据发送给基站。通常,RF电路1010包括但不限于天线、至少一个放大器、收发信机、耦合器、低噪声放大器(Low Noise Amplifier,LNA)、双工器等。此外,RF电路1010还可以通过无线通信与网络和其他设备通信。上述无线通信可以使用任一通信标准或协议,包括但不限于全球移动通讯系统(Global System of Mobile communication,GSM)、通用分组无线服务(General Packet Radio Service,GPRS)、码分多址(Code Division Multiple Access,CDMA)、宽带码分多址(Wideband Code Division Multiple Access,WCDMA)、长期演进(Long Term Evolution,LTE)、电子邮件、短消息服务(Short Messaging Service,SMS)等。The RF circuit 1010 can be used for receiving and transmitting signals during information transmission and reception or during a call. In particular, the downlink information of the base station is received and processed by the processor 1080; in addition, the uplink data of the design is transmitted to the base station. Generally, the RF circuit 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuit 1010 can also communicate with a network and other devices through wireless communication. The above wireless communication can use any communication standard or protocol, including but not limited to Global System of Mobile (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (Code Division Multiple Access) Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), E-mail, Short Messaging Service (SMS), etc.
存储器1020可用于存储软件程序以及模块,处理器1080通过运行存储在存储器1020的软件程序以及模块,从而执行手机的各种功能应用以及数据处理。存储器1020可主要包括存储程序区和存储数据区,其中,存储程序区可存储操作系统、至少一个功能所需的应用程序(比如声音播放功能、图像播放功能等)等;存储数据区可存储根据手机的使用所创建的数据(比如音频数据、电话本等)等。此外,存储器1020可以包括高速随机存取存储器,还可以包括非易失性存储器,例如至少一个磁盘存储器件、闪存器件、或其他易失性固态存储器件。The memory 1020 may be used to store software programs and modules. The processor 1080 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020. The memory 1020 may mainly include a storage program area and a storage data area, where the storage program area may store an operating system, at least one function required application program (such as a sound playback function, an image playback function, etc.), etc .; the storage data area may store data according to Data (such as audio data, phone book, etc.) created by the use of mobile phones. In addition, the memory 1020 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
输入单元1030可用于接收输入的数字或字符信息,以及产生与手机的用户设置以及功能控制有关的键信号输入。具体地,输入单元1030可包括触控面板1031以及其他输入设备1032。触控面板1031,也称为触摸屏,可收集用户在其上或附近的触摸操作(比如用户使用手指、触 笔等任何适合的物体或附件在触控面板1031上或在触控面板1031附近的操作),并根据预先设定的程式驱动相应的连接装置。可选的,触控面板1031可包括触摸检测装置和触摸控制器两个部分。其中,触摸检测装置检测用户的触摸方位,并检测触摸操作带来的信号,将信号传送给触摸控制器;触摸控制器从触摸检测装置上接收触摸信息,并将它转换成触点坐标,再送给处理器1080,并能接收处理器1080发来的命令并加以执行。此外,可以采用电阻式、电容式、红外线以及表面声波等多种类型实现触控面板1031。除了触控面板1031,输入单元1030还可以包括其他输入设备1032。具体地,其他输入设备1032可以包括但不限于物理键盘、功能键(比如音量控制按键、开关按键等)、轨迹球、鼠标、操作杆等中的一种或多种。The input unit 1030 can be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the mobile phone. Specifically, the input unit 1030 may include a touch panel 1031 and other input devices 1032. Touch panel 1031, also known as touch screen, can collect user's touch operations on or near it (such as the user using a finger, stylus, etc. any suitable object or accessory on touch panel 1031 or near touch panel 1031 Operation), and drive the corresponding connection device according to a preset program. Optionally, the touch panel 1031 may include two parts, a touch detection device and a touch controller. Among them, the touch detection device detects the user's touch position, and detects the signal caused by the touch operation, and transmits the signal to the touch controller; the touch controller receives touch information from the touch detection device, converts it into contact coordinates, and sends it To the processor 1080, and can receive the commands sent by the processor 1080 and execute them. In addition, various types such as resistive, capacitive, infrared, and surface acoustic wave can be used to implement the touch panel 1031. In addition to the touch panel 1031, the input unit 1030 may include other input devices 1032. Specifically, other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, an operation lever, and the like.
显示单元1040可用于显示由用户输入的信息或提供给用户的信息以及手机的各种菜单。显示单元1040可包括显示面板1041,可选的,可以采用液晶显示器(Liquid Crystal Display,LCD)、有机发光二极管(Organic Light-Emitting Diode,OLED)等形式来配置显示面板1041。进一步的,触控面板1031可覆盖显示面板1041,当触控面板1031检测到在其上或附近的触摸操作后,传送给处理器1080以确定触摸事件的类型,随后处理器1080根据触摸事件的类型在显示面板1041上提供相应的视觉输出。虽然在图3中,触控面板1031与显示面板1041是作为两个独立的部件来实现手机的输入和输入功能,但是在某些实施例中,可以将触控面板1031与显示面板1041集成而实现手机的输入和输出功能。The display unit 1040 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The display unit 1040 may include a display panel 1041, and optionally, the display panel 1041 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like. Further, the touch panel 1031 may cover the display panel 1041. When the touch panel 1031 detects a touch operation on or near the touch panel 1031, the touch panel 1031 transmits the touch operation to the processor 1080 to determine the type of the touch event. The type provides corresponding visual output on the display panel 1041. Although in FIG. 3, the touch panel 1031 and the display panel 1041 are implemented as two independent components to implement the input and input functions of the mobile phone, in some embodiments, the touch panel 1031 and the display panel 1041 can be integrated and Realize the input and output functions of the mobile phone.
手机还可包括至少一种传感器1050,比如光传感器、运动传感器以及其他传感器。具体地,光传感器可包括环境光传感器及接近传感器,其中,环境光传感器可根据环境光线的明暗来调节显示面板1041的亮度,接近传感器可在手机移动到耳边时,关闭显示面板1041和/或背光。作为运动传感器的一种,加速计传感器可检测各个方向上(一般为三轴)加速度的大小,静止时可检测出重力的大小及方向,可用于识别手机姿态的应用(比如横竖屏切换、相关游戏、磁力计姿态校准)、振动识别相关功能(比如计步器、敲击)等;至于手机还可配置的陀螺仪、气压 计、湿度计、温度计、红外线传感器等其他传感器,在此不再赘述。The mobile phone may further include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors. Specifically, the light sensor may include an ambient light sensor and a proximity sensor. The ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of the ambient light. The proximity sensor may close the display panel 1041 and / Or backlight. As a type of motion sensor, the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), and can detect the magnitude and direction of gravity when it is stationary. It can be used for applications that recognize the attitude of mobile phones (such as horizontal and vertical screen switching, related Games, magnetometer attitude calibration), vibration recognition related functions (such as pedometer, tap), etc .; as for the mobile phone can also be equipped with gyroscope, barometer, hygrometer, thermometer, infrared sensor and other sensors, no longer here To repeat.
音频电路1060、扬声器1061，传声器1062可提供用户与手机之间的音频接口。音频电路1060可将接收到的音频数据转换后的电信号，传输到扬声器1061，由扬声器1061转换为声音信号输出；另一方面，传声器1062将收集的声音信号转换为电信号，由音频电路1060接收后转换为音频数据，再将音频数据输出处理器1080处理后，经RF电路1010以发送给比如另一手机，或者将音频数据输出至存储器1020以便进一步处理。The audio circuit 1060, the speaker 1061, and the microphone 1062 provide an audio interface between the user and the mobile phone. The audio circuit 1060 converts received audio data into an electrical signal and transmits it to the speaker 1061, which converts it into a sound signal for output; conversely, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 1060 receives and converts into audio data; the audio data is then output to the processor 1080 for processing and sent, for example, to another mobile phone via the RF circuit 1010, or output to the memory 1020 for further processing.
WiFi属于短距离无线传输技术,手机通过WiFi模块1070可以帮助用户收发电子邮件、浏览网页和访问流式媒体等,它为用户提供了无线的宽带互联网访问。虽然图3示出了WiFi模块1070,但是可以理解的是,其并不属于手机的必须构成,完全可以根据需要在不改变发明的本质的范围内而省略。WiFi is a short-range wireless transmission technology. The mobile phone can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 1070. It provides users with wireless broadband Internet access. Although FIG. 3 shows the WiFi module 1070, it can be understood that it does not belong to the necessary configuration of the mobile phone, and can be omitted as needed without changing the essence of the invention.
处理器1080是手机的控制中心，利用各种接口和线路连接整个手机的各个部分，通过运行或执行存储在存储器1020内的软件程序和/或模块，以及调用存储在存储器1020内的数据，执行手机的各种功能和处理数据，从而对手机进行整体监控。可选的，处理器1080可包括一个或多个处理单元；优选的，处理器1080可集成应用处理器和调制解调处理器，其中，应用处理器主要处理操作系统、用户界面和应用程序等，调制解调处理器主要处理无线通信。可以理解的是，上述调制解调处理器也可以不集成到处理器1080中。The processor 1080 is the control center of the mobile phone. It connects the various parts of the entire phone through various interfaces and lines, and performs the phone's various functions and processes data by running or executing software programs and/or modules stored in the memory 1020 and invoking data stored in the memory 1020, thereby monitoring the phone as a whole. Optionally, the processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor, which mainly handles the operating system, user interface, and application programs, and a modem processor, which mainly handles wireless communication. It can be understood that the modem processor may also not be integrated into the processor 1080.
手机还包括给各个部件供电的电源1090(比如电池),优选的,电源可以通过电源管理系统与处理器1080逻辑相连,从而通过电源管理系统实现管理充电、放电、以及功耗管理等功能。The mobile phone also includes a power supply 1090 (such as a battery) for supplying power to various components. Preferably, the power supply can be logically connected to the processor 1080 through a power management system, so as to implement functions such as management of charging, discharging, and power consumption management through the power management system.
尽管未示出,手机还可以包括摄像头、蓝牙模块等,在此不再赘述。Although not shown, the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
参照图3,在本发明实施例中,该移动终端所包括的处理器1080还具有以下功能:Referring to FIG. 3, in an embodiment of the present invention, the processor 1080 included in the mobile terminal also has the following functions:
获取解锁所述移动终端屏幕时的人脸特征数据;Acquiring facial feature data when the screen of the mobile terminal is unlocked;
根据所述人脸特征数据分析用户当前的心情状态;Analyze the current mood state of the user according to the facial feature data;
判断所述心情状态是否为预设的不开心心情状态;Determining whether the mood state is a preset unhappy mood state;
若是,则按照预设规则向用户推送指定的视频,以调节用户心情。If so, the specified video is pushed to the user according to a preset rule to adjust the user's mood.
所属领域的技术人员可以清楚地了解到,为描述的方便和简洁,上述描述的系统,装置和单元的具体工作过程,可以参考前述方法实施例中的对应过程,在此不再赘述。Those skilled in the art can clearly understand that, for the convenience and brevity of description, the specific working processes of the systems, devices, and units described above can refer to the corresponding processes in the foregoing method embodiments, and are not repeated here.
在本申请所提供的几个实施例中，应该理解到，所揭露的系统，装置和方法，可以通过其它的方式实现。例如，以上所描述的装置实施例仅仅是示意性的，例如，所述单元的划分，仅仅为一种逻辑功能划分，实际实现时可以有另外的划分方式，例如多个单元或组件可以结合或者可以集成到另一个系统，或一些特征可以忽略，或不执行。另一点，所显示或讨论的相互之间的耦合或直接耦合或通信连接可以是通过一些接口，装置或单元的间接耦合或通信连接，可以是电性，机械或其它的形式。In the several embodiments provided in this application, it should be understood that the disclosed systems, devices and methods may be implemented in other ways. For example, the device embodiments described above are merely illustrative; the division of the units is only a division by logical function, and in actual implementation there may be other divisions, for example multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the mutual couplings or direct couplings or communication connections shown or discussed may be indirect couplings or communication connections through some interfaces, devices or units, and may be electrical, mechanical or in other forms.
另外,在本发明各个实施例中的各功能单元可以集成在一个处理单元中,也可以是各个单元单独物理存在,也可以两个或两个以上单元集成在一个单元中。上述集成的单元既可以采用硬件的形式实现,也可以采用软件功能单元的形式实现。In addition, each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit. The above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
本领域普通技术人员可以理解实现上述实施例方法中的全部或部分步骤是可以通过程序来指令相关的硬件完成,所述的程序可以存储于一种计算机可读存储介质中,上述提到的存储介质可以是只读存储器,磁盘或光盘等。A person of ordinary skill in the art may understand that all or part of the steps in the method of the foregoing embodiment may be implemented by a program instructing related hardware. The program may be stored in a computer-readable storage medium. The medium may be a read-only memory, a magnetic disk, or an optical disk.
以上所述仅为本发明的优选实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其他相关的技术领域,均同理包括在本发明的专利保护范围内。The above is only a preferred embodiment of the present invention, and thus does not limit the patent scope of the present invention. Any equivalent structure or equivalent process transformation made by using the description and drawings of the present invention, or directly or indirectly used in other related The technical field is included in the patent protection scope of the present invention.

Claims (17)

  1. 一种推送视频的方法,其中,应用于移动终端,包括步骤:A method for pushing video, which is applied to a mobile terminal and includes the steps:
    获取解锁所述移动终端屏幕时的人脸特征数据;Acquiring facial feature data when the screen of the mobile terminal is unlocked;
    根据所述人脸特征数据分析用户当前的心情状态;Analyze the current mood state of the user according to the facial feature data;
    判断所述心情状态是否为预设的不开心心情状态;Determining whether the mood state is a preset unhappy mood state;
    若是,则按照预设规则向用户推送指定的视频,以调节用户心情。If so, the specified video is pushed to the user according to a preset rule to adjust the user's mood.
  2. 根据权利要求1所述的推送视频的方法,其中,所述根据所述人脸特征数据分析用户当前的心情状态的步骤,包括:The method for pushing video according to claim 1, wherein the step of analyzing a user's current mood state based on the facial feature data comprises:
    将所述人脸特征数据发送至服务器;Sending the facial feature data to a server;
    接收服务器将所述人脸特征数据输入到心情状态识别模型后，得到的心情分析数据，其中所述心情分析数据包括心情状态，以及心情状态对应的预设等级。Receiving the mood analysis data obtained after the server inputs the facial feature data into the mood state recognition model, wherein the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  3. 根据权利要求2所述的推送视频的方法,其中,所述心情状态识别模型是通过将采集的表情数据作为训练样本输入到训练模型中获得。The method for pushing video according to claim 2, wherein the mood state recognition model is obtained by inputting the collected expression data into a training model as a training sample.
  4. 根据权利要求2所述的推送视频的方法,其中,所述判断所述心情状态是否为预设的不开心心情状态的步骤,包括:The method for pushing video according to claim 2, wherein the step of determining whether the mood state is a preset unhappy mood state comprises:
    将所述预设的不开心心情状态输入所述心情识别模型,获得第一向量;Inputting the preset unhappy mood state into the mood recognition model to obtain a first vector;
    将所述人脸特征数据输入到心情状态识别模型,获得第二向量;Inputting the facial feature data into a mood state recognition model to obtain a second vector;
    将所述第二向量和所述第一向量进行欧式距离计算;Performing Euclidean distance calculation on the second vector and the first vector;
    若所述欧式距离小于预设值,则判定所述心情状态为预设的不开心心情状态。If the Euclidean distance is less than a preset value, it is determined that the mood state is a preset unhappy mood state.
  5. 根据权利要求2所述的推送视频的方法,其中,所述按照预设规则向用户推送指定的视频的步骤,包括:The method for pushing a video according to claim 2, wherein the step of pushing a specified video to a user according to a preset rule comprises:
    获取用户当前的不开心心情状态所对应的等级;Obtain the level corresponding to the current unhappy mood state of the user;
    根据所述不开心心情状态所对应的等级,选择用户预设逗乐等级的视频。According to the level corresponding to the unhappy mood state, a video with a preset amusement level is selected by the user.
  6. 根据权利要求5所述的推送视频的方法,其中,所述预设逗乐等级与视频的喜剧指数相关。The method for pushing a video according to claim 5, wherein the preset amusement level is related to a comedy index of the video.
  7. 根据权利要求5所述的推送视频的方法,其中,所述根据所述不开心心情状态所对应的等级,选择对应逗乐等级的视频的步骤之后,包括:The method for pushing a video according to claim 5, wherein after the step of selecting a video corresponding to an amused level according to the level corresponding to the unhappy mood state, comprising:
    实时采集用户观看视频过程中的人脸特征变化数据并发送至服务器;Collect real-time facial feature change data during user watching video and send it to the server;
    实时接收服务器发送的分析所述人脸特征变化数据，得到的心情状态变化数据；Receiving, in real time, the mood state change data obtained by the server from analyzing the facial feature change data;
    根据所述心情状态变化数据的变化趋势,判断推送的视频是否合适;Judging whether the pushed video is appropriate according to the change trend of the mood state change data;
    若不合适,则重新筛选服务器推送的评分较高的视频。If not, re-screen the higher-rated videos pushed by the server.
  8. 根据权利要求5-7任一项所述的推送视频的方法,其中,所述按照预设规则向用户推送指定的视频的步骤之后,包括:The method for pushing a video according to any one of claims 5-7, wherein after the step of pushing a specified video to a user according to a preset rule, comprises:
    统计用户对各推荐视频的评分数据;Count the user's rating data for each recommended video;
    根据各所述评分数据将各推荐视频一一对应更新至各所述评分数据对应的视频等级文件夹中。The one-to-one correspondence between each recommended video is updated to the video level folder corresponding to each of the rating data according to the rating data.
  9. 一种推送视频的装置,其中,包括:A device for pushing video, including:
    获取模块,用于获取解锁所述移动终端屏幕时的人脸特征数据;An obtaining module, configured to obtain facial feature data when the screen of the mobile terminal is unlocked;
    分析模块,用于根据所述人脸特征数据分析用户当前的心情状态;An analysis module, configured to analyze a user's current mood state according to the facial feature data;
    判断模块,用于判断所述心情状态是否为预设的不开心心情状态;A judging module, configured to judge whether the mood state is a preset unhappy mood state;
    推送模块,用于若是预设的不开心心情状态,则按照预设规则向用户推送指定的视频,以调节用户心情。A push module is configured to push a specified video to a user according to a preset rule to adjust the user's mood if it is a preset unhappy mood state.
  10. 根据权利要求9所述的推送视频的装置,其中,所述分析模块,包括:The device for pushing video according to claim 9, wherein the analysis module comprises:
    发送单元,用于将所述人脸特征数据发送至服务器;A sending unit, configured to send the facial feature data to a server;
    第一接收单元,用于接收服务器将所述人脸特征数据输入到心情状态识别模型后,得到的心情分析数据,其中所述心情分析数据包括心情状态,以及心情状态对应的预设等级。The first receiving unit is configured to receive mood analysis data obtained by the server after inputting the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  11. 根据权利要求10所述的推送视频的装置,其中,所述心情状态识别模型是通过将采集的表情数据作为训练样本输入到训练模型中获得。The device for pushing video according to claim 10, wherein the mood state recognition model is obtained by inputting the collected expression data into a training model as a training sample.
  12. 根据权利要求10所述的推送视频的装置,其中,所述第一接收单元还用于:The device for pushing video according to claim 10, wherein the first receiving unit is further configured to:
    接收将所述预设的不开心心情状态输入所述心情识别模型后得到的第一向量;Receiving a first vector obtained by inputting the preset unhappy mood state into the mood recognition model;
    接收将所述人脸特征数据输入到心情状态识别模型后得到的第二向量;Receiving a second vector obtained by inputting the facial feature data into a mood state recognition model;
    所述判断模块还用于:The judgment module is further configured to:
    将所述第二向量和所述第一向量进行欧式距离计算;Performing Euclidean distance calculation on the second vector and the first vector;
    若所述欧式距离小于预设值,则判定所述心情状态为预设的不开心心情状态。If the Euclidean distance is less than a preset value, it is determined that the mood state is a preset unhappy mood state.
  13. 根据权利要求9所述的推送视频的装置,其中,所述推送模块,包括:The apparatus for pushing video according to claim 9, wherein the pushing module comprises:
    获取单元,用于获取用户当前的不开心心情状态所对应的等级;An obtaining unit, configured to obtain a level corresponding to the current unhappy mood state of the user;
    选择单元,用于根据所述不开心心情状态所对应的等级,选择用户预设逗乐等级的视频。The selecting unit is configured to select a video with a preset amusement level by the user according to the level corresponding to the unhappy mood state.
  14. 根据权利要求13所述的推送视频的装置,其中,所述预设逗乐等级与视频的喜剧指数相关。The device for pushing video according to claim 13, wherein the preset amusement level is related to a comedy index of the video.
  15. 根据权利要求10所述的推送视频的装置,其中,所述推送模块,还包括:The device for pushing video according to claim 10, wherein the pushing module further comprises:
    采集单元,用于实时采集用户观看视频过程中的人脸特征变化数据并发送至服务器;An acquisition unit, which is used to collect real-time facial feature change data during a user watching a video and send the data to a server;
    第二接收单元,用于实时接收服务器发送的分析所述人脸特征变化数据,得到的心情状态变化数据;A second receiving unit, configured to receive, in real time, the mood state change data obtained by analyzing the facial feature change data sent by the server;
    判断单元,用于根据所述心情状态变化数据的变化趋势,判断推送的视频是否合适;A judging unit, configured to judge whether the pushed video is appropriate according to a change trend of the mood state change data;
    筛选单元,用于若推送的视频不合适,则重新筛选服务器推送的评分较高的视频。The filtering unit is configured to re-screen videos with higher ratings pushed by the server if the videos pushed are not suitable.
  16. 根据权利要求10所述的推送视频的装置,其中,所述推送视频的装置,还包括:The device for pushing video according to claim 10, wherein the device for pushing video further comprises:
    统计模块,用于统计用户对各推荐视频的评分数据;A statistics module, which is used to collect statistics on user ratings of each recommended video;
    更新模块,用于根据各所述评分数据将各推荐视频一一对应更新至各所述评分数据对应的视频等级文件夹中。An update module is configured to update each of the recommended videos in a one-to-one correspondence to the video rating folder corresponding to each of the rating data according to the rating data.
  17. 一种移动终端,其中,包括处理器和存储器,A mobile terminal including a processor and a memory,
    所述存储器用于存储推送视频的装置执行权利要求1-8任一项所述的推送视频的方法的程序;The memory is configured to store a program for a device that pushes a video to execute the method for pushing a video according to any one of claims 1 to 8;
    所述处理器被配置为用于执行所述存储器中存储的程序。The processor is configured to execute a program stored in the memory.
PCT/CN2019/096212 2018-07-17 2019-07-16 Mobile terminal, and method and apparatus for pushing video WO2020015657A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810785235.4 2018-07-17
CN201810785235.4A CN108989887A (en) 2018-07-17 2018-07-17 The method, apparatus of mobile terminal and pushing video

Publications (1)

Publication Number Publication Date
WO2020015657A1 true WO2020015657A1 (en) 2020-01-23

Family

ID=64549946

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096212 WO2020015657A1 (en) 2018-07-17 2019-07-16 Mobile terminal, and method and apparatus for pushing video

Country Status (2)

Country Link
CN (1) CN108989887A (en)
WO (1) WO2020015657A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125145A (en) * 2021-10-19 2022-03-01 华为技术有限公司 Method and equipment for unlocking display screen

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108989887A (en) * 2018-07-17 2018-12-11 奇酷互联网络科技(深圳)有限公司 The method, apparatus of mobile terminal and pushing video
CN111652014A (en) * 2019-03-15 2020-09-11 上海铼锶信息技术有限公司 Eye spirit identification method
CN111797249A (en) * 2019-04-09 2020-10-20 华为技术有限公司 Content pushing method, device and equipment
CN113223718B (en) * 2021-06-02 2022-07-26 重庆医药高等专科学校 One-stop emotion releasing system

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633098A (en) * 2017-10-18 2018-01-26 维沃移动通信有限公司 A kind of content recommendation method and mobile terminal
US20180075490A1 (en) * 2016-09-09 2018-03-15 Sony Corporation System and method for providing recommendation on an electronic device based on emotional state detection
CN107809674A (en) * 2017-09-30 2018-03-16 努比亚技术有限公司 A kind of customer responsiveness acquisition, processing method, terminal and server based on video
CN107948748A (en) * 2017-11-30 2018-04-20 奇酷互联网络科技(深圳)有限公司 Recommend method, equipment, mobile terminal and the computer-readable storage medium of video
CN108228270A (en) * 2016-12-19 2018-06-29 腾讯科技(深圳)有限公司 Start resource loading method and device
CN108989887A (en) * 2018-07-17 2018-12-11 奇酷互联网络科技(深圳)有限公司 The method, apparatus of mobile terminal and pushing video

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036255B (en) * 2014-06-21 2017-07-07 电子科技大学 A kind of facial expression recognizing method

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075490A1 (en) * 2016-09-09 2018-03-15 Sony Corporation System and method for providing recommendation on an electronic device based on emotional state detection
CN108228270A (en) * 2016-12-19 2018-06-29 腾讯科技(深圳)有限公司 Start resource loading method and device
CN107809674A (en) * 2017-09-30 2018-03-16 努比亚技术有限公司 A kind of customer responsiveness acquisition, processing method, terminal and server based on video
CN107633098A (en) * 2017-10-18 2018-01-26 维沃移动通信有限公司 A kind of content recommendation method and mobile terminal
CN107948748A (en) * 2017-11-30 2018-04-20 奇酷互联网络科技(深圳)有限公司 Recommend method, equipment, mobile terminal and the computer-readable storage medium of video
CN108989887A (en) * 2018-07-17 2018-12-11 奇酷互联网络科技(深圳)有限公司 The method, apparatus of mobile terminal and pushing video

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125145A (en) * 2021-10-19 2022-03-01 华为技术有限公司 Method and equipment for unlocking display screen
CN114125145B (en) * 2021-10-19 2022-11-18 华为技术有限公司 Method for unlocking display screen, electronic equipment and storage medium

Also Published As

Publication number Publication date
CN108989887A (en) 2018-12-11

Similar Documents

Publication Publication Date Title
WO2020015657A1 (en) Mobile terminal, and method and apparatus for pushing video
CN107580143B (en) A kind of display methods and mobile terminal
WO2021129762A1 (en) Application sharing method, electronic device and computer-readable storage medium
WO2020011077A1 (en) Notification message displaying method and terminal device
WO2020143663A1 (en) Display method and mobile terminal
US10133480B2 (en) Method for adjusting input-method keyboard and mobile terminal thereof
WO2019196691A1 (en) Keyboard interface display method and mobile terminal
CN109697008B (en) Content sharing method, terminal and computer readable storage medium
WO2020238647A1 (en) Hand gesture interaction method and terminal
WO2020024770A1 (en) Method for determining communication object, and mobile terminal
WO2020238445A1 (en) Screen recording method and terminal
CN107832601A (en) A kind of application control method and mobile terminal
WO2020119517A1 (en) Input method control method and terminal device
CN108196815A (en) A kind of adjusting method and mobile terminal of sound of conversing
WO2019076377A1 (en) Image viewing method and mobile terminal
CN108196757A (en) The setting method and mobile terminal of a kind of icon
CN110457086A (en) A kind of control method of application program, mobile terminal and server
CN108196781B (en) Interface display method and mobile terminal
CN107249085B (en) Mobile terminal electric quantity display method, mobile terminal and computer readable storage medium
CN108984066A (en) A kind of application icon display methods and mobile terminal
CN109521937B (en) Screen display control method and mobile terminal
CN107832067A (en) One kind applies update method, mobile terminal and computer-readable recording medium
CN107273025A (en) A kind of multi-screen display method, terminal and computer-readable recording medium
CN107967086B (en) Icon arrangement method and device for mobile terminal and mobile terminal
CN109814974A (en) Application Program Interface method of adjustment and mobile terminal

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19838009

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19838009

Country of ref document: EP

Kind code of ref document: A1