WO2020015657A1 - Mobile terminal, and video push method and apparatus - Google Patents

Mobile terminal, and video push method and apparatus

Info

Publication number
WO2020015657A1
WO2020015657A1 PCT/CN2019/096212 CN2019096212W
Authority
WO
WIPO (PCT)
Prior art keywords
video
mood
mood state
user
data
Prior art date
Application number
PCT/CN2019/096212
Other languages
English (en)
Chinese (zh)
Inventor
高杰
Original Assignee
奇酷互联网络科技(深圳)有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 奇酷互联网络科技(深圳)有限公司 filed Critical 奇酷互联网络科技(深圳)有限公司
Publication of WO2020015657A1 publication Critical patent/WO2020015657A1/fr

Links

Images

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • H04N21/44218Detecting physical presence or behaviour of the user, e.g. using sensors to detect if the user is leaving the room or changes his face expression during a TV program
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/25Management operations performed by the server for facilitating the content distribution or administrating data related to end-users or client devices, e.g. end-user or client device authentication, learning user preferences for recommending movies
    • H04N21/251Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies

Definitions

  • the present invention relates to the field of computers, and in particular, to a mobile terminal and a method and device for pushing video.
  • Existing mobile phones not only make calls, but also serve as wallets and provide entertainment functions such as playing games and watching movies.
  • Mobile phones have therefore become inseparable daily necessities, accompanying users like good friends. However, existing mobile phones still cannot identify the user's current mood state and cannot offer a suitable form of companionship according to that state; for example, when the user is in a bad mood, the phone cannot help the user adjust it.
  • The way a mobile phone accompanies its user is thus still not intelligent enough and needs to be improved.
  • The main purpose of the present invention is to provide a method for pushing video, aiming to solve the problem that existing mobile phones cannot identify the user's mood state and therefore cannot help the user adjust their mood.
  • The present invention proposes a method for pushing video, which is applied to a mobile terminal and includes the steps of: obtaining facial feature data when the screen of the mobile terminal is unlocked; analyzing the user's current mood state according to the facial feature data; judging whether the mood state is a preset unhappy mood state; and, if so, pushing a specified video to the user according to a preset rule to adjust the user's mood.
  • Further, the step of analyzing the current mood state of the user according to the facial feature data includes: sending the facial feature data to a server; and receiving mood analysis data obtained by the server after inputting the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  • The mood state recognition model is obtained by inputting collected expression data, as training samples, into a training model.
  • Further, the step of judging whether the mood state is a preset unhappy mood state includes: calculating the Euclidean distance between the vector output by the mood state recognition model for the user's current mood state and a first vector corresponding to the preset unhappy mood state; and, if the Euclidean distance is less than a preset value, determining that the mood state is the preset unhappy mood state.
  • Further, the step of pushing a specified video to the user according to a preset rule includes: obtaining the level corresponding to the user's current unhappy mood state; and selecting, for the user, a video with a preset amusement level according to the level corresponding to the unhappy mood state.
  • the preset amusement level is related to the comedy index of the video.
  • Further, the method includes: collecting facial feature change data in real time while the user watches the video and sending it to the server; receiving, in real time, mood state change data obtained by the server from analysis of the facial feature change data; judging whether the pushed video is appropriate according to the change trend of the mood state change data; and, if it is not appropriate, re-screening higher-rated videos pushed by the server.
  • Further, the method includes: collecting statistics on the user's rating data for each recommended video; and updating each recommended video, in one-to-one correspondence, into the video rating folder corresponding to its rating data.
  • This application also provides a device for pushing video, which is integrated in a mobile terminal and includes:
  • An obtaining module configured to obtain facial feature data when the screen of the mobile terminal is unlocked
  • An analysis module configured to analyze a user's current mood state according to the facial feature data
  • a judging module configured to judge whether the mood state is a preset unhappy mood state
  • a push module is configured to push a specified video to a user according to a preset rule to adjust the user's mood if it is a preset unhappy mood state.
  • analysis module includes:
  • a sending unit configured to send the facial feature data to a server
  • a first receiving unit is configured to receive mood analysis data obtained by the server after inputting the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  • the mood state recognition model is obtained by inputting the collected expression data into a training model as a training sample.
  • The first receiving unit is further configured to receive the vector output by the mood state recognition model for the user's current mood state; the judging module is further configured to determine that the mood state is the preset unhappy mood state when the Euclidean distance between this vector and the first vector corresponding to the preset unhappy mood state is less than a preset value.
  • the push module includes:
  • An obtaining unit configured to obtain a level corresponding to the current unhappy mood state of the user
  • a selecting unit configured to select, for the user, a video with a preset amusement level according to the level corresponding to the unhappy mood state.
  • the preset amusement level is related to the comedy index of the video.
  • the push module further includes:
  • an acquisition unit configured to collect facial feature change data in real time while the user watches a video and send the data to a server;
  • a second receiving unit configured to receive, in real time, the mood state change data obtained by analyzing the facial feature change data sent by the server;
  • a judging unit configured to judge whether the pushed video is appropriate according to a change trend of the mood state change data
  • a filtering unit configured to re-screen higher-rated videos pushed by the server if the pushed video is not suitable.
  • the device for pushing video further includes:
  • a statistics module which is used to collect statistics on user ratings of each recommended video
  • an update module configured to update each recommended video, in one-to-one correspondence, into the video rating folder corresponding to its rating data.
  • This application also provides a mobile terminal, including a processor and a memory,
  • the memory is configured to store a program for the device for pushing a video to execute any one of the methods for pushing a video;
  • the processor is configured to execute a program stored in the memory.
  • In the present application, during face recognition for unlocking the screen, the identified facial feature data is input into a mood state analysis model to analyze and obtain the current user's mood state; when the current mood is determined to be unhappy, a corresponding video is pushed to help the user adjust the mood.
  • This application also classifies the videos that are pushed in order to push videos that better meet user needs, to achieve the purpose of quickly and effectively adjusting the user's mood, and to make the way that mobile phones accompany users more intelligent.
  • FIG. 1 is a schematic flowchart of a method for pushing a video according to an embodiment of the present application
  • FIG. 2 is a schematic block diagram of a structure of a device for pushing video according to an embodiment of the present application
  • FIG. 3 is a schematic block diagram of a mobile terminal according to an embodiment of the present application.
  • an embodiment of the present application provides a method for pushing a video, which is applied to a mobile terminal and includes steps:
  • the mobile terminal in this embodiment includes, but is not limited to, a mobile phone.
  • the facial feature data in this embodiment is facial feature data obtained when face unlocking is performed through face recognition.
  • The same data thus serves two functions, unlocking and mood recognition, which is convenient and quick.
  • This embodiment obtains a user's current mood state through a mood recognition model based on face feature data.
  • The mood recognition model of this embodiment is obtained by collecting facial expression data reflecting the moods of a large number of people and inputting the collected expression data, as training samples, into a training model such as a CNN.
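  • As an illustrative sketch only (not part of the original disclosure), the following Python snippet shows one way such a training model could be set up with PyTorch; the network layout, input size, and number of mood classes are assumptions.

```python
# Hypothetical sketch of training a mood recognition CNN from collected
# expression data; architecture and hyper-parameters are illustrative only.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_MOOD_CLASSES = 4  # assumption: e.g. happy, lost, sad, distressed

class MoodCNN(nn.Module):
    def __init__(self, num_classes=NUM_MOOD_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, kernel_size=3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
        )
        self.classifier = nn.Linear(32 * 16 * 16, num_classes)

    def forward(self, x):                     # x: (batch, 1, 64, 64) face crops
        h = self.features(x).flatten(1)
        return self.classifier(h)             # mood "vector" (class scores)

def train(model, faces, labels, epochs=5):
    """faces: float tensor (N, 1, 64, 64); labels: long tensor (N,)."""
    loader = DataLoader(TensorDataset(faces, labels), batch_size=32, shuffle=True)
    optim = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for x, y in loader:
            optim.zero_grad()
            loss = loss_fn(model(x), y)       # supervised training on expression samples
            loss.backward()
            optim.step()
    return model
```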
  • The preset unhappy mood state in this embodiment is represented by a corresponding first vector output by the mood recognition model. When the user's current mood state is passed through the mood recognition model, a second vector is output, and the Euclidean distance between the second vector and the first vector is calculated. If the Euclidean distance is less than a preset value, indicating that the second vector is close to the first vector, the user's current mood state is judged to be unhappy; otherwise it is judged to be happy.
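  • A minimal sketch of this vector comparison (the variable names and the threshold value are assumptions, not taken from the disclosure):

```python
import numpy as np

UNHAPPY_DISTANCE_THRESHOLD = 0.5  # assumed preset value

def is_unhappy(second_vector, first_vector, threshold=UNHAPPY_DISTANCE_THRESHOLD):
    """Return True if the current mood vector (second vector) is close to the
    preset 'unhappy' reference vector (first vector), i.e. their Euclidean
    distance is below the preset value; otherwise the mood is treated as happy."""
    distance = np.linalg.norm(np.asarray(second_vector) - np.asarray(first_vector))
    return distance < threshold

# Example: a current mood vector close to the unhappy reference vector
print(is_unhappy([0.9, 0.1, 0.2], [1.0, 0.0, 0.0]))  # True with this threshold
```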
  • a corresponding video may be pushed to the user to adjust the mood state of the user.
  • The preset rules in this embodiment are push methods or video selection methods that can better adjust the user's mood, including pushing videos according to user preferences, pushing highly rated videos based on large amounts of user praise data in a large database, pushing preset videos according to different mood state levels, or combinations of these and other push methods.
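  • The following sketch illustrates, under assumed data sources and video names, how these preset rules could be combined when choosing a video to push; it is not the disclosed implementation.

```python
# Assumed mapping from mood-state level to preset candidate videos.
PRESET_VIDEOS_BY_LEVEL = {
    1: ["light_comedy_clip.mp4"],
    2: ["sitcom_episode.mp4", "standup_set.mp4"],
    3: ["slapstick_special.mp4"],
}

def choose_push_strategy(user_preferences, mood_level, top_rated):
    """Combine the preset push rules described above: prefer a video the user
    likes, fall back to widely praised videos from the large database, and
    always respect the preset pool for the current mood-state level.
    All inputs are illustrative; a real system would query real data sources."""
    level_pool = PRESET_VIDEOS_BY_LEVEL.get(mood_level, [])
    # 1) user preference that also fits the mood level
    for video in user_preferences:
        if video in level_pool:
            return video
    # 2) highly praised video from the large database that fits the level
    for video in top_rated:
        if video in level_pool:
            return video
    # 3) fall back to any preset video for this level
    return level_pool[0] if level_pool else None

print(choose_push_strategy(["standup_set.mp4"], 2, ["sitcom_episode.mp4"]))
# -> standup_set.mp4 (user preference that matches the mood level)
```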
  • step S2 in this embodiment includes:
  • S20 Send the facial feature data to a server.
  • the training process of the mood recognition model in this embodiment is completed on the server side.
  • the training process of the mood recognition model requires a large amount of data and calculation processes, and is completed on the server side to reduce the operating load of the mobile phone.
  • the mood recognition process of this embodiment is also completed on the server side to ensure the consistency of the mood recognition model after training.
  • Therefore, the obtained facial feature data is sent to the server and then input, on the server, into the mood recognition model, so as to complete the recognition and analysis of the user's mood state.
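  • A hypothetical client-side sketch of this exchange is shown below; the endpoint URL and the response fields are invented for illustration only.

```python
# Hypothetical client call: the mobile terminal sends the facial feature data
# captured at unlock to the server, which runs the mood recognition model and
# returns the mood state together with its preset level.
import requests

def request_mood_analysis(facial_features):
    """facial_features: list of floats extracted during face unlock."""
    resp = requests.post(
        "https://example.com/api/mood-analysis",          # assumed endpoint
        json={"facial_features": facial_features},
        timeout=5,
    )
    resp.raise_for_status()
    data = resp.json()   # e.g. {"mood_state": "unhappy", "level": 2} (assumed shape)
    return data["mood_state"], data["level"]
```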
  • S21: Receive the mood analysis data obtained by the server after inputting the facial feature data into a mood state recognition model, wherein the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  • the data output by the mood recognition model in this embodiment carries a preset level corresponding to a mood state.
  • The unhappy mood state includes preset levels such as lost, relatively sad, and very pained or distressed.
  • the unhappy mood state is refined to more accurately push videos suitable for the current mood state of the user.
  • For example, when the unhappy mood state of this embodiment is at the "lost" level, the facial expression appears melancholy, with the corners of the mouth pulled down; when such facial feature data is input into the mood recognition model, the first classification vector within the first vector is output. Similarly, when sad facial feature data is input into the mood recognition model, the second classification vector within the first vector is output; when very pained or distressed facial feature data is input into the mood recognition model, the third classification vector within the first vector is output. The first vector corresponding to the preset unhappy mood state is thus subdivided into several small partitions, so that the corresponding video to push can be selected more accurately.
  • step S4 in this embodiment includes:
  • S40: Obtain the level corresponding to the user's current unhappy mood state. The first vector corresponding to the preset unhappy mood state is subdivided into several small partitions; whichever partition the vector for the current mood state is closest to indicates the preset level of the current mood state.
  • S41: Select, for the user, a video with a preset amusement level according to the level corresponding to the unhappy mood state.
  • The preset-amusement-level videos in this embodiment are related to the comedy index of the video and include a first-level amusement video, a second-level amusement video, and a third-level amusement video, respectively corresponding to the first, second, and third classification vectors within the first vector corresponding to the unhappy mood state.
  • For example, when the user's current mood state corresponds to the second classification vector (relatively sad), pushing the second-level amusement video can turn the user's tears into laughter. If a first-level amusement video were pushed instead, it might not adjust the mood, or might not adjust it quickly; if a third-level amusement video were pushed, the adjustment would be excessive and feel frivolous. The unhappy mood state is best adjusted toward a peaceful mood.
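  • As a rough sketch of this level-to-video mapping (the classification vectors and the video catalogue below are illustrative assumptions):

```python
import numpy as np

# Assumed classification sub-vectors of the "unhappy" first vector,
# one per preset level of the unhappy mood state.
CLASSIFICATION_VECTORS = {
    1: np.array([1.0, 0.0, 0.0]),   # lost
    2: np.array([0.0, 1.0, 0.0]),   # relatively sad
    3: np.array([0.0, 0.0, 1.0]),   # very distressed
}

# Assumed catalogue mapping amusement level -> candidate videos,
# ordered by comedy index.
VIDEO_CATALOGUE = {
    1: ["light_comedy_clip.mp4"],
    2: ["sitcom_episode.mp4"],
    3: ["slapstick_special.mp4"],
}

def unhappy_level(current_vector):
    """Pick the preset level whose classification sub-vector is nearest
    (smallest Euclidean distance) to the current mood vector."""
    current = np.asarray(current_vector)
    return min(CLASSIFICATION_VECTORS,
               key=lambda lvl: np.linalg.norm(current - CLASSIFICATION_VECTORS[lvl]))

def select_video(current_vector):
    """Level 1 -> first-level amusement video, level 2 -> second-level, etc."""
    level = unhappy_level(current_vector)
    return VIDEO_CATALOGUE[level][0]

print(select_video([0.1, 0.9, 0.2]))   # nearest to level 2 -> sitcom_episode.mp4
```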
  • step S4 in this embodiment further includes:
  • S42: Collect facial feature change data in real time while the user watches the video and send it to the server.
  • After the video is pushed, facial feature change data is collected in real time while the user watches it, for example by collecting facial feature data at fixed time intervals to form the facial feature change data over the viewing period. The facial feature change data is then input into the mood recognition model in turn for analysis.
  • S43: Receive, in real time, the mood state change data obtained by the server from analysis of the facial feature change data.
  • the mood state change data in this embodiment may be formed according to an output vector value corresponding to the facial feature change data.
  • The output vector values fluctuate over time, and these fluctuations constitute the mood state change data.
  • S44: Determine whether the pushed video is appropriate according to the change trend of the mood state change data.
  • A gentle change trend in which the mood state gradually improves is considered best, and this is used to determine whether the pushed video is suitable: if the mood state change data follows such a gentle improving trend, the pushed video is appropriate; otherwise it is inappropriate.
  • In this embodiment, "not suitable" means that the pushed video does not have the effect of improving the mood state.
  • If the pushed video is not suitable, higher-rated videos pushed by the server are re-screened and pushed. The higher-rated videos in this embodiment include videos rated highly in the user's historical data as well as videos rated highly in large databases.
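  • A possible way to express the "gentle improving trend" check in code, with assumed thresholds and an assumed numeric mood score, is sketched below.

```python
import numpy as np

def video_is_suitable(mood_scores, min_improvement=0.05, max_jump=0.3):
    """mood_scores: mood-state values sampled at fixed intervals while the
    user watches the pushed video (higher = better mood). The video is taken
    as suitable if the overall trend improves and the change stays gentle,
    i.e. no single step jumps more than `max_jump`. Thresholds are assumed."""
    scores = np.asarray(mood_scores, dtype=float)
    if len(scores) < 2:
        return False                       # not enough data to judge a trend
    steps = np.diff(scores)
    overall_improvement = scores[-1] - scores[0]
    gentle = np.all(np.abs(steps) <= max_jump)
    return overall_improvement >= min_improvement and gentle

# Example: slowly improving mood -> the pushed video is kept
print(video_is_suitable([0.2, 0.25, 0.33, 0.4]))   # True
# Example: flat or worsening mood -> re-screen higher-rated videos
print(video_is_suitable([0.4, 0.38, 0.35, 0.3]))   # False
```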
  • In another embodiment of the present application, after step S4 the method further includes collecting statistics on the user's ratings of each recommended video.
  • In this embodiment, the pushed video can only be exited after the user gives a corresponding rating after watching, so that the user's rating data for each recommended video can be collected and counted; a database of the user's own preferences is then formed from the rating data, so that targeted videos can be pushed.
  • Each video is stored in a different folder according to the user's rating of it, so that the user's historically viewed videos are classified by folder during storage.
  • the total score is 10 points
  • a rating of 0 to 5 corresponds to a negative review video
  • a rating of 6 to 8 corresponds to a positive review video
  • a rating of 9 to 10 corresponds to a highly rated video.
  • Both the positively reviewed videos and the highly rated videos in this embodiment include videos of different amusement levels.
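  • A minimal sketch of this rating-to-folder classification, assuming the 10-point scale and folder names described above:

```python
def rating_folder(rating):
    """Map a 10-point user rating to the video rating folder described above:
    0-5 -> negative reviews, 6-8 -> positive reviews, 9-10 -> highly rated."""
    if not 0 <= rating <= 10:
        raise ValueError("rating must be between 0 and 10")
    if rating <= 5:
        return "negative_reviews"
    if rating <= 8:
        return "positive_reviews"
    return "highly_rated"

def file_recommended_videos(ratings):
    """ratings: dict of video id -> user rating; returns folder assignments,
    one per recommended video, mirroring the one-to-one update step."""
    return {video: rating_folder(score) for video, score in ratings.items()}

print(file_recommended_videos({"clip_a": 4, "clip_b": 7, "clip_c": 9}))
# {'clip_a': 'negative_reviews', 'clip_b': 'positive_reviews', 'clip_c': 'highly_rated'}
```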
  • In summary, the method for pushing video of the present application inputs the facial feature data identified during face recognition for unlocking the screen into a mood state analysis model and analyzes it to obtain the current user's mood state. When the current mood is unhappy, a corresponding video is pushed to help the user adjust the mood.
  • This application also classifies the videos that are pushed in order to push videos that are more in line with user needs, to quickly and effectively adjust the user ’s mood, and to make the way that mobile phones accompany users more intelligent.
  • an embodiment of the present application further provides a device for pushing video, which is integrated in a mobile terminal and includes:
  • the obtaining module 1 is configured to obtain facial feature data when the screen of the mobile terminal is unlocked.
  • the mobile terminal in this embodiment includes, but is not limited to, a mobile phone.
  • the facial feature data in this embodiment is facial feature data obtained when face unlocking is performed through face recognition.
  • The same data thus serves two functions, unlocking and mood recognition, which is convenient and quick.
  • the analysis module 2 is configured to analyze a current mood state of the user according to the facial feature data.
  • This embodiment obtains a user's current mood state through a mood recognition model based on face feature data.
  • The mood recognition model of this embodiment is obtained by collecting facial expression data reflecting the moods of a large number of people and inputting the collected expression data, as training samples, into a training model such as a CNN.
  • the determining module 3 is configured to determine whether the mood state is a preset unhappy mood state.
  • The preset unhappy mood state in this embodiment is represented by a corresponding first vector output by the mood recognition model. When the user's current mood state is passed through the mood recognition model, a second vector is output, and the Euclidean distance between the second vector and the first vector is calculated. If the Euclidean distance is less than a preset value, indicating that the second vector is close to the first vector, the user's current mood state is judged to be unhappy; otherwise it is judged to be happy.
  • the pushing module 4 is configured to push a specified video to the user according to a preset rule to adjust the user's mood if it is a preset unhappy mood state.
  • a corresponding video may be pushed to the user to adjust the mood state of the user.
  • The preset rules in this embodiment are push methods or video selection methods that can better adjust the user's mood, including pushing videos according to user preferences, pushing highly rated videos based on large amounts of user praise data in a large database, pushing preset videos according to different mood state levels, or combinations of these and other push methods.
  • analysis module 2 includes:
  • a sending unit configured to send the facial feature data to a server.
  • the training process of the mood recognition model in this embodiment is completed on the server side.
  • the training process of the mood recognition model requires a large amount of data and calculation processes, and is completed on the server side to reduce the operating load of the mobile phone.
  • the mood recognition process of this embodiment is also completed on the server side to ensure the consistency of the mood recognition model after training.
  • Therefore, the obtained facial feature data is sent to the server and then input, on the server, into the mood recognition model, so as to complete the recognition and analysis of the user's mood state.
  • the first receiving unit is configured to receive mood analysis data obtained by the server after inputting the facial feature data into a mood state recognition model, where the mood analysis data includes a mood state and a preset level corresponding to the mood state.
  • the data output by the mood recognition model in this embodiment carries a preset level corresponding to a mood state.
  • The unhappy mood state includes preset levels such as lost, relatively sad, and very pained or distressed.
  • the unhappy mood state is refined to more accurately push videos suitable for the current mood state of the user.
  • For example, when the unhappy mood state of this embodiment is at the "lost" level, the facial expression appears melancholy, with the corners of the mouth pulled down; when such facial feature data is input into the mood recognition model, the first classification vector within the first vector is output. Similarly, when sad facial feature data is input into the mood recognition model, the second classification vector within the first vector is output; when very pained or distressed facial feature data is input into the mood recognition model, the third classification vector within the first vector is output. The first vector corresponding to the preset unhappy mood state is thus subdivided into several small partitions, so that the corresponding video to push can be selected more accurately.
  • the push module 4 includes:
  • the obtaining unit is configured to obtain a level corresponding to the current unhappy mood state of the user.
  • The first vector corresponding to the preset unhappy mood state is subdivided into several small partitions; whichever partition the vector for the current mood state is closest to indicates the preset level of the current mood state.
  • The selecting unit is configured to select, for the user, a video with a preset amusement level according to the level corresponding to the unhappy mood state.
  • The preset-amusement-level videos in this embodiment are related to the comedy index of the video and include a first-level amusement video, a second-level amusement video, and a third-level amusement video, respectively corresponding to the first, second, and third classification vectors within the first vector corresponding to the unhappy mood state.
  • For example, when the user's current mood state corresponds to the second classification vector (relatively sad), pushing the second-level amusement video can turn the user's tears into laughter. If a first-level amusement video were pushed instead, it might not adjust the mood, or might not adjust it quickly; if a third-level amusement video were pushed, the adjustment would be excessive and feel frivolous. The unhappy mood state is best adjusted toward a peaceful mood.
  • the push module 4 further includes:
  • The acquisition unit is configured to collect facial feature change data in real time while the user watches a video and send the data to a server.
  • After the video is pushed, facial feature change data is collected in real time while the user watches it, for example by collecting facial feature data at fixed time intervals to form the facial feature change data over the viewing period. The facial feature change data is then input into the mood recognition model in turn for analysis.
  • the second receiving unit is configured to receive, in real time, the mood state change data obtained by analyzing the facial feature change data sent by the server.
  • the mood state change data in this embodiment may be formed according to an output vector value corresponding to the facial feature change data.
  • The output vector values fluctuate over time, and these fluctuations constitute the mood state change data.
  • the judging unit is configured to judge whether the pushed video is appropriate according to a change trend of the mood state change data.
  • A gentle change trend in which the mood state gradually improves is considered best, and this is used to determine whether the pushed video is suitable: if the mood state change data follows such a gentle improving trend, the pushed video is appropriate; otherwise it is inappropriate.
  • the filtering unit is configured to re-screen videos with higher ratings pushed by the server if the videos pushed are not suitable.
  • In this embodiment, "not suitable" means that the pushed video does not have the effect of improving the mood state.
  • If the pushed video is not suitable, higher-rated videos pushed by the server are re-screened and pushed. The higher-rated videos in this embodiment include videos rated highly in the user's historical data as well as videos rated highly in large databases.
  • the device for pushing video further includes:
  • the statistics module is used to collect statistics on user ratings of each recommended video.
  • In this embodiment, the pushed video can only be exited after the user gives a corresponding rating after watching, so that the user's rating data for each recommended video can be collected and counted; a database of the user's own preferences is then formed from the rating data, so that targeted videos can be pushed.
  • The update module is configured to update each recommended video, in one-to-one correspondence, into the video rating folder corresponding to its rating data.
  • Each video is stored in a different folder according to the user's rating of it, so that the user's historically viewed videos are classified by folder during storage.
  • the total score is 10 points
  • a rating of 0 to 5 corresponds to a negative review video
  • a rating of 6 to 8 corresponds to a positive review video
  • a rating of 9 to 10 corresponds to a highly rated video.
  • Both the positively reviewed videos and the highly rated videos in this embodiment include videos of different amusement levels.
  • In summary, the device for pushing video of the present application inputs the facial feature data identified during face recognition for unlocking the screen into a mood state analysis model and analyzes it to obtain the current user's mood state. When the current mood is unhappy, a corresponding video is pushed to help the user adjust the mood.
  • This application also classifies the videos that are pushed in order to push videos that better meet user needs, to achieve the purpose of quickly and effectively adjusting the user's mood, and to make the way that mobile phones accompany users more intelligent.
  • An embodiment of the present invention further provides a mobile terminal including a processor 1080 and a memory 1020, where the memory 1020 is configured to store a program enabling the device for pushing video to execute the foregoing method for pushing video, and the processor 1080 is configured to execute the program stored in the memory.
  • the mobile terminal may be any terminal device including a mobile phone, a tablet computer, a PDA (Personal Digital Assistant), a POS (Point of Sales), and a vehicle-mounted computer.
  • FIG. 3 is a block diagram showing a partial structure of a mobile phone related to a mobile terminal according to an embodiment of the present invention.
  • the mobile phone includes: a radio frequency (RF) circuit 1010, a memory 1020, an input unit 1030, a display unit 1040, a sensor 1050, an audio circuit 1060, a wireless fidelity (WiFi) module 1070, and a processor 1080 , And power supply 1090 and other components.
  • the RF circuit 1010 can be used for receiving and transmitting signals during information transmission and reception or during a call.
  • Specifically, downlink information from a base station is received and handed to the processor 1080 for processing, and uplink data is transmitted to the base station.
  • the RF circuit 1010 includes, but is not limited to, an antenna, at least one amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like.
  • the RF circuit 1010 can also communicate with a network and other devices through wireless communication.
  • The above wireless communication can use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail, Short Messaging Service (SMS), etc.
  • the memory 1020 may be used to store software programs and modules.
  • the processor 1080 executes various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1020.
  • The memory 1020 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system and application programs required by at least one function (such as a sound playback function, an image playback function, etc.), and the data storage area may store data (such as audio data, a phone book, etc.) created during use of the mobile phone.
  • the memory 1020 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or other volatile solid-state storage devices.
  • the input unit 1030 can be used to receive inputted numeric or character information, and generate key signal inputs related to user settings and function control of the mobile phone.
  • the input unit 1030 may include a touch panel 1031 and other input devices 1032.
  • The touch panel 1031, also known as a touch screen, can collect the user's touch operations on or near it (such as operations performed by the user on or near the touch panel 1031 with a finger, a stylus, or any other suitable object or accessory) and drive the corresponding connected device according to a preset program.
  • the touch panel 1031 may include two parts, a touch detection device and a touch controller.
  • The touch detection device detects the user's touch position, detects the signal generated by the touch operation, and transmits the signal to the touch controller; the touch controller receives the touch information from the touch detection device, converts it into contact coordinates, and sends them to the processor 1080, and can also receive and execute commands sent by the processor 1080.
  • The touch panel 1031 may be implemented in various types, such as resistive, capacitive, infrared, and surface acoustic wave.
  • the input unit 1030 may include other input devices 1032.
  • other input devices 1032 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, an operation lever, and the like.
  • the display unit 1040 may be used to display information input by the user or information provided to the user and various menus of the mobile phone.
  • the display unit 1040 may include a display panel 1041, and optionally, the display panel 1041 may be configured in the form of a liquid crystal display (LCD), an organic light-emitting diode (OLED), or the like.
  • Further, the touch panel 1031 may cover the display panel 1041. When the touch panel 1031 detects a touch operation on or near it, it transmits the operation to the processor 1080 to determine the type of the touch event, and the processor 1080 then provides a corresponding visual output on the display panel 1041 according to the type of the touch event.
  • Although in FIG. 3 the touch panel 1031 and the display panel 1041 are implemented as two independent components to realize the input and output functions of the mobile phone, in some embodiments the touch panel 1031 and the display panel 1041 may be integrated to realize the input and output functions of the mobile phone.
  • the mobile phone may further include at least one sensor 1050, such as a light sensor, a motion sensor, and other sensors.
  • the light sensor may include an ambient light sensor and a proximity sensor.
  • the ambient light sensor may adjust the brightness of the display panel 1041 according to the brightness of the ambient light.
  • The proximity sensor may turn off the display panel 1041 and/or the backlight.
  • the accelerometer sensor can detect the magnitude of acceleration in various directions (usually three axes), and can detect the magnitude and direction of gravity when it is stationary.
  • It can be used for applications that recognize the phone's attitude (such as switching between horizontal and vertical screens, related games, and magnetometer attitude calibration) and for vibration-recognition-related functions (such as a pedometer or tap detection); the mobile phone may also be equipped with a gyroscope, barometer, hygrometer, thermometer, infrared sensor, and other sensors, which are not described here again.
  • the audio circuit 1060, the speaker 1061, and the microphone 1062 can provide an audio interface between the user and the mobile phone.
  • The audio circuit 1060 can convert received audio data into an electrical signal and transmit it to the speaker 1061, and the speaker 1061 converts it into a sound signal for output.
  • Conversely, the microphone 1062 converts a collected sound signal into an electrical signal, which the audio circuit 1060 receives and converts into audio data; the audio data is then output to the processor 1080 for processing and sent, for example, to another mobile phone via the RF circuit 1010, or output to the memory 1020 for further processing.
  • WiFi is a short-range wireless transmission technology.
  • the mobile phone can help users send and receive emails, browse web pages, and access streaming media through the WiFi module 1070. It provides users with wireless broadband Internet access.
  • Although FIG. 3 shows the WiFi module 1070, it can be understood that it is not an essential part of the mobile phone and can be omitted as needed without changing the essence of the invention.
  • the processor 1080 is the control center of the mobile phone. It uses various interfaces and lines to connect various parts of the entire mobile phone.
  • The processor 1080 runs or executes the software programs and/or modules stored in the memory 1020 and calls data stored in the memory 1020 to perform the various functions of the mobile phone and process data, thereby monitoring the mobile phone as a whole.
  • the processor 1080 may include one or more processing units; preferably, the processor 1080 may integrate an application processor and a modem processor, wherein the application processor mainly processes an operating system, a user interface, and an application program, etc.
  • the modem processor mainly handles wireless communication. It can be understood that the foregoing modem processor may not be integrated into the processor 1080.
  • the mobile phone also includes a power supply 1090 (such as a battery) for supplying power to various components.
  • the power supply can be logically connected to the processor 1080 through a power management system, so as to implement functions such as management of charging, discharging, and power consumption management through the power management system.
  • the mobile phone may further include a camera, a Bluetooth module, and the like, and details are not described herein again.
  • In the embodiment of the present invention, the processor 1080 included in the mobile terminal also has the following functions: obtaining facial feature data when the screen of the mobile terminal is unlocked; analyzing the user's current mood state according to the facial feature data; judging whether the mood state is a preset unhappy mood state; and, if so, pushing a specified video to the user according to a preset rule to adjust the user's mood.
  • the disclosed systems, devices, and methods may be implemented in other ways.
  • the device embodiments described above are only schematic.
  • the division of the unit is only a logical function division.
  • For example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the displayed or discussed mutual coupling or direct coupling or communication connection may be indirect coupling or communication connection through some interfaces, devices or units, which may be electrical, mechanical or other forms.
  • each functional unit in each embodiment of the present invention may be integrated into one processing unit, or each unit may exist separately physically, or two or more units may be integrated into one unit.
  • the above integrated unit may be implemented in the form of hardware or in the form of software functional unit.
  • the program may be stored in a computer-readable storage medium.
  • the medium may be a read-only memory, a magnetic disk, or an optical disk.

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Social Psychology (AREA)
  • Computing Systems (AREA)
  • Computer Networks & Wireless Communication (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The invention relates to a mobile terminal and to a method and apparatus for pushing video, the method and apparatus being applied to the mobile terminal, and the method comprising the following steps: acquiring facial feature data when the screen of a mobile terminal is unlocked; analyzing the user's current mood state according to the facial feature data; determining whether the mood state is a preset unhappy mood state; and, if so, pushing a specified video to the user according to a preset rule so as to adjust the user's mood. According to the present invention, in the process of unlocking a screen by facial recognition, the recognized facial feature data is supplied to a mood analysis model for analysis in order to obtain the current user's mood state; and when the user's current mood is determined to be an unhappy state, a corresponding video is pushed to help the user adjust their mood. In the present invention, the videos to be pushed are also classified so that videos better matching the user's needs are pushed, in order to achieve the goal of quickly and effectively adjusting the user's mood and to allow a mobile phone to accompany the user more intelligently.
PCT/CN2019/096212 2018-07-17 2019-07-16 Mobile terminal, and video push method and apparatus WO2020015657A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810785235.4 2018-07-17
CN201810785235.4A CN108989887A (zh) 2018-07-17 2018-07-17 移动终端和推送视频的方法、装置

Publications (1)

Publication Number Publication Date
WO2020015657A1 true WO2020015657A1 (fr) 2020-01-23

Family

ID=64549946

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/096212 WO2020015657A1 (fr) 2018-07-17 2019-07-16 Terminal mobile, et procédé et appareil d'envoi en poussée de vidéo

Country Status (2)

Country Link
CN (1) CN108989887A (fr)
WO (1) WO2020015657A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125145A (zh) * 2021-10-19 2022-03-01 华为技术有限公司 显示屏解锁的方法及其设备

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108989887A (zh) * 2018-07-17 2018-12-11 奇酷互联网络科技(深圳)有限公司 移动终端和推送视频的方法、装置
CN111652014A (zh) * 2019-03-15 2020-09-11 上海铼锶信息技术有限公司 一种眼神识别方法
CN111797304A (zh) * 2019-04-09 2020-10-20 华为技术有限公司 一种内容推送方法、装置与设备
CN111797249A (zh) * 2019-04-09 2020-10-20 华为技术有限公司 一种内容推送方法、装置与设备
CN113223718B (zh) * 2021-06-02 2022-07-26 重庆医药高等专科学校 一站式情绪宣泄系统

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107633098A (zh) * 2017-10-18 2018-01-26 维沃移动通信有限公司 一种内容推荐方法及移动终端
US20180075490A1 (en) * 2016-09-09 2018-03-15 Sony Corporation System and method for providing recommendation on an electronic device based on emotional state detection
CN107809674A (zh) * 2017-09-30 2018-03-16 努比亚技术有限公司 一种基于视频的用户反应获取、处理方法、终端及服务器
CN107948748A (zh) * 2017-11-30 2018-04-20 奇酷互联网络科技(深圳)有限公司 推荐视频的方法、设备、移动终端及计算机存储介质
CN108228270A (zh) * 2016-12-19 2018-06-29 腾讯科技(深圳)有限公司 启动资源加载方法及装置
CN108989887A (zh) * 2018-07-17 2018-12-11 奇酷互联网络科技(深圳)有限公司 移动终端和推送视频的方法、装置

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104036255B (zh) * 2014-06-21 2017-07-07 电子科技大学 一种人脸表情识别方法

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180075490A1 (en) * 2016-09-09 2018-03-15 Sony Corporation System and method for providing recommendation on an electronic device based on emotional state detection
CN108228270A (zh) * 2016-12-19 2018-06-29 腾讯科技(深圳)有限公司 启动资源加载方法及装置
CN107809674A (zh) * 2017-09-30 2018-03-16 努比亚技术有限公司 一种基于视频的用户反应获取、处理方法、终端及服务器
CN107633098A (zh) * 2017-10-18 2018-01-26 维沃移动通信有限公司 一种内容推荐方法及移动终端
CN107948748A (zh) * 2017-11-30 2018-04-20 奇酷互联网络科技(深圳)有限公司 推荐视频的方法、设备、移动终端及计算机存储介质
CN108989887A (zh) * 2018-07-17 2018-12-11 奇酷互联网络科技(深圳)有限公司 移动终端和推送视频的方法、装置

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114125145A (zh) * 2021-10-19 2022-03-01 华为技术有限公司 显示屏解锁的方法及其设备
CN114125145B (zh) * 2021-10-19 2022-11-18 华为技术有限公司 显示屏解锁的方法、电子设备及存储介质

Also Published As

Publication number Publication date
CN108989887A (zh) 2018-12-11

Similar Documents

Publication Publication Date Title
WO2020015657A1 (fr) Terminal mobile, et procédé et appareil d'envoi en poussée de vidéo
CN107580143B (zh) 一种显示方法及移动终端
WO2020011077A1 (fr) Procédé d'affichage de messages de notification et dispositif terminal
WO2020143663A1 (fr) Procédé d'affichage et terminal mobile
US10133480B2 (en) Method for adjusting input-method keyboard and mobile terminal thereof
WO2019196691A1 (fr) Procédé d'affichage d'interface de clavier et terminal mobile
CN109697008B (zh) 一种内容分享方法、终端及计算机可读存储介质
WO2020238647A1 (fr) Procédé et terminal d'interaction par geste de la main
WO2020024770A1 (fr) Procédé pour déterminer un objet de communication, et terminal mobile
WO2020238445A1 (fr) Procédé d'enregistrement d'écran et terminal
CN107832601A (zh) 一种应用程序控制方法及移动终端
WO2019076377A1 (fr) Procédé de visualisation d'image et terminal mobile
CN109521937B (zh) 一种屏幕显示控制方法及移动终端
WO2020119517A1 (fr) Procédé de commande de procédé d'entrée et dispositif terminal
CN108984066A (zh) 一种应用程序图标显示方法及移动终端
CN108196815A (zh) 一种通话声音的调节方法和移动终端
CN108196757A (zh) 一种图标的设置方法及移动终端
CN110457086A (zh) 一种应用程序的控制方法、移动终端及服务器
CN108196781B (zh) 界面的显示方法和移动终端
CN107249085B (zh) 移动终端电量显示方法、移动终端及计算机可读存储介质
CN111313114B (zh) 一种充电方法及电子设备
CN108628534B (zh) 一种字符展示方法及移动终端
CN107832067A (zh) 一种应用更新方法、移动终端和计算机可读存储介质
CN107273025A (zh) 一种分屏显示方法、终端及计算机可读存储介质
CN107967086B (zh) 一种移动终端的图标排列方法及装置、移动终端

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19838009

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19838009

Country of ref document: EP

Kind code of ref document: A1