CN116405736A - Video recommendation method, device, electronic equipment and storage medium - Google Patents

Video recommendation method, device, electronic equipment and storage medium

Info

Publication number
CN116405736A
Authority
CN
China
Prior art keywords
user
video
information
state
interaction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202310370902.3A
Other languages
Chinese (zh)
Other versions
CN116405736B (en)
Inventor
张敬相
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310370902.3A priority Critical patent/CN116405736B/en
Publication of CN116405736A publication Critical patent/CN116405736A/en
Application granted granted Critical
Publication of CN116405736B publication Critical patent/CN116405736B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4668Learning process for intelligent management, e.g. learning user preferences for recommending movies for recommending content, e.g. movies
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/442Monitoring of processes or resources, e.g. detecting the failure of a recording device, monitoring the downstream bandwidth, the number of times a movie has been viewed, the storage space available from the internal hard disk
    • H04N21/44213Monitoring of end-user related data
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/466Learning process for intelligent management, e.g. learning user preferences for recommending movies
    • H04N21/4667Processing of monitored end-user data, e.g. trend analysis based on the log file of viewer selections

Abstract

The disclosure provides a video recommendation method, a video recommendation device, electronic equipment and a storage medium, and relates to the field of artificial intelligence, in particular to the fields of big data and computer vision. The specific implementation scheme is as follows: a video recommendation method, the method comprising: acquiring interaction association information between a user and user equipment while the user watches a video; acquiring, in response to the interaction association information, state information of the state the user is in; determining, in response to the state information, video viewing preference information matching the state; and generating a target video set and recommending the target video set to the user in response to the video viewing preference information.

Description

Video recommendation method, device, electronic equipment and storage medium
Technical Field
The disclosure relates to the field of artificial intelligence, in particular to the field of big data and computer vision, and provides a video recommendation method, a video recommendation device, electronic equipment and a storage medium.
Background
At present, a video APP (application) generally provides an automatic playing function, that is, the next video is played automatically once the current video has finished playing, which greatly reduces the interaction time required of the user.
However, in the automatic playing mode the degree of user intervention in the video (for example, whether the user swipes, likes, or comments) is greatly reduced, which weakens the relevance of video recommendation and affects key core indicators such as recommendation accuracy, video distribution volume, and user playing duration; prolonged automatic playing may even disturb the user portrait, so that video recommendation no longer matches the user's viewing expectations and the overall user experience is degraded.
Disclosure of Invention
The disclosure provides a video recommendation method, a video recommendation device, electronic equipment and a storage medium.
According to an aspect of the present disclosure, there is provided a video recommendation method, the method including:
acquiring interaction association information between a user and user equipment when watching video;
responding to the interaction association information, and acquiring state information of the state of the user;
determining video viewing preference information matching the state in response to the state information;
and generating a target video set and recommending the target video set to the user in response to the video watching preference information.
According to another aspect of the present disclosure, there is provided a video recommendation apparatus, the apparatus including:
The interactive association information acquisition module is used for acquiring interactive association information between the user and the user equipment when the user watches the video;
the state information acquisition module is used for responding to the interaction associated information and acquiring state information of the state of the user;
a viewing preference determining module for determining video viewing preference information matching the state in response to the state information;
and the video recommendation module is used for responding to the video watching preference information, generating a target video set and recommending the target video set to the user.
According to another aspect of the present disclosure, there is provided an electronic apparatus including:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method described above.
According to another aspect of the present disclosure, there is provided a non-transitory computer-readable storage medium storing computer instructions for causing the computer to perform the above-described method.
According to another aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements the method described above.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a flowchart of a video recommendation method according to a first embodiment of the present disclosure;
fig. 2 is a schematic view of a video recommendation method according to a second embodiment of the disclosure;
fig. 3 is a schematic block diagram of a video recommendation device according to a third embodiment of the disclosure;
fig. 4 is a schematic structural diagram of an electronic device implementing a video recommendation method according to a fourth embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Example 1
As shown in fig. 1, the video recommendation method of the present embodiment includes:
s101, acquiring interaction association information between a user and user equipment when watching video;
Specifically, when the user watches a video with a video playing application, the user interacts with the user equipment by holding it in hand or by placing it on a support so that it stays fixed; in addition, the user can also interact with the user equipment through touch operations and the like. The user equipment includes, but is not limited to, a mobile phone and a tablet computer.
S102, responding to the interaction association information, and acquiring state information of a state where a user is located;
Specifically, the interaction between the user and the user equipment is monitored in real time or at a preset monitoring frequency, and the most probable actual behavior state of the user is obtained, for example a driving state, or a state in which the mobile phone has been propped up to keep playing the video while the user goes off to do other things.
S103, responding to the state information, and determining video watching preference information matched with the state;
Specifically, the actual behavior state of the user is monitored dynamically to determine the corresponding video viewing preference in different states; if the user is running, it is determined that the user prefers videos whose picture is relatively still or that mainly feature a person speaking; if the user is holding the mobile phone, it is determined that the user prefers videos offering a better visual experience.
And S104, generating a target video set and recommending the target video set to a user in response to the video watching preference information.
In addition, the video watching requirement of the user can be accurately determined by combining the user portrait, so that a more accurate video recommending effect is achieved, and the video watching experience of the user is further improved.
Specifically, target video sets adapted to the viewing preference are generated; there may be multiple groups of target video sets, and each group contains several videos (for example, 4 videos). The number of target video sets and the number of videos in each set are determined by presets and may also be readjusted through user settings.
The video recommendation method of this embodiment is applicable both to the automatic playing mode and to the non-automatic playing mode.
In this scheme, by monitoring the interaction association information between the user and the user equipment, the actual behavior state of the user while watching the video can be analyzed reliably and the user's viewing preference can be determined accurately, so that the relevance of video recommendation is greatly increased, high-quality video content is recommended to the user, the accuracy of video recommendation is effectively improved, and the viewing experience of the user is improved. Meanwhile, the user portrait is not disturbed even after long-term playing, which ensures the accuracy of the user portrait data. In addition, the scheme helps to greatly improve key core indicators of the service such as video distribution volume and user playing duration.
In one embodiment, step S101 includes:
s1011, acquiring the motion state of the user equipment when the user watches the video, and/or acquiring the interaction information of the user and the user equipment.
In this scheme, gyroscope data are used to acquire the motion state of the user equipment while the video is being watched and to determine whether the user equipment is held in the hand or propped up; the interaction information between the user and the user equipment is used to determine whether the user touches the user equipment, for example by swiping, liking, or commenting. The actual state of the user can be obtained by considering the motion state of the user equipment and/or the interaction information alone, or the parameters of both dimensions can be considered together to determine the actual state jointly, which guarantees the feasibility and reliability of the analysis of the actual state and in turn the accuracy of video recommendation.
In addition, the accuracy of the user portrait can be improved to a certain extent by collecting the information of the user equipment, the user information, the interaction information of the user and the user equipment and the like, so that the accuracy of video recommendation is further improved.
In one embodiment, the interaction information includes at least one of the following:
touch operation information generated when the user touches the screen, and voice acquisition information and video acquisition information of the environment the user is in, collected by the user equipment.
Specifically, the voice acquisition information and/or the video acquisition information of the environment the user is in are collected by triggering the user equipment only after the user's feedback (that is, after authorization is granted).
Of course, the interaction information may also include other information that can characterize the interaction between the user and the user device, which is not described herein.
The touch operation information includes, but is not limited to, information of whether there is a touch, a touch duration, a touch direction, and the like.
The interaction between the user and the user equipment may be the user touching the screen to swipe, like, or comment on the video content, and may also be voice acquisition information, video acquisition information, and the like collected by the user equipment. Specifically, the server sends an information acquisition request to the user's application; after the user agrees to the request, the user equipment is triggered to turn on the microphone to collect audio in the scene the user is in and to turn on the camera to collect the corresponding video of the scene. Stored data such as scene images, scene videos, and in-scene audio associated with the user's other interaction scenes can also be used to capture various information about the environment the user is in. For example, if audio such as 'Welcome, please scan the QR code on your table' is present in the user's scene, it is determined that the user is in a restaurant; if a driver's seat is present in the user's scene, it is determined that the user is in a vehicle.
In this scheme, by acquiring the touch interaction between the user and the user equipment and having the user equipment collect related information on the user side, the user's state can be effectively identified, the relevance of video recommendation is increased, and the recommendation of high-quality video content to the user is ensured.
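By way of illustration only, the following sketch shows how such environment signals might be mapped to a scene label: it matches keywords in a hypothetical speech-recognition transcript and a list of detected objects against a hand-written rule table. The function name, the rule table, and the object labels are assumptions made for this sketch and are not part of the disclosure.

```python
def infer_scene(transcript: str, detected_objects: list[str]) -> str:
    """Toy scene inference from in-scene audio and camera detections.

    A real system would use trained audio/vision models rather than
    keyword matching; this rule table is illustrative only.
    """
    transcript = transcript.lower()
    objects = {obj.lower() for obj in detected_objects}

    # Audio cue: a greeting asking the guest to scan a QR code suggests a restaurant.
    if "scan the qr code" in transcript:
        return "restaurant"
    # Visual cue: a driver's seat in view suggests the user is in a vehicle.
    if "driver_seat" in objects or "steering_wheel" in objects:
        return "vehicle"
    return "unknown"


if __name__ == "__main__":
    print(infer_scene("Welcome, please scan the QR code on your table", []))  # restaurant
    print(infer_scene("", ["driver_seat"]))                                   # vehicle
```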
In one embodiment, step S102 includes:
s1021, analyzing and obtaining state information of the state of the user according to the motion state and the interaction information.
In this scheme, by further acquiring the touch interaction between the user and the user equipment to obtain related information on the user side, and combining it with the real-time gyroscope data of the user equipment, the real-time state of the user can be analyzed accurately, which ensures the accuracy of the subsequent video recommendation.
Specifically, as shown in fig. 2, an AI (artificial intelligence) model (MODEL) is trained on the server side from gyroscope data (D1) and the like, that is, model training is completed at the server, and the server then issues the trained AI model to the client (i.e., the user equipment). The user equipment collects the gyroscope data (D1), including the acceleration (AT) in each direction and the tilt angle (DTA) of the user equipment, and determines the state of the user equipment from these data and the trained AI model, including but not limited to: stationary, accelerating in a single direction, moving with the user in exercise (e.g., running), and moving with the user while walking.
The actual state of the user is determined by considering the gyroscope data (D1), the touch operation information (ST) and the like, so that video watching preference matched with the actual state is obtained, and a final video pushing result is determined.
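As a rough stand-in for the trained AI model mentioned above (whose architecture is not specified in the disclosure), the sketch below labels the device state from the per-direction acceleration (AT) and tilt angle (DTA) with hand-written thresholds; the numeric thresholds, the function name, and the label strings are illustrative assumptions only.

```python
import math

def classify_device_state(accel_xyz: tuple[float, float, float], tilt_deg: float) -> str:
    """Heuristic stand-in for the server-trained model that labels the device state.

    Returns one of: "stationary", "single_direction_accel", "multi_direction_accel".
    The 0.2 m/s^2 thresholds are illustrative; tilt_deg (DTA) is accepted to mirror
    the inputs named in the text but is not used by this toy rule.
    """
    ax, ay, az = (abs(a) for a in accel_xyz)
    magnitude = math.sqrt(ax**2 + ay**2 + az**2)
    active_axes = sum(1 for a in (ax, ay, az) if a > 0.2)

    if magnitude < 0.2:
        return "stationary"                  # e.g. propped on a stand or desktop
    if active_axes == 1:
        return "single_direction_accel"      # e.g. steady acceleration while driving
    return "multi_direction_accel"           # e.g. held while walking or running


if __name__ == "__main__":
    print(classify_device_state((0.05, 0.02, 0.03), tilt_deg=70.0))  # stationary
    print(classify_device_state((1.50, 0.05, 0.04), tilt_deg=30.0))  # single_direction_accel
```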
In one embodiment, step S1021 includes:
when the motion state indicates that the device is stationary and the interaction information indicates no interaction, it is concluded that the user has propped up the user equipment and is in a use state of listening to the audio;
In this case, the user has most likely placed the mobile phone on a stand, a desktop, or the like, and no longer operates it while the video plays.
When the motion state indicates that the device is stationary and the interaction information indicates interaction, it is concluded that the user has propped up the user equipment and is in a use state of watching the video;
In this case, the user has most likely placed the mobile phone on a stand, a desktop, or the like, and keeps using it while the video plays.
When the motion state indicates acceleration in different directions and the interaction information indicates interaction, it is concluded that the user is in a use state of holding the user equipment and watching the video;
In this case, the user is most likely walking.
When the motion state indicates acceleration in different directions and the interaction information indicates no interaction, it is concluded that the user is in motion and in a use state of listening to the audio;
In this case, the user is most likely exercising (e.g., running).
And when the motion state indicates acceleration in a single direction and the interaction information indicates no interaction, it is concluded that the user is in a driving state.
In this case, the user is most likely driving.
In this scheme, the different states of the user equipment and the interaction between the user and the user equipment are considered together; the different combinations correspond to different actual states of the user, which enables fine-grained identification and accurate analysis of the user's state, so that high-precision video recommendation is performed and the user's video viewing experience is effectively improved.
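The five cases of step S1021 amount to a small lookup table, and the sketch below encodes them directly; the state labels are informal shorthand introduced for this sketch, not terminology from the disclosure.

```python
def infer_user_state(motion_state: str, has_interaction: bool) -> str:
    """Map (device motion state, interaction present?) to the user's probable state,
    following the five cases of step S1021."""
    table = {
        ("stationary", False):             "propped_device_listening",  # phone on a stand, user doing other things
        ("stationary", True):              "propped_device_watching",   # phone on a stand, user still operating it
        ("multi_direction_accel", True):   "handheld_watching",         # most likely walking with phone in hand
        ("multi_direction_accel", False):  "in_motion_listening",       # most likely exercising, e.g. running
        ("single_direction_accel", False): "driving",
    }
    return table.get((motion_state, has_interaction), "unknown")


if __name__ == "__main__":
    print(infer_user_state("single_direction_accel", False))  # driving
    print(infer_user_state("stationary", True))               # propped_device_watching
```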
In one embodiment, step S103 includes:
when the user is in the use state in which the user equipment is stationary and the user is listening to the audio, it is determined that the user's preference for the auditory demand is higher than for the visual demand;
In this case, the tendency is to recommend to the user videos whose picture is relatively still or that mainly feature a person speaking, that is, videos in which the auditory channel is dominant.
When the user is in the use state in which the user equipment is stationary and the user is watching the video, it is determined that the user's preference for the visual demand is higher than for the auditory demand;
In this case, the tendency is to recommend to the user videos offering a better visual experience, that is, videos in which the visual channel is dominant.
When the user is in the use state of holding the user equipment and watching the video, it is determined that the user's preference for the visual demand is higher than for the auditory demand;
In this case, the tendency is to recommend to the user videos offering a better visual experience, that is, videos in which the visual channel is dominant.
When the user is in the use state of holding the user equipment and listening to the audio, it is determined that the user's preference for the auditory demand is higher than for the visual demand;
In this case, the tendency is to recommend to the user videos whose picture is relatively still or that mainly feature a person speaking, that is, videos in which the auditory channel is dominant.
When the user is in a driving state, it is determined that the user's preference for the auditory demand is higher than for the visual demand.
In this case, the tendency is to recommend to the user videos whose picture is relatively still or that mainly feature a person speaking, that is, videos in which the auditory channel is dominant;
in addition, information of other dimensions such as time can be combined to further filter the recommended videos; for example, when the driving time is 7:00 a.m., videos related to 'breakfast', 'traffic information', or news are recommended to the driving user, so that a more accurate video recommendation effect is achieved and the user's viewing experience is further improved.
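A minimal sketch of step S103, assuming the informal state labels from the previous sketch: each state is mapped to whether the auditory or the visual demand dominates, and a driving user in the early morning can additionally be steered toward breakfast/traffic topics. The topic names and the 6-9 a.m. window are assumptions for illustration.

```python
import datetime

AUDIO_DOMINANT_STATES = {"propped_device_listening", "in_motion_listening", "driving"}

def viewing_preference(user_state: str) -> str:
    """S103: states in which the user cannot watch closely prefer audio-dominant videos."""
    return "audio_dominant" if user_state in AUDIO_DOMINANT_STATES else "visual_dominant"

def extra_topic_filter(user_state: str, now: datetime.datetime) -> list[str]:
    """Optional refinement by time of day, e.g. a morning commute while driving."""
    if user_state == "driving" and 6 <= now.hour <= 9:
        return ["breakfast", "traffic information", "news"]
    return []


if __name__ == "__main__":
    print(viewing_preference("driving"))                                    # audio_dominant
    print(extra_topic_filter("driving", datetime.datetime(2023, 4, 6, 7)))  # morning topics
```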
In an embodiment, step S104 further includes:
when it is detected that the user is in a driving state, that there is interaction between the user and the user equipment, and that the interaction operation meets a preset condition, reminder information prompting safe driving is generated.
In this scheme, if the analysis in the video recommendation scenario finds that the user frequently operates the user equipment while driving, the user is promptly reminded to pay attention to driving safety, for example by recommending a video whose displayed content is relatively static and which contains audio reminding the user to drive safely; this further safeguards safe driving and improves the user experience.
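As an illustration of one possible reminder condition, the sketch below counts touch operations inside a sliding time window while the user is in the driving state and emits a safety prompt once a threshold is exceeded; the window length, the threshold, and the class name are assumptions.

```python
from collections import deque
from typing import Optional

class DrivingSafetyMonitor:
    """Emit a safe-driving reminder if the driving user touches the device too often.

    The 60-second window and the threshold of 3 touches are illustrative assumptions.
    """

    def __init__(self, window_s: float = 60.0, max_touches: int = 3):
        self.window_s = window_s
        self.max_touches = max_touches
        self._touch_times: deque = deque()

    def on_touch(self, t: float, user_state: str) -> Optional[str]:
        if user_state != "driving":
            return None
        self._touch_times.append(t)
        # Drop touches that fell out of the sliding window.
        while self._touch_times and t - self._touch_times[0] > self.window_s:
            self._touch_times.popleft()
        if len(self._touch_times) > self.max_touches:
            return "Please focus on driving; the video will keep playing automatically."
        return None


if __name__ == "__main__":
    mon = DrivingSafetyMonitor()
    msg = None
    for t in (0, 5, 10, 15, 20):
        msg = mon.on_touch(t, "driving")
    print(msg)  # reminder, since more than 3 touches fell within 60 s
```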
In one embodiment, the method further comprises:
when the video viewing preference information indicates that the user's preference for the auditory demand is higher than for the visual demand, reducing the distribution amount of video advertisements to a first set threshold;
when the video viewing preference information indicates that the user's preference for the visual demand is higher than for the auditory demand, increasing the distribution amount of video advertisements to a second set threshold.
The specific amounts by which the number of distributed video advertisements is reduced or increased can be predetermined or adjusted according to the actual situation.
In this scheme, the actual states of the user at different stages are analyzed accurately and in real time, and videos are recommended by matching the viewing preference of the corresponding state. Considering that, for video advertisements delivered within video recommendation, the promotion effect is better under videos where the visual demand is dominant, the distribution amount of video advertisements is adjusted flexibly and dynamically according to the user's state: when the user is driving, advertisement distribution is reduced, and when the user is holding the mobile phone and interacting with it, the advertisement distribution amount is increased or kept at its original level. This ensures the rationality of advertisement resource delivery, avoids unnecessary resource waste, and further optimizes the video recommendation scheme.
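A minimal sketch of the advertisement adjustment, assuming the preference labels from the earlier sketches: the distribution amount is lowered to a first threshold for an audio-dominant preference and raised to a second threshold for a visual-dominant preference. The concrete threshold values are placeholders, since the disclosure leaves them to presets or user adjustment.

```python
def adjust_ad_count(current_ads: int, preference: str,
                    first_threshold: int = 1, second_threshold: int = 4) -> int:
    """Adjust how many video ads are distributed, per the viewing preference.

    first_threshold / second_threshold stand in for the "first/second set threshold"
    of the disclosure; the concrete values 1 and 4 are assumptions.
    """
    if preference == "audio_dominant":
        # e.g. the user is driving: cut ads down to the first set threshold
        return min(current_ads, first_threshold)
    if preference == "visual_dominant":
        # e.g. the user is holding and interacting with the phone: raise (or keep) ads
        return max(current_ads, second_threshold)
    return current_ads


if __name__ == "__main__":
    print(adjust_ad_count(3, "audio_dominant"))   # 1
    print(adjust_ad_count(3, "visual_dominant"))  # 4
```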
In one embodiment, step S101 includes:
acquiring actual input information input by a user to user equipment when the user watches video;
wherein the actual input information includes at least one of the following information:
voice input information, video input information, and text input information.
In this scheme, while watching the video the user can directly input content such as voice, video, and text through the voice input, video input, and text input functions supported by the application; in other words, active, real-time, and dynamic feedback from the user is supported, so that videos are recommended in a targeted way, the video recommendation scheme is effectively optimized, the quality of the final video recommendation is guaranteed, and the user's video viewing experience is improved.
In one embodiment, step S102 includes:
s1022, analyzing and processing the actual input information to determine the state information of the state of the user.
In this scheme, content such as voice, video, and text input by the user is analyzed in time to determine the actual state of the user; that is, relying on the user's feedback further ensures the accuracy of determining the state information of the state the user is in, which guarantees the quality of the final video recommendation and improves the user's video viewing experience.
It should be noted that the two ways of obtaining the state information in steps S1021 and S1022 may be used as required, separately or in combination, to further improve the accuracy of determining the state information of the state the user is in; the specific manner can be predetermined or adjusted according to the actual scene requirements.
In an embodiment, step S104 further includes:
when it is detected that the user exits viewing while a current video in the target video set is playing, acquiring the playing completion rate of the current video;
and when the completion rate is greater than a third set threshold and the user enters viewing again, controlling playback to continue the current video or to go directly to the next target video set.
In this scheme, when the user exits viewing while watching any video in the video set generated based on the state information, the playing completion rate of the current video is automatically obtained. If the completion rate is high, it is determined that the user is interested in the current video, the playing position of the current video is retained, and the current video continues to play automatically the next time the user enters the application, so that the user can finish watching the video of interest; alternatively, the next target video set is pushed to the user directly, and the pre-generated videos in that target video set are played to the user in sequence, ensuring orderly and reliable video recommendation.
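A sketch of the exit handling just described, assuming a hypothetical record of where the user left off: if the completion rate of the abandoned video exceeds the third set threshold, playback resumes from the saved position on the next visit; otherwise the next pre-generated target video set is pushed. The 0.8 value stands in for the unspecified threshold.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class ExitRecord:
    video_id: str
    position_s: float      # where the user left off
    duration_s: float

    @property
    def completion_rate(self) -> float:
        return self.position_s / self.duration_s if self.duration_s else 0.0

def on_next_visit(record: Optional[ExitRecord], next_set: List[str],
                  third_threshold: float = 0.8) -> dict:
    """Decide what to play when the user comes back after exiting mid-video."""
    if record and record.completion_rate > third_threshold:
        # High completion rate: assume interest, resume the same video at the saved node.
        return {"action": "resume", "video_id": record.video_id, "from_s": record.position_s}
    # Otherwise push the next pre-generated target video set, played in order.
    return {"action": "play_next_set", "videos": next_set}


if __name__ == "__main__":
    rec = ExitRecord("v123", position_s=54.0, duration_s=60.0)
    print(on_next_visit(rec, ["v200", "v201", "v202", "v203"]))  # resume v123
```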
In an embodiment, step S104 further includes:
when it is detected that the user exits viewing while a current video in the target video set is playing, obtaining the user's exit reason information through analysis;
and updating the target video set based on the exit reason information, and performing video recommendation based on the updated target video set.
In this scheme, when the user exits viewing while watching any video in the video set generated based on the state information, the specific reason for the exit can be analyzed automatically, for example that the user no longer wants to watch the video and exits actively. Based on the specific exit reason, it is determined that the user does not like the current video or videos of the same type, and the videos in the generated target video set are updated, for example by removing videos in the set whose type is similar to that of the video the user exited, thereby optimizing the recommendation quality of the finally pushed target video set and ensuring the precision of video recommendation.
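The updating step can be read as filtering the pending target video set by the category of the abandoned video. The sketch below assumes each candidate carries a category tag and that an "active exit" signals dislike of that category; both the field names and the reason label are assumptions beyond what the disclosure specifies.

```python
from typing import Dict, List

def update_target_set(target_set: List[Dict], exited_video: Dict, exit_reason: str) -> List[Dict]:
    """Remove videos of the same category as an actively abandoned video.

    Each video is a dict like {"id": "v1", "category": "cooking"}; the category
    field and the "active_exit" reason label are illustrative assumptions.
    """
    if exit_reason != "active_exit":
        return target_set  # e.g. the app was closed by the system: keep the set as-is
    disliked = exited_video.get("category")
    return [v for v in target_set if v.get("category") != disliked]


if __name__ == "__main__":
    pending = [{"id": "v1", "category": "cooking"},
               {"id": "v2", "category": "travel"},
               {"id": "v3", "category": "cooking"}]
    print(update_target_set(pending, {"id": "v0", "category": "cooking"}, "active_exit"))
    # -> only the "travel" video remains
```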
It should be noted that, in the video recommendation scheme of this embodiment, all operations involving the collection of user-associated data are performed only with the user's authorization and permission, so that they are reasonable and lawful; the security of the user's own information is guaranteed while video recommendation is performed, further improving the user experience.
Example 2
As shown in fig. 3, the video recommendation apparatus of the present embodiment includes:
the interactive association information acquisition module 31 is configured to acquire interactive association information between a user and a user device when the user views a video;
Specifically, when the user watches a video with a video playing application, the user interacts with the user equipment by holding it in hand or by placing it on a support so that it stays fixed; in addition, the user can also interact with the user equipment through touch operations and the like. The user equipment includes, but is not limited to, a mobile phone and a tablet computer.
A state information obtaining module 32, configured to obtain state information of a state in which the user is located in response to the interaction related information;
Specifically, the interaction between the user and the user equipment is monitored in real time or at a preset monitoring frequency, and the most probable actual behavior state of the user is obtained, for example a driving state, or a state in which the mobile phone has been propped up to keep playing the video while the user goes off to do other things.
A viewing preference determination module 33 for determining video viewing preference information matching the state in response to the state information;
Specifically, the actual behavior state of the user is monitored dynamically to determine the corresponding video viewing preference in different states; if the user is running, it is determined that the user prefers videos whose picture is relatively still or that mainly feature a person speaking; if the user is holding the mobile phone, it is determined that the user prefers videos offering a better visual experience.
The video recommendation module 34 is configured to generate and recommend a target video set to a user in response to the video viewing preference information.
In addition, the video watching requirement of the user can be accurately determined by combining the user portrait, so that a more accurate video recommending effect is achieved, and the video watching experience of the user is further improved.
Specifically, target video sets adapted to the viewing preference are generated; there may be multiple groups of target video sets, and each group contains several videos (for example, 4 videos). The number of target video sets and the number of videos in each set are determined by presets and may also be readjusted through user settings.
The video recommendation scheme of this embodiment is applicable both to the automatic playing mode and to the non-automatic playing mode.
In this scheme, by monitoring the interaction association information between the user and the user equipment, the actual behavior state of the user while watching the video can be analyzed reliably and the user's viewing preference can be determined accurately, so that the relevance of video recommendation is greatly increased, high-quality video content is recommended to the user, the accuracy of video recommendation is effectively improved, and the viewing experience of the user is improved. Meanwhile, the user portrait is not disturbed even after long-term playing, which ensures the accuracy of the user portrait data. In addition, the scheme helps to greatly improve key core indicators of the service such as video distribution volume and user playing duration.
In an embodiment, the interactive related information obtaining module 31 is further configured to obtain a motion state of the user device when the user views the video, and/or interactive information between the user and the user device.
In this scheme, gyroscope data are used to acquire the motion state of the user equipment while the video is being watched and to determine whether the user equipment is held in the hand or propped up; the interaction information between the user and the user equipment is used to determine whether the user touches the user equipment, for example by swiping, liking, or commenting. The actual state of the user can be obtained by considering the motion state of the user equipment and/or the interaction information alone, or the parameters of both dimensions can be considered together to determine the actual state jointly, which guarantees the feasibility and reliability of the analysis of the actual state and in turn the accuracy of video recommendation.
In addition, the accuracy of the user portrait can be improved to a certain extent by collecting the information of the user equipment, the user information, the interaction information of the user and the user equipment and the like, so that the accuracy of video recommendation is further improved.
In one embodiment, the interaction information includes at least one of the following:
touch operation information generated when the user touches the screen, and voice acquisition information and video acquisition information of the environment the user is in, collected by the user equipment.
Specifically, the voice acquisition information and/or the video acquisition information of the environment the user is in are collected by triggering the user equipment only after the user's feedback (that is, after authorization is granted).
Of course, the interaction information may also include other information that can characterize the interaction between the user and the user device, which is not described herein.
The touch operation information includes, but is not limited to, information of whether there is a touch, a touch duration, a touch direction, and the like.
The interaction between the user and the user equipment may be the user touching the screen to swipe, like, or comment on the video content, and may also be voice acquisition information, video acquisition information, and the like collected by the user equipment. Specifically, the server sends an information acquisition request to the user's application; after the user agrees to the request, the user equipment is triggered to turn on the microphone to collect audio in the scene the user is in and to turn on the camera to collect the corresponding video of the scene. Stored data such as scene images, scene videos, and in-scene audio associated with the user's other interaction scenes can also be used to capture various information about the environment the user is in. For example, if audio such as 'Welcome, please scan the QR code on your table' is present in the user's scene, it is determined that the user is in a restaurant; if a driver's seat is present in the user's scene, it is determined that the user is in a vehicle.
In this scheme, by acquiring the touch interaction between the user and the user equipment and having the user equipment collect related information on the user side, the user's state can be effectively identified, the relevance of video recommendation is increased, and the recommendation of high-quality video content to the user is ensured.
In an embodiment, the status information obtaining module 32 is further configured to analyze status information of the status of the user according to the motion status and the interaction information.
In this scheme, by further acquiring the touch interaction between the user and the user equipment to obtain related information on the user side, and combining it with the real-time gyroscope data of the user equipment, the real-time state of the user can be analyzed accurately, which ensures the accuracy of the subsequent video recommendation.
Specifically, as shown in fig. 2, an AI (artificial intelligence) model (MODEL) is trained on the server side from gyroscope data (D1) and the like, that is, model training is completed at the server, and the server then issues the trained AI model to the client (i.e., the user equipment). The user equipment collects the gyroscope data (D1), including the acceleration (AT) in each direction and the tilt angle (DTA) of the user equipment, and determines the state of the user equipment from these data and the trained AI model, including but not limited to: stationary, accelerating in a single direction, moving with the user in exercise (e.g., running), and moving with the user while walking.
The actual state of the user is determined by considering the gyroscope data (D1), the touch operation information (ST) and the like, so that video watching preference matched with the actual state is obtained, and a final video pushing result is determined.
In an embodiment, the state information obtaining module 32 is further configured to conclude, when the motion state indicates that the device is stationary and the interaction information indicates no interaction, that the user has propped up the user equipment and is in a use state of listening to the audio;
In this case, the user has most likely placed the mobile phone on a stand, a desktop, or the like, and no longer operates it while the video plays.
When the motion state indicates that the device is stationary and the interaction information indicates interaction, it is concluded that the user has propped up the user equipment and is in a use state of watching the video;
In this case, the user has most likely placed the mobile phone on a stand, a desktop, or the like, and keeps using it while the video plays.
When the motion state indicates acceleration in different directions and the interaction information indicates interaction, it is concluded that the user is in a use state of holding the user equipment and watching the video;
In this case, the user is most likely walking.
When the motion state indicates acceleration in different directions and the interaction information indicates no interaction, it is concluded that the user is in motion and in a use state of listening to the audio;
In this case, the user is most likely exercising (e.g., running).
And when the motion state indicates acceleration in a single direction and the interaction information indicates no interaction, it is concluded that the user is in a driving state.
In this case, the user is most likely driving.
In this scheme, the different states of the user equipment and the interaction between the user and the user equipment are considered together; the different combinations correspond to different actual states of the user, which enables fine-grained identification and accurate analysis of the user's state, so that high-precision video recommendation is performed and the user's video viewing experience is effectively improved.
In an embodiment, the viewing preference determining module 33 is further configured to determine, when the user is in the use state in which the user equipment is stationary and the user is listening to the audio, that the user's preference for the auditory demand is higher than for the visual demand;
In this case, the tendency is to recommend to the user videos whose picture is relatively still or that mainly feature a person speaking, that is, videos in which the auditory channel is dominant.
When the user is in the use state in which the user equipment is stationary and the user is watching the video, it is determined that the user's preference for the visual demand is higher than for the auditory demand;
In this case, the tendency is to recommend to the user videos offering a better visual experience, that is, videos in which the visual channel is dominant.
When the user is in the use state of holding the user equipment and watching the video, it is determined that the user's preference for the visual demand is higher than for the auditory demand;
In this case, the tendency is to recommend to the user videos offering a better visual experience, that is, videos in which the visual channel is dominant.
When the user is in the use state of holding the user equipment and listening to the audio, it is determined that the user's preference for the auditory demand is higher than for the visual demand;
In this case, the tendency is to recommend to the user videos whose picture is relatively still or that mainly feature a person speaking, that is, videos in which the auditory channel is dominant.
When the user is in a driving state, it is determined that the user's preference for the auditory demand is higher than for the visual demand.
In this case, the tendency is to recommend to the user videos whose picture is relatively still or that mainly feature a person speaking, that is, videos in which the auditory channel is dominant; in addition, information of other dimensions such as time can be combined to further filter the recommended videos; for example, when the driving time is 7:00 a.m., videos related to 'breakfast', 'traffic information', or news are recommended to the driving user, so that a more accurate video recommendation effect is achieved and the user's viewing experience is further improved.
In one embodiment, the apparatus further comprises:
The safety reminding module is used for generating reminder information prompting safe driving when it is detected that the user is in a driving state, that there is interaction between the user and the user equipment, and that the interaction operation meets a preset condition.
In this scheme, if the analysis in the video recommendation scenario finds that the user frequently operates the user equipment while driving, the user is promptly reminded to pay attention to driving safety, for example by recommending a video whose displayed content is relatively static and which contains audio reminding the user to drive safely; this further safeguards safe driving and improves the user experience.
In one embodiment, the apparatus further comprises:
The distribution quantity reduction module is used for reducing the distribution amount of video advertisements to a first set threshold when the video viewing preference information indicates that the user's preference for the auditory demand is higher than for the visual demand;
and the distribution quantity increasing module is used for increasing the distribution amount of video advertisements to a second set threshold when the video viewing preference information indicates that the user's preference for the visual demand is higher than for the auditory demand.
The specific amounts by which the number of distributed video advertisements is reduced or increased can be predetermined or adjusted according to the actual situation.
In this scheme, the actual states of the user at different stages are analyzed accurately and in real time, and videos are recommended by matching the viewing preference of the corresponding state. Considering that, for video advertisements delivered within video recommendation, the promotion effect is better under videos where the visual demand is dominant, the distribution amount of video advertisements is adjusted flexibly and dynamically according to the user's state: when the user is driving, advertisement distribution is reduced, and when the user is holding the mobile phone and interacting with it, the advertisement distribution amount is increased or kept at its original level. This ensures the rationality of advertisement resource delivery, avoids unnecessary resource waste, and further optimizes the video recommendation scheme.
In an embodiment, the interactive related information obtaining module 31 is further configured to obtain actual input information input by the user to the user device when the user views the video;
wherein the actual input information includes at least one of the following information:
voice input information, video input information, and text input information.
In this scheme, while watching the video the user can directly input content such as voice, video, and text through the voice input, video input, and text input functions supported by the application; in other words, active, real-time, and dynamic feedback from the user is supported, so that videos are recommended in a targeted way, the video recommendation scheme is effectively optimized, the quality of the final video recommendation is guaranteed, and the user's video viewing experience is improved.
In one embodiment, the status information obtaining module 32 is further configured to analyze the actual input information to determine status information of the status of the user.
In this scheme, content such as voice, video, and text input by the user is analyzed in time to determine the actual state of the user; that is, relying on the user's feedback further ensures the accuracy of determining the state information of the state the user is in, which guarantees the quality of the final video recommendation and improves the user's video viewing experience.
In one embodiment, the apparatus further comprises:
the system comprises a complete playing rate acquisition module, a target video set and a target video set, wherein the complete playing rate acquisition module is used for acquiring the complete playing rate of the current video when the user is detected to exit watching when the current video in the target video set is played;
and the playing control module is used for controlling to continuously play the current video or directly play the next target video set when the playing rate is larger than a third set threshold value and the user enters to watch again.
In this scheme, when the user exits viewing while watching any video in the video set generated based on the state information, the playing completion rate of the current video is automatically obtained. If the completion rate is high, it is determined that the user is interested in the current video, the playing position of the current video is retained, and the current video continues to play automatically the next time the user enters the application, so that the user can finish watching the video of interest; alternatively, the next target video set is pushed to the user directly, and the pre-generated videos in that target video set are played to the user in sequence, ensuring orderly and reliable video recommendation.
In one embodiment, the apparatus further comprises:
The reason information analysis module is used for obtaining the user's exit reason information through analysis when it is detected that the user exits viewing while a current video in the target video set is playing;
and the updating module is used for updating the target video set based on the exit reason information and calling the video recommendation module 34 to perform video recommendation based on the updated target video set.
In this scheme, when the user exits viewing while watching any video in the video set generated based on the state information, the specific reason for the exit can be analyzed automatically, for example that the user no longer wants to watch the video and exits actively. Based on the specific exit reason, it is determined that the user does not like the current video or videos of the same type, and the videos in the generated target video set are updated, for example by removing videos in the set whose type is similar to that of the video the user exited, thereby optimizing the recommendation quality of the finally pushed target video set and ensuring the precision of video recommendation.
It should be noted that, in the video recommendation scheme of this embodiment, all operations involving the collection of user-associated data are performed only with the user's authorization and permission, so that they are reasonable and lawful; the security of the user's own information is guaranteed while video recommendation is performed, further improving the user experience.
Example 3
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 4 illustrates a schematic block diagram of an example electronic device 400 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 4, the apparatus 400 includes a computing unit 401 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 402 or a computer program loaded from a storage unit 408 into a Random Access Memory (RAM) 403. In RAM 403, various programs and data required for the operation of device 400 may also be stored. The computing unit 401, ROM 402, and RAM 403 are connected to each other by a bus 404. An input/output (I/O) interface 405 is also connected to bus 404.
Various components in device 400 are connected to I/O interface 405, including: an input unit 406 such as a keyboard, a mouse, etc.; an output unit 407 such as various types of displays, speakers, and the like; a storage unit 408, such as a magnetic disk, optical disk, etc.; and a communication unit 409 such as a network card, modem, wireless communication transceiver, etc. The communication unit 409 allows the device 400 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
The computing unit 401 may be a variety of general purpose and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 401 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 401 performs the various methods and processes described above, such as the methods described above. For example, in some embodiments, the methods described above may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 408. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 400 via the ROM 402 and/or the communication unit 409. When a computer program is loaded into RAM 403 and executed by computing unit 401, one or more steps of the above-described method may be performed. Alternatively, in other embodiments, the computing unit 401 may be configured to perform the above-described methods by any other suitable means (e.g. by means of firmware).
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuit systems, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs, where the one or more computer programs may be executed and/or interpreted on a programmable system including at least one programmable processor, which may be a special purpose or general-purpose programmable processor, and which may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out methods of the present disclosure may be written in any combination of one or more programming languages. These program code may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowchart and/or block diagram to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, a server of a distributed system, or a server incorporating a blockchain.
It should be appreciated that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (45)

1. A video recommendation method, the method comprising:
acquiring interaction association information between a user and user equipment while the user watches a video;
acquiring, in response to the interaction association information, state information of a state in which the user is located;
determining, in response to the state information, video viewing preference information matching the state;
and generating, in response to the video viewing preference information, a target video set and recommending the target video set to the user.
2. The video recommendation method of claim 1, wherein the step of acquiring interaction association information between the user and the user equipment while the user watches the video comprises:
acquiring a motion state of the user equipment while the user watches the video, and/or acquiring interaction information between the user and the user equipment.
3. The video recommendation method of claim 2, wherein the interaction information includes at least one of the following:
voice collection information and video collection information, collected by the user equipment, of an environment where the user is located.
4. The video recommendation method of claim 3, wherein the voice collection information and/or the video collection information of the environment where the user is located is collected by the user equipment after being triggered by feedback from the user.
5. The video recommendation method of claim 3, wherein the step of acquiring state information of the state in which the user is located in response to the interaction association information comprises:
analyzing the motion state and the interaction information to obtain the state information of the state in which the user is located.
6. The video recommendation method of claim 1, wherein the step of acquiring interaction association information between the user and the user equipment while the user watches the video comprises:
acquiring actual input information input by the user to the user equipment while the user watches the video;
wherein the actual input information includes at least one of the following information:
voice input information, video input information, and text input information.
7. The video recommendation method of claim 6, wherein the step of acquiring state information of the state in which the user is located in response to the interaction association information comprises:
analyzing and processing the actual input information to determine the state information of the state in which the user is located.
8. The video recommendation method of claim 5, wherein the step of analyzing the state information of the state in which the user is located according to the motion state and the interaction information comprises:
when the motion state indicates that the user equipment is stationary and the interaction information indicates that there is no interaction, determining by analysis that the user is in a use state in which the user equipment is left stationary and audio is being listened to.
9. The video recommendation method of claim 5, wherein the step of analyzing the state information of the state in which the user is located according to the motion state and the interaction information comprises:
when the motion state indicates that the user equipment is stationary and the interaction information indicates that there is interaction, determining by analysis that the user is in a use state in which the user equipment is left stationary and video is being viewed.
10. The video recommendation method of claim 5, wherein the step of analyzing the state information of the state in which the user is located according to the motion state and the interaction information comprises:
when the motion state indicates acceleration in different directions and the interaction information indicates that there is interaction, determining by analysis that the user is in a use state of holding the user equipment to view video.
11. The video recommendation method of claim 5, wherein the step of analyzing the state information of the state in which the user is located according to the motion state and the interaction information comprises:
when the motion state indicates acceleration in different directions and the interaction information indicates that there is no interaction, determining by analysis that the user is in a use state of being in motion and listening to audio.
12. The video recommendation method of claim 5, wherein the step of analyzing the state information of the state in which the user is located according to the motion state and the interaction information comprises:
when the motion state indicates acceleration in a single direction and the interaction information indicates that there is no interaction, determining by analysis that the user is in a driving state.
13. The video recommendation method of claim 8, wherein the step of determining video viewing preference information matching the state in response to the state information comprises:
when the user is in the use state in which the user equipment is left stationary and audio is being listened to, determining that a preference for an auditory demand of the user is higher than for a visual demand.
14. The video recommendation method of claim 9, wherein the step of determining video viewing preference information matching the state in response to the state information comprises:
when the user is in the use state in which the user equipment is left stationary and video is being viewed, determining that a preference for a visual demand of the user is higher than for an auditory demand.
15. The video recommendation method of claim 10, wherein the step of determining video viewing preference information matching the state in response to the state information comprises:
when the user is in the use state of holding the user equipment to view video, determining that a preference for a visual demand of the user is higher than for an auditory demand.
16. The video recommendation method of claim 11, wherein the step of determining video viewing preference information matching the state in response to the state information comprises:
when the user is in the use state of holding the user equipment and listening to audio, determining that a preference for an auditory demand of the user is higher than for a visual demand.
17. The video recommendation method of claim 12, wherein the step of determining video viewing preference information matching the state in response to the state information comprises:
when the user is in the driving state, determining that a preference for an auditory demand of the user is higher than for a visual demand.
18. The video recommendation method of claim 12 or 17, further comprising, after the step of determining by analysis that the user is in a driving state:
generating reminder information prompting safe driving when it is detected that the user is in the driving state, interaction exists between the user and the user equipment, and the interaction operation meets a preset condition.
19. The video recommendation method of claim 1, the method further comprising:
when the video viewing preference information indicates that the preference for the auditory demand of the user is higher than for the visual demand, reducing a distribution quantity of video advertisements to a first set threshold;
and when the video viewing preference information indicates that the preference for the visual demand of the user is higher than for the auditory demand, increasing the distribution quantity of the video advertisements to a second set threshold.
20. The video recommendation method of any one of claims 1-17, further comprising, after the step of generating and recommending a target video set to the user:
when it is detected that the user exits viewing while a current video in the target video set is being played, acquiring a complete playing rate of the current video;
and when the complete playing rate is greater than a third set threshold and the user resumes viewing, controlling playback to continue the current video or to proceed directly to a next video in the target video set.
21. The video recommendation method of any one of claims 1-17, further comprising, after the step of generating and recommending a target video set to the user:
when it is detected that the user exits viewing while a current video in the target video set is being played, analyzing to obtain exit reason information of the user;
and updating the target video set based on the exit reason information, and performing video recommendation based on the updated target video set.
22. A video recommendation device, the device comprising:
an interaction association information acquisition module, configured to acquire interaction association information between a user and user equipment while the user watches a video;
a state information acquisition module, configured to acquire, in response to the interaction association information, state information of a state in which the user is located;
a viewing preference determination module, configured to determine, in response to the state information, video viewing preference information matching the state;
and a video recommendation module, configured to generate, in response to the video viewing preference information, a target video set and recommend the target video set to the user.
23. The video recommendation device of claim 22, wherein the interaction association information acquisition module is further configured to acquire a motion state of the user equipment while the user watches the video, and/or interaction information between the user and the user equipment.
24. The video recommendation device of claim 23, wherein the interaction information comprises at least one of:
voice collection information and video collection information, collected by the user equipment, of an environment where the user is located.
25. The video recommendation device of claim 24, wherein the voice collection information and/or the video collection information of the environment in which the user is located is collected by the user equipment after being triggered by feedback from the user.
26. The video recommendation device of claim 24, wherein the state information acquisition module is further configured to analyze the motion state and the interaction information to obtain the state information of the state in which the user is located.
27. The video recommendation device of claim 26, wherein the interaction association information acquisition module is further configured to acquire actual input information input by the user to the user equipment while the user watches the video;
wherein the actual input information includes at least one of the following information:
voice input information, video input information, and text input information.
28. The video recommendation device of claim 27, wherein the state information acquisition module is further configured to analyze the actual input information to determine the state information of the state in which the user is located.
29. The video recommendation device of claim 26, wherein the state information acquisition module is further configured to determine by analysis that the user is in a use state in which the user equipment is left stationary and audio is being listened to, when the motion state indicates that the user equipment is stationary and the interaction information indicates that there is no interaction.
30. The video recommendation device of claim 26, wherein the state information acquisition module is further configured to determine by analysis that the user is in a use state in which the user equipment is left stationary and video is being viewed, when the motion state indicates that the user equipment is stationary and the interaction information indicates that there is interaction.
31. The video recommendation device of claim 26, wherein the state information acquisition module is further configured to determine by analysis that the user is in a use state of holding the user equipment to view video, when the motion state indicates acceleration in different directions and the interaction information indicates that there is interaction.
32. The video recommendation device of claim 26, wherein the state information acquisition module is further configured to determine by analysis that the user is in a use state of being in motion and listening to audio, when the motion state indicates acceleration in different directions and the interaction information indicates that there is no interaction.
33. The video recommendation device of claim 26, wherein the state information acquisition module is further configured to determine by analysis that the user is in a driving state when the motion state indicates acceleration in a single direction and the interaction information indicates that there is no interaction.
34. The video recommendation device of claim 29, wherein the viewing preference determination module is further configured to determine that a preference for an auditory demand of the user is higher than for a visual demand when the user is in the use state in which the user equipment is left stationary and audio is being listened to.
35. The video recommendation device of claim 30, wherein the viewing preference determination module is further configured to determine that a preference for a visual demand of the user is higher than for an auditory demand when the user is in the use state in which the user equipment is left stationary and video is being viewed.
36. The video recommendation device of claim 31, wherein the viewing preference determination module is further configured to determine that a preference for a visual demand of the user is higher than for an auditory demand when the user is in the use state of holding the user equipment to view video.
37. The video recommendation device of claim 32, wherein the viewing preference determination module is further configured to determine that a preference for an auditory demand of the user is higher than for a visual demand when the user is in the use state of holding the user equipment and listening to audio.
38. The video recommendation device of claim 33, wherein the viewing preference determination module is further configured to determine that a preference for an auditory demand of the user is higher than for a visual demand when the user is in the driving state.
39. The video recommendation device of claim 33 or 38, the device further comprising:
a safety reminder module, configured to generate reminder information prompting safe driving when it is detected that the user is in the driving state, interaction exists between the user and the user equipment, and the interaction operation meets a preset condition.
40. The video recommendation device of claim 22, the device further comprising:
a distribution quantity reducing module, configured to reduce a distribution quantity of video advertisements to a first set threshold when the video viewing preference information indicates that the preference for the auditory demand of the user is higher than for the visual demand;
and a distribution quantity increasing module, configured to increase the distribution quantity of the video advertisements to a second set threshold when the video viewing preference information indicates that the preference for the visual demand of the user is higher than for the auditory demand.
41. The video recommendation device of any one of claims 22 to 38, further comprising:
a complete playing rate acquisition module, configured to acquire a complete playing rate of the current video when it is detected that the user exits viewing while a current video in the target video set is being played;
and a playing control module, configured to control playback to continue the current video or to proceed directly to a next video in the target video set when the complete playing rate is greater than a third set threshold and the user resumes viewing.
42. The video recommendation device of any one of claims 22 to 38, further comprising:
an exit reason information analysis module, configured to analyze and obtain exit reason information of the user when it is detected that the user exits viewing while a current video in the target video set is being played;
and an updating module, configured to update the target video set based on the exit reason information and invoke the video recommendation module to perform video recommendation based on the updated target video set.
43. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-21.
44. A non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method of any one of claims 1-21.
45. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-21.
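
To make the claimed pipeline easier to follow, the sketch below restates claims 1 and 8-17 as a minimal Python example: interaction association information is classified into a user state from the motion state and the presence of interaction, the state is mapped to an auditory or visual preference, and a target video set is generated from that preference. All identifiers (the state labels, the score fields, the function names) are hypothetical illustrations and are not defined by the patent.

```python
from dataclasses import dataclass
from typing import Dict, List, Optional

# Hypothetical labels for the motion states recited in claims 8-12.
STATIONARY = "stationary"                          # the user equipment is at rest
MULTI_DIRECTION_ACCEL = "multi_direction_accel"    # acceleration in different directions
SINGLE_DIRECTION_ACCEL = "single_direction_accel"  # acceleration in a single direction


@dataclass
class InteractionAssociationInfo:
    motion_state: str                    # one of the labels above (claim 2)
    has_interaction: bool                # whether the user interacted with the equipment (claim 2)
    actual_input: Optional[str] = None   # voice/video/text input, if any (claim 6)


def classify_user_state(info: InteractionAssociationInfo) -> str:
    """Map motion state and interaction information to a user state (claims 8-12)."""
    if info.motion_state == STATIONARY:
        return "stationary_video_viewing" if info.has_interaction else "stationary_audio_listening"
    if info.motion_state == MULTI_DIRECTION_ACCEL:
        return "handheld_video_viewing" if info.has_interaction else "in_motion_audio_listening"
    if info.motion_state == SINGLE_DIRECTION_ACCEL and not info.has_interaction:
        return "driving"
    return "unknown"


def viewing_preference(state: str) -> str:
    """Map a user state to an 'auditory' or 'visual' preference (claims 13-17)."""
    auditory_states = {"stationary_audio_listening", "in_motion_audio_listening", "driving"}
    return "auditory" if state in auditory_states else "visual"


def generate_target_video_set(preference: str, candidates: List[Dict], size: int = 10) -> List[Dict]:
    """Rank candidates by how well they match the preference (claim 1, final step).

    Each candidate is assumed to carry hypothetical 'audio_score' and
    'visual_score' fields produced elsewhere by content analysis.
    """
    key = "audio_score" if preference == "auditory" else "visual_score"
    return sorted(candidates, key=lambda v: v.get(key, 0.0), reverse=True)[:size]


def recommend(info: InteractionAssociationInfo, candidates: List[Dict]) -> List[Dict]:
    """Claim 1 pipeline: interaction information -> state -> preference -> target video set."""
    return generate_target_video_set(viewing_preference(classify_user_state(info)), candidates)


if __name__ == "__main__":
    driving = InteractionAssociationInfo(motion_state=SINGLE_DIRECTION_ACCEL, has_interaction=False)
    videos = [{"id": 1, "audio_score": 0.9, "visual_score": 0.2},
              {"id": 2, "audio_score": 0.1, "visual_score": 0.8}]
    print(recommend(driving, videos))  # the audio-friendly video ranks first for a driving user
```

In practice the motion state would typically come from the equipment's motion sensors and the interaction flag from touch or input events, as the acquisition steps of claims 2-7 describe.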
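
Claims 18 and 19 add two reactions to the determined state and preference: a safe-driving reminder when a driving user keeps interacting with the equipment, and raising or lowering the number of distributed video advertisements between two set thresholds. A minimal sketch follows; the preset interaction condition and the two threshold values are assumptions, since the claims leave them unspecified.

```python
from typing import Optional


def safe_driving_reminder(is_driving: bool, has_interaction: bool,
                          interaction_count: int, preset_count: int = 3) -> Optional[str]:
    """Claim 18: remind a driving user who keeps interacting with the equipment.

    'interaction_count >= preset_count' stands in for the claim's unspecified
    'preset condition'.
    """
    if is_driving and has_interaction and interaction_count >= preset_count:
        return "Please focus on driving safely."
    return None


def adjust_ad_quantity(preference: str, first_threshold: int = 1, second_threshold: int = 5) -> int:
    """Claim 19: fewer video advertisements for an auditory preference, more for a visual one.

    The two defaults are placeholders for the claim's first and second set thresholds.
    """
    return first_threshold if preference == "auditory" else second_threshold
```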
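
Claim 20 handles a user who exits during a video and later returns: if the current video's complete playing rate exceeds a third set threshold, playback may either continue the current video or proceed directly to the next video in the target video set. The sketch below chooses to proceed to the next video; that choice, like the names used, is an assumption rather than something the claim mandates.

```python
from typing import List


def on_viewer_returns(complete_playing_rate: float, third_threshold: float,
                      current_video_id: int, target_video_ids: List[int]) -> int:
    """Claim 20: choose what to play when a user who exited mid-video comes back."""
    if complete_playing_rate > third_threshold and current_video_id in target_video_ids:
        idx = target_video_ids.index(current_video_id)
        # Wrap around if the abandoned video was the last one in the set.
        return target_video_ids[(idx + 1) % len(target_video_ids)]
    # Below the threshold, resume the video the user left.
    return current_video_id
```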
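
Claim 21 updates the target video set from the analysed exit reason and recommends from the updated set. The following sketch illustrates one possible update rule; the exit-reason labels and the filtering logic are hypothetical and are not prescribed by the claim.

```python
from typing import Dict, List


def update_target_set_on_exit(exit_reason: str, target_videos: List[Dict]) -> List[Dict]:
    """Claim 21: refresh the target video set based on why the user stopped watching."""
    if not target_videos:
        return target_videos
    if exit_reason == "disliked_topic":
        # Drop the abandoned video and everything sharing its topic.
        current_topic = target_videos[0].get("topic")
        return [v for v in target_videos[1:] if v.get("topic") != current_topic]
    if exit_reason == "interrupted":
        # Nothing about the content was rejected; keep the set unchanged.
        return target_videos
    # Default: simply drop the video that was abandoned.
    return target_videos[1:]
```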
CN202310370902.3A 2023-04-07 2023-04-07 Video recommendation method, device, electronic equipment and storage medium Active CN116405736B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310370902.3A CN116405736B (en) 2023-04-07 2023-04-07 Video recommendation method, device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310370902.3A CN116405736B (en) 2023-04-07 2023-04-07 Video recommendation method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116405736A true CN116405736A (en) 2023-07-07
CN116405736B CN116405736B (en) 2024-03-08

Family

ID=87017513

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310370902.3A Active CN116405736B (en) 2023-04-07 2023-04-07 Video recommendation method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116405736B (en)

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1863396A (en) * 2006-02-22 2006-11-15 华为技术有限公司 Method for subscribing to and buying objects in mobile broadcasting multicast service
CN104376039A (en) * 2014-10-10 2015-02-25 安徽华米信息科技有限公司 Network content pushing method, device and system
US20160105520A1 (en) * 2014-10-10 2016-04-14 Anhui Huami Information Technology Co., Ltd. Method, apparatus, and system for pushing network content
CN105245956A (en) * 2015-09-30 2016-01-13 上海车音网络科技有限公司 Audio and video data recommendation method, device and system
CN106649582A (en) * 2016-11-17 2017-05-10 北京小米移动软件有限公司 Recommendation method and device
CN111026912A (en) * 2019-12-04 2020-04-17 广州市易杰数码科技有限公司 Collaborative recommendation method and device based on IPTV, computer equipment and storage medium
CN111726691A (en) * 2020-07-03 2020-09-29 北京字节跳动网络技术有限公司 Video recommendation method and device, electronic equipment and computer-readable storage medium

Also Published As

Publication number Publication date
CN116405736B (en) 2024-03-08

Similar Documents

Publication Publication Date Title
US10142279B2 (en) Method and system for presenting a listing of message logs
US11817099B2 (en) Systems, methods, and apparatuses for resuming dialog sessions via automated assistant
US10439918B1 (en) Routing messages to user devices
CN107992604B (en) Task item distribution method and related device
JP5881647B2 (en) Determination device, determination method, and determination program
US20230410815A1 (en) Transcription generation technique selection
US9894114B2 (en) Adjusting the display of social media updates to varying degrees of richness based on environmental conditions and importance of the update
US20220036427A1 (en) Method for managing immersion level and electronic device supporting same
CN116405736B (en) Video recommendation method, device, electronic equipment and storage medium
CN111835617B (en) User head portrait adjusting method and device and electronic equipment
CN114898755B (en) Voice processing method and related device, electronic equipment and storage medium
CN115118820A (en) Call processing method and device, computer equipment and storage medium
CN114666643A (en) Information display method and device, electronic equipment and storage medium
CN114461101A (en) Message selection method, device and equipment
US20140257791A1 (en) Apparatus and method for auto-generation of journal entries
CN113556649A (en) Broadcasting control method and device of intelligent sound box
CN111309230A (en) Information display method and device, electronic equipment and computer readable storage medium
CN115103237B (en) Video processing method, device, equipment and computer readable storage medium
CN112836127B (en) Method and device for recommending social users, storage medium and electronic equipment
CN112102821B (en) Data processing method, device, system and medium applied to electronic equipment
CN113722532A (en) Audio discussion guiding method and device and computer equipment
WO2024080970A1 (en) Emotion state monitoring
CN115221444A (en) Data processing method and device, electronic equipment and storage medium
WO2023244308A1 (en) Conference queue auto arrange for inclusion
CN114286120A (en) Live broadcast room sharing method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant