CN110225398B - Multimedia object playing method, device and equipment and computer storage medium - Google Patents

Multimedia object playing method, device and equipment and computer storage medium Download PDF

Info

Publication number
CN110225398B
CN110225398B CN201910452836.8A
Authority
CN
China
Prior art keywords
playing
user
information
video
sub
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910452836.8A
Other languages
Chinese (zh)
Other versions
CN110225398A (en)
Inventor
俄万有
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN201910452836.8A priority Critical patent/CN110225398B/en
Publication of CN110225398A publication Critical patent/CN110225398A/en
Application granted granted Critical
Publication of CN110225398B publication Critical patent/CN110225398B/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/4508Management of client data or end-user data
    • H04N21/4532Management of client data or end-user data involving end-user characteristics, e.g. viewer profile, preferences
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/47End-user applications
    • H04N21/472End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content
    • H04N21/47217End-user interface for requesting content, additional data or services; End-user interface for interacting with content, e.g. for content reservation or setting reminders, for requesting event notification, for manipulating displayed content for controlling playback functions for recorded or on-demand content, e.g. using progress bars, mode or play-point indicators or bookmarks

Landscapes

  • Engineering & Computer Science (AREA)
  • Databases & Information Systems (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The application discloses a multimedia object playing method, apparatus, device and computer storage medium for controlling the playing of a playing object according to playing control information generated from user characteristic information and object characteristic information, so as to realize automatic playing control of the playing object. The method comprises the following steps: in response to an operation indicating that a selected playing object is to be played in an object player, acquiring the playing resource and the playing control information of the playing object selected by the operation, the playing control information being generated based on the user characteristic information of the user logged into the object player and the object characteristic information of the playing object; and playing the playing object in the object player according to the playing resource and the playing control information.

Description

Multimedia object playing method, device and equipment and computer storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a method, an apparatus, and a device for playing a multimedia object and a computer storage medium.
Background
With the continuous development of internet technology, multimedia content grows ever more abundant, and large amounts of music and video go online every day. Taking video as an example, a video may contain content that interests the user, which the user will choose to watch carefully, as well as content that does not, which the user will choose to fast-forward past. Fast-forwarding requires the user to manually drag the progress bar to adjust which video content is played. However, the user usually does not know the subsequent video content and therefore does not know where the progress bar should be moved; it generally takes many attempts before the progress bar lands on a satisfactory position. The operation is thus cumbersome, the user's viewing time is wasted, and the user experience is poor.
Disclosure of Invention
The embodiments of the application provide a multimedia object playing method, apparatus, device and computer storage medium for controlling the playing of a playing object according to playing control information generated from user characteristic information and object characteristic information, thereby realizing automatic playing control of the playing object.
In one aspect, a multimedia object playing method is provided, and the method includes:
in response to an operation indicating that a selected playing object is to be played in the object player, acquiring the playing resource and the playing control information of the playing object selected by the operation, the playing control information being generated based on the user characteristic information of the user logged into the object player and the object characteristic information of the playing object;
and playing the playing object in the object player according to the playing resource and the playing control information.
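Although the claims are implementation-agnostic, the two claimed steps can be sketched in Python as follows. All names here (`backend`, `PlayControlInfo`, `play_segment`, the per-segment-speed shape of the control information) are hypothetical illustrations, not part of the patent:

```python
from dataclasses import dataclass

@dataclass
class PlayControlInfo:
    # Hypothetical shape: per-segment playback speeds derived from the
    # logged-in user's features and the playing object's features.
    segment_speeds: dict  # {(start_s, end_s): speed}

def on_play_selected(object_id: str, user_id: str, backend) -> list:
    """Respond to the operation selecting a playing object in the player:
    fetch the playing resource and the pre-generated playing control
    information, then play the object according to both."""
    resource = backend.get_play_resource(object_id)
    control = backend.get_play_control_info(user_id, object_id)
    played = []
    for (start, end), speed in sorted(control.segment_speeds.items()):
        # Play segment [start, end) of the resource at the chosen speed.
        played.append(resource.play_segment(start, end, speed))
    return played
```

The point of the sketch is the division of labor: the player only applies the control information; generating it from user and object features happens elsewhere (server side or on-device), as described below.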
In one aspect, a multimedia object playing apparatus is provided, the apparatus comprising:
an acquisition unit, configured to respond to an operation indicating that a selected playing object is to be played in an object player, and to acquire the playing resource and the playing control information of the playing object selected by the operation, the playing control information being generated based on the user characteristic information of the user logged into the object player and the object characteristic information of the playing object;
and the execution unit is used for playing the playing object in the object player according to the playing resource and the playing control information.
Optionally, the obtaining unit is specifically configured to:
acquiring object characteristic information of the playing object and acquiring user characteristic information of the login user;
performing characteristic combination on the object sub-characteristics included in the object characteristic information and the login user sub-characteristics included in the user characteristic information to obtain at least one combined sub-characteristic;
acquiring at least one piece of sub-control information based on the mapping relation between the combined sub-features and the sub-control information;
and generating the playing control information according to the at least one piece of sub-control information.
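A minimal sketch of this combination step, assuming sub-features are simple labels and the mapping relation is a dictionary (all names and the label representation are hypothetical):

```python
from itertools import product

def generate_play_control_info(object_sub_features, user_sub_features, mapping):
    """Cross every object sub-feature with every user sub-feature to form
    combined sub-features, look each one up in the mapping relation, and
    aggregate the matched sub-control information pieces into the playing
    control information."""
    combined = product(object_sub_features, user_sub_features)
    return [mapping[pair] for pair in combined if pair in mapping]
```

Unmatched combinations simply contribute no sub-control information; the aggregate of matched pieces is the playing control information returned to the player.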
Optionally, when the playing object is a video, the obtaining unit is specifically configured to:
constructing a video tube region for each of at least one candidate object in the video, and extracting the environmental features of each video tube region's bounding boxes;
classifying each candidate object according to the environmental features of its video tube region's bounding boxes to obtain the candidate object's classification label;
and generating the object feature information of the video according to each candidate object's classification label and each video tube region's serialized features.
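The tube-region steps above can be sketched structurally as follows; `extract_env_features` and `classify` stand in for the real feature extractor and classification model, and every name is a hypothetical placeholder rather than the patent's implementation:

```python
from dataclasses import dataclass, field

@dataclass
class VideoTubeRegion:
    # One bounding box (frame_index, x, y, w, h) per frame in which the
    # candidate object appears.
    boxes: list = field(default_factory=list)

def video_object_features(tubes, extract_env_features, classify):
    """For each candidate object's tube region: pool the environmental
    features of its bounding boxes, classify the candidate to obtain its
    label, and pair the label with the tube's serialized feature."""
    features = []
    for tube in tubes:
        env = extract_env_features(tube)   # bounding-box environment features
        label = classify(env)              # classification label
        features.append({"label": label, "sequence": tube.boxes})
    return features
```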
Optionally, the user feature information includes one or a combination of at least two of the following:
extracting the value of first-class user characteristic information from the registration information of the login user;
obtaining the value of second-class user characteristic information by performing data analysis on the registration information and the historical operation data of the login user through a preset statistical rule;
and classifying the registration information of the login user and historical operation data through a pre-trained classification model to obtain the value of the third-class user characteristic information.
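The three classes of user features can be assembled as sketched below, where `stat_rules` stands in for the preset statistical rules and `model` for the pre-trained classification model; the field names are hypothetical examples:

```python
def build_user_feature_info(registration, history, stat_rules, model):
    features = {}
    # Class 1: values extracted directly from the registration information.
    features["gender"] = registration.get("gender")
    features["age"] = registration.get("age")
    # Class 2: preset statistical rules over registration + operation history.
    for name, rule in stat_rules.items():
        features[name] = rule(registration, history)
    # Class 3: a pre-trained classification model over the same inputs.
    features.update(model(registration, history))
    return features
```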
Optionally, the obtaining unit is specifically configured to:
acquiring state information of the object player; wherein the state information is used for indicating whether the object player starts an automatic playing control function or not;
and when the state information indicates that the object player starts an automatic playing control function, acquiring the playing resource and the playing control information of the playing object selected by the operation.
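The gating described above can be sketched as follows (the state-flag name and backend interface are hypothetical):

```python
def handle_play_operation(player_state, user_id, object_id, backend):
    """Only fetch the playing control information when the player's
    automatic playing control function is switched on; otherwise fall
    back to ordinary playback with no control information."""
    resource = backend.get_play_resource(object_id)
    if player_state.get("auto_play_control", False):
        control = backend.get_play_control_info(user_id, object_id)
    else:
        control = None
    return resource, control
```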
In one aspect, a computer device is provided, comprising a memory, a processor and a computer program stored on the memory and executable on the processor, the processor implementing the method of the above aspect when executing the program.
In one aspect, a computer-readable storage medium is provided that stores processor-executable instructions for performing the method of the above aspect.
In the embodiments of the application, when the object player receives an operation in which the user indicates that a certain playing object should be played, it obtains the playing resource of that playing object together with playing control information generated from the user's characteristic information and the playing object's characteristic information, so that while playing the object the player can automatically control its playing according to the playing control information. Because the playing control information is generated from each individual user's characteristics and the characteristics of the playing object, it better matches the user's habits and preferences and controls the playing object more accurately; the user no longer needs to adjust playback manually, which simplifies operation and further improves the user experience.
Drawings
In order to more clearly illustrate the embodiments of the present application or the technical solutions in the prior art, the drawings needed in the description of the embodiments or the prior art are briefly introduced below. It is apparent that the drawings described below show only embodiments of the present application, and that those skilled in the art can derive other drawings from them without creative effort.
Fig. 1 is a schematic diagram of a video playing control method in the prior art;
FIG. 2 is a diagram illustrating another video playback control method in the prior art;
fig. 3 is a schematic view of a scenario provided in an embodiment of the present application;
fig. 4 is a schematic view of another scenario provided in the embodiment of the present application;
FIG. 5 is a system architecture diagram according to an embodiment of the present application;
fig. 6 is a schematic flowchart of a multimedia object playing method according to an embodiment of the present application;
fig. 7 is a schematic display diagram of a video playing client according to an embodiment of the present application;
fig. 8 is a schematic display diagram of a player setting interface with an added automatic play control function switch according to an embodiment of the present application;
fig. 9 is a schematic architecture diagram of a video semantic understanding model provided in an embodiment of the present application;
FIG. 10 is a schematic view of a video tube region provided by an embodiment of the present application;
fig. 11 is a schematic structural diagram of a multimedia object playing apparatus according to an embodiment of the present application;
fig. 12 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application clearer, the technical solutions in the embodiments of the present application are described below clearly and completely with reference to the accompanying drawings. Evidently, the described embodiments are only a part of the embodiments of the present application, not all of them; all other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present application. In the present application, the embodiments and the features of the embodiments may be combined with each other arbitrarily provided there is no conflict. Also, although a logical order is shown in the flow diagrams, in some cases the steps shown or described may be performed in a different order.
For the convenience of understanding the technical solutions provided by the embodiments of the present application, some key terms used in the embodiments of the present application are explained first:
multimedia objects: the multimedia object is used for bearing multimedia content, the multimedia object can be played through a corresponding object player, and then the multimedia content borne by the multimedia object is displayed to a user, for example, the multimedia object can be a video, and the corresponding object player is a video player; or, the multimedia object may also be an audio, and the corresponding object player is an audio player; alternatively, the multimedia object may also be text or an image, etc.
The object player: the object player can be an independent object playing client, and also can be a playing component which is embedded in a webpage and used for playing the object.
In addition, the term "and/or" herein merely describes an association between related objects and indicates that three relationships are possible; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" herein generally indicates an "or" relationship between the objects before and after it, unless otherwise specified.
Currently, taking a video as an example of a multimedia object: when a user watches a video, it may include parts the user wants to watch in detail as well as parts the user wants to skim past by fast-forwarding. There are two main approaches to fast-forwarding video:
First, the playing progress can be adjusted by manually dragging the progress bar, thereby changing which video content is played. As shown in fig. 1, the playing progress is originally at position A; by dragging the progress bar, the user moves the progress to position B and thereby changes the content being played. However, the user usually does not know the subsequent video content and therefore does not know where the progress bar should go; it generally takes many attempts before the progress bar lands on a satisfactory position. This is not only cumbersome but also wastes the user's viewing time, and the user experience is poor.
Second, the playing speed can be adjusted through the speed-multiplier function provided by the video player. As shown in fig. 2, the player offers several speed options, such as "1.0X", "1.25X", "1.5X" and "2.0X", where "2.0X" means playback at twice the video's original speed; the user selects among these options to adjust the playing speed. However, once a speed is selected, the video keeps playing at that speed until the user changes it again. For the parts of the video the user is interested in, a high speed easily causes the user to miss fine details, so the user has to switch back to normal speed manually. This is cumbersome, and a user occupied with other matters may not have a free moment to adjust the speed, which is inconvenient; the user experience is therefore poor.
Both of the above approaches — manually adjusting the progress and manually adjusting the playing speed — require the user to operate the video by hand, so both suffer from cumbersome operation. Solving this problem therefore requires automatic control of video playing. In addition, every user's preferences differ, so different users usually control video playing differently; when controlling the video automatically, the user's preferences and the video's content can be combined so that the playing control better matches the user's preferences and the user experience improves. The same considerations apply to the playing control of other multimedia objects.
In view of the above considerations, an embodiment of the present application provides a multimedia object playing method. When the object player receives an operation in which the user indicates that a certain playing object should be played, it obtains the playing resource of that playing object together with playing control information generated from the user's characteristic information and the playing object's characteristic information, so that while playing the object the player can automatically control its playing according to the playing control information. Because the playing control information reflects each individual user's characteristics and the playing object's characteristics, it better matches the user's habits and preferences, controls the playing object more accurately, and spares the user manual adjustment, simplifying operation and further improving the user experience.
In addition, considering that the user may still want to play objects in the existing manner, the embodiment of the present application further provides a switch between the existing playing manner and the playing manner of this embodiment: only when the user enables the automatic playing control function in the object player is playing controlled in the manner of this embodiment; otherwise the object is played in the existing manner.
After introducing the design concept of the embodiments of the present application, some application scenarios to which the technical solution can be applied are briefly described below. It should be noted that the scenarios described below serve only to illustrate the embodiments of the present application and are not limiting; in a specific implementation, the technical solution provided by the embodiments can be applied flexibly according to actual needs.
Please refer to fig. 3, which is a schematic view of a scenario applicable to the embodiments of the present application. The scenario may include a background server 101 and a plurality of terminals 102, namely terminals 102-1 to 102-M shown in fig. 3, where M is a positive integer whose value is not limited by the embodiments of the present application.
The terminal 102 may specifically be a mobile phone, a Personal Computer (PC), a tablet computer, or the like, and the terminal 102 may include one or more processors 1021, a memory 1022, an I/O interface 1023 interacting with the backend server 101, a display panel 1024, and the like. An object player program may be installed in the terminal 102. The object player can be in the form of an independent client, and can also be in the form of a playing component nested in a webpage.
The memory 1022 of the terminal 102 may store program instructions of method steps required to be executed by the terminal in the embodiment of the present application, and when the program instructions are executed by the processor 1021, the program instructions can be used to implement the method steps required to be executed by the terminal in the embodiment of the present application, and display a corresponding display page on the display panel 1024.
The backend server 101 may include one or more processors 1011, memory 1012, and I/O interface 1013 to interact with the terminal, etc. The server 101 may further include a database 1014, and the database 1014 may be configured to store user characteristic information of each user, object characteristic information of each playback object, and the like. The background server 101 may be a background server corresponding to the object player, or a background server corresponding to a website including the object player. The memory 1012 of the server 101 may store program instructions of method steps required to be executed by the backend server 101 provided by the embodiment of the present application, and when the program instructions are executed by the processor 1011, the program instructions can be used to implement the method steps required to be executed by the backend server 101 provided by the embodiment of the present application.
In actual use, after the user selects a playing object to be played on the terminal 102 — through the object player or through a website containing the object player — the object player responds and sends a playing request to the background server 101, and the background server 101 returns the playing resource and the playing control information of the playing object to the terminal 102 according to the playing request, so that the object player plays the playing object based on both. The background server 101 may also build a user profile for each user to obtain that user's characteristic information, perform object understanding on each playing object to obtain that object's characteristic information, and store both in the database. When a user subsequently requests a certain playing object, the server looks up the user characteristic information and the object characteristic information by the user identifier and the object identifier respectively, generates the playing control information from them, and feeds it back to the terminal 102, so that the object player controls the playing of the playing object based on the playing control information.
Background server 101 and terminal 102 may be communicatively coupled via one or more networks 103. The network 103 may be a wired network or a WIreless network, for example, the WIreless network may be a mobile cellular network, or may be a WIreless-Fidelity (WIFI) network, or may also be other possible networks, which is not limited in this embodiment of the present application.
Fig. 4 is a schematic diagram of another scenario to which the embodiment of the present invention is applicable, where the scenario may include a computer device 201.
Specifically, the computer device 201 has sufficient computing power both to profile the user to obtain user characteristic information and to analyze the playing object to obtain object characteristic information. Thus, when the computer device 201 receives a request in which the user indicates that a certain playing object should be played, it can generate playing control information based on the characteristic information of the user and of the playing object, and play the object based on that information. In other words, the multimedia object playing method provided by the embodiments of the present application may be executed by the computer device 201.
In particular, the computer device 201 may include one or more processors 2011, a memory 2012, a display panel 2013, a database 2014, and the like. The computer device 201 may further include input components such as touch keys and physical keys, through which the user instructs the object player to play the indicated playing object. The memory 2012 may store program instructions of the multimedia object playing method provided in the embodiments of the present application; when executed by the processor 2011, these instructions implement the steps of the method, so that after an operation indicating that a certain playing object should be played is received, the playing resource and the playing control information of the object are obtained and the object is played based on the playing control information, achieving automatic control of the playing object.
Of course, the method provided in the embodiment of the present application is not limited to be used in the application scenarios shown in fig. 3 and fig. 4, and may also be used in other possible application scenarios, and the embodiment of the present application is not limited. The functions that can be implemented by each device in the application scenarios shown in fig. 3 and fig. 4 will be described together in the following method embodiments, and will not be described in detail herein.
Fig. 5 is a schematic diagram of a system architecture to which the embodiments of the present application are applicable. The system architecture may include an object player 501, an access layer 502, a logic layer 503, and a data layer 504. When the architecture is applied to the scenario shown in fig. 3, the object player 501 may be deployed on the terminal 102 while the access layer 502, the logic layer 503 and the data layer 504 are deployed on the background server 101; when it is applied to the scenario shown in fig. 4, the object player 501, the access layer 502, the logic layer 503 and the data layer 504 may all be deployed in the computer device 201.
The object player 501 is configured to provide a video playing function for a user, acquire an operation input by the user, and respond to the operation of the user.
The access layer 502 may be configured to exchange data with the object player, and may include a data reporting interface 5021 and an object playing interface 5022. The data reporting interface 5021 receives data reporting requests initiated by the object player 501, such as user login requests and follow requests, checks the format of each request, and forwards the request to the logic layer 503 for processing once the check passes. Through a user's data requests, information related to the user — such as viewing history, search records and follow information — can be obtained. The object playing interface 5022 receives a user's object playing request and returns the information obtained from the logic layer 503, such as the playing resource information and the playing control information, to the object player 501 to realize the playing of the playing object.
The logic layer 503 may include a data reporting service module 5031 and an object playing service module 5032. The data reporting service module 5031 obtains data reporting requests from the data reporting interface 5021, performs the data reporting logic — such as storage format conversion, data completion and data compression — and forwards the processed data to the data layer 504 for storage. The object playing service module 5032 obtains object playing requests from the object playing interface 5022, queries the corresponding user characteristic information, object characteristic information and object playing address according to the user identifier and the object identifier, generates the corresponding playing control information for the user by combining the user characteristic information and the object characteristic information, and returns the playing control information and the playing address to the object player 501 through the object playing interface 5022.
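The object playing service's lookup-and-generate flow can be sketched as follows (the storage layout, key names and `derive_control` hook are hypothetical illustrations of the flow, not the patent's implementation):

```python
def object_play_service(user_id, object_id, db, derive_control):
    """Query the user characteristic information, the object characteristic
    information and the playing address by their identifiers, derive the
    playing control information, and return it together with the address
    for the object player."""
    user_features = db["user_features"][user_id]
    object_features = db["object_features"][object_id]
    address = db["play_addresses"][object_id]
    control = derive_control(user_features, object_features)
    return {"play_address": address, "control": control}
```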
The data layer 504 may include a user data storage service module 5041, a user profile service module 5042, a user characteristic information storage service module 5043, a playing resource storage service module 5044, an object analysis service module 5045, and an object characteristic information storage service module 5046. The user data storage service module 5041 stores user data, including basic information such as the user's gender, age and geographic location, and operation information such as the user's playing history, search records, follow list, and fast-forward and fast-backward operations. The user profile service module 5042 generates the user's characteristic information from the stored basic information and operation information; this information can represent the user's habits and preferences — for example, that the user likes star B and dislikes gory images — and is stored in the user characteristic information storage service module 5043. The playing resource storage service module 5044 stores the original playing object media, for example a storage medium for videos such as a Content Delivery Network (CDN). The object analysis service module 5045 performs object understanding and analysis on each playing object stored by the playing resource storage service module 5044 to obtain each playing object's characteristic information, and stores it in the object characteristic information storage service module 5046; besides the object characteristic information, this module may also store media-asset information related to the playing object, such as its title, category, actors, highlights and stills.
Of course, the method provided in the embodiment of the present application is not limited to be used in the system architecture shown in fig. 5, and may also be used in other possible system architectures, and the embodiment of the present application is not limited thereto.
Referring to fig. 6, a flowchart of a multimedia object playing method provided in an embodiment of the present application is schematically shown, where the method may be executed cooperatively by the background server and the terminal in fig. 3, or may be executed by the computer device in fig. 4. The following describes the flow of the method by taking the application scenario shown in fig. 3 in combination with the system architecture shown in fig. 5 as an example. For the application scenario shown in fig. 4, the method flow is similar, so reference may be made to the following description, and details are not repeated.
Step 601: the object player acquires the operation information.
In the embodiment of the application, a user can select a playing object to be played in a playing object overview page. The terminal can then acquire the playing operation of the user, where the playing operation is used for indicating that the selected playing object is to be played in the object player, and send operation information corresponding to the playing operation to the object player, so that the object player acquires the operation information of the user and can determine the playing object selected by the user based on the playing operation.
Exemplarily, taking an object player as a video playing client as an example, relevant information of a plurality of videos may be displayed on a display page of the video playing client. As shown in fig. 7, a cover and an introduction of each video may be displayed, and the user may operate the corresponding position to select the video to be played. When the terminal is a touch terminal, as shown in fig. 7, the user may select the video to be played by clicking the corresponding position of the video. The terminal may then collect the touch operation of the user and send operation information corresponding to the touch operation to the video player, so that the video player obtains the operation information of the user and determines the playing object selected by the user.
When the object player is a video playing component in a video webpage, after a user operates a playing object, the terminal can also acquire the operation of the user and send the operation information to the video playing component through the browser, so that the video playing component acquires the operation information corresponding to the operation of the user and further determines the playing object selected by the user.
Step 602: the object player sends an object playing request to the background server, and an object playing interface of the background server receives the object playing request.
In the embodiment of the application, after the object player acquires the operation information of the user, a corresponding object playing request can be generated based on the operation information, and the object playing request can carry the user identifier of the user and the object identifier of the playing object. The user identifier may be login account information of the login user in the object player, such as a user account, a user nickname, or an identity (ID) assigned to the account. The object identifier may be an ID of the playing object, where one ID uniquely identifies one playing object.
In practical application, a user may expect to play the playing object in the existing manner under certain circumstances, for example, to directly play the original media of the playing object without automatic playing control. To meet such requirements, a switch for the automatic playing control function may further be provided in the object player: only when the automatic playing control function is turned on is automatic playing control performed on the playing object; otherwise, the playing object is played in the existing manner. Fig. 8 shows a display interface of a player setting page that provides the automatic playing control function switch, where the user may select the "on" or "not on" option; when "on" is selected, the automatic playing control function of the object player is turned on, and when "not on" is selected, the function is turned off.
When the object playing request is sent to the background server, state information of the object player may be carried, where the state information may include the state of the automatic playing control function switch, i.e., "on" or "not on"; of course, other state information may also be carried, for example, whether the bullet screen is turned on. In practical application, the state information may also be sent through a request message different from the object playing request instead of being carried in the object playing request.
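As an illustration only, the request described above can be sketched as a small payload; the field names and values below are assumptions for the sketch, not part of the embodiment:

```python
def build_object_playing_request(user_id, object_id, auto_play_control_on, danmaku_on=False):
    """Assemble an object playing request (hypothetical field names).

    Carries the login user's identifier, the selected playing object's
    identifier, and optional player state such as the automatic playing
    control switch ("on" / "not on") and the bullet-screen switch.
    """
    return {
        "user_id": user_id,
        "object_id": object_id,
        "state": {
            "auto_play_control": "on" if auto_play_control_on else "not on",
            "danmaku": "on" if danmaku_on else "off",
        },
    }

request = build_object_playing_request("user_123", "video_456", True)
```

As the sketch suggests, the state information could equally be dropped from this payload and sent in a separate request message.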
In the embodiment of the present application, an object playing request or other request messages sent by an object player are all received by an interface of an access layer of a background server, for example, the object playing request may be received by an object playing interface.
Step 603: the object playing interface forwards the object playing request to the object playing service module, and the object playing service module receives the object playing request.
In the embodiment of the application, after the object playing interface receives the object playing request, the validity of the request can be checked, so that invalid requests are prevented from occupying the computing resources of the background server, and the performance of the background server is improved. After the verification is passed, the object playing interface forwards the object playing request to the object playing service module.
Step 604: the object playing service module sends an object information acquisition request to the object information storage service module, and the object information storage service module receives the object information acquisition request.
In this embodiment of the present application, after receiving the object playing request, the object playing service module may analyze the object playing request, acquire an object identifier carried in the object playing request, further generate an object information acquisition request based on the object identifier, and send the object information acquisition request to the object information storage service module, where the request is used to request to acquire media asset information and object feature information corresponding to the playing object, and certainly, other possible object related information may be included besides the information, which is not limited in this embodiment of the present application.
In the embodiment of the present application, if the state information of the object player, acquired by analyzing the object playing request, indicates that the automatic playing control function is not turned on, the object playing service module may request only the media asset information corresponding to the playing object in the object information acquisition request sent to the object information storage service module. After the media asset information corresponding to the playing object is acquired, feedback may be directly performed to the object player based on the media asset information, without performing the subsequent steps of generating the playing control information.
Step 605: the object information storage service module returns an object information response message to the object playing service module, and the object playing service module receives the object information response message.
In the embodiment of the application, the object information storage service module stores related information of the playing object, such as the media asset information and object feature information corresponding to the playing object. When the object information storage service module receives the object information acquisition request, it can search according to the object identifier to find the object information requested by the object information acquisition request, carry the object information in an object information response message, and return it to the object playing service module, where the object information may include, for example, the media asset information and object feature information corresponding to the playing object.
Specifically, the object feature information stored by the object information storage service module may be acquired by requesting the object understanding service module in real time. That is, after the object information storage service module acquires the object identifier, it sends an object information acquisition request to the object understanding service module; the object understanding service module acquires the corresponding playing object from the playing resource storage service module based on the object identifier, understands and analyzes the playing object, generates the object feature information, and sends the object feature information to the object information storage service module.
In order to improve the real-time performance of playing the playing object, the object feature information may be pre-stored in the object information storage service module. For example, after a new playing object is added to the playing resource storage service module, its object feature information is generated by the object understanding service module and stored in the object information storage service module, so that when the object feature information of the playing object needs to be used subsequently, only a search by the object identifier is needed, which further improves the response speed of the background server.
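The pre-storing strategy described above can be sketched as a simple cache in front of an on-demand analysis call; the class and function names are illustrative assumptions, and the analysis function merely stands in for the object understanding service module:

```python
class ObjectFeatureStore:
    """Illustrative sketch: pre-stored object feature information keyed by
    object identifier, falling back to real-time analysis on a cache miss."""

    def __init__(self, analyze_fn):
        self._cache = {}
        self._analyze = analyze_fn  # stand-in for the object understanding service

    def put(self, object_id, features):
        # Called when a new playing object is added to the resource store,
        # so features are ready before any playing request arrives.
        self._cache[object_id] = features

    def get(self, object_id):
        # Hit the pre-stored features first; analyze in real time on a miss.
        if object_id not in self._cache:
            self._cache[object_id] = self._analyze(object_id)
        return self._cache[object_id]
```

The same pattern applies to the user characteristic information storage service described later, with the user portrait service as the fallback.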
Specifically, it is relatively easy for the object understanding service module to understand a text or audio playing object; for example, for text, only simple semantic understanding needs to be performed to obtain the object feature information of the playing object. When the playing object is a video, semantic understanding can be performed on the video through a video semantic understanding model adopting a video semantic understanding algorithm, so that the semantic description content of the video, namely the video feature information of the video, can be obtained. The following describes the process of acquiring the video feature information of a video by the object understanding service module, taking a video semantic understanding model as an example.
Fig. 9 is a schematic diagram of the architecture of the video semantic understanding model. The video semantic understanding model builds a hierarchical recurrent neural network to model the spatio-temporal characteristics of a video, builds a candidate object tubular region generation network to quickly generate possible object tubular regions, extracts the environmental features of bounding box sequences by introducing an LSTM-based visual perception model to form an integral description of pipeline features, and then realizes the detection of general objects in the video by classifying the pipelines.
Specifically, the video semantic understanding model may include a feature extraction module, an object detection module, a video perception module, and a video content understanding description module.
When video semantic analysis needs to be performed on a video, the original long video sequence of the video can be divided into a series of video segments of short duration, and the series of short video segments are then input into the feature extraction module. The feature extraction module can be a hierarchical recurrent neural network based on a Long Short-Term Memory (LSTM) network: the feature vector of each video segment is extracted through the first-layer LSTM network, and the feature vectors of the video segments are then input into the next-layer LSTM network according to the temporal order of the segments in the original video sequence, thereby generating the feature vector of the long video sequence.
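The hierarchical structure can be illustrated without a real network: a long frame sequence is split into short clips, a first-level function summarizes each clip, and a second-level function aggregates the clip features in temporal order. The mean-pooling functions below are simplified stand-ins for the two learned LSTM layers, purely for illustration:

```python
def split_into_clips(frames, clip_len):
    """Divide a long frame sequence into short clips of clip_len frames."""
    return [frames[i:i + clip_len] for i in range(0, len(frames), clip_len)]

def clip_feature(clip):
    """Stand-in for the first-layer LSTM: one feature value per short clip."""
    return sum(clip) / len(clip)

def sequence_feature(frames, clip_len=4):
    """Stand-in for the second-layer LSTM: aggregate clip features in
    temporal order into a feature for the whole long video sequence."""
    clip_feats = [clip_feature(c) for c in split_into_clips(frames, clip_len)]
    return sum(clip_feats) / len(clip_feats)
```

In the actual model, each frame would be a feature vector and both levels would be recurrent layers with learned weights; only the split-then-aggregate hierarchy is shown here.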
The object detection module is used for constructing a video tubular region of at least one candidate object from the video according to the feature vector of the long video sequence extracted by the feature extraction module, and classifying the corresponding candidate object based on the environmental features of the bounding box of each video tubular region to obtain the classification label of each candidate object. A video tubular region is a tubular region formed by connecting the boundary contours of the same object across multiple frames of images. For example, taking a video clip of a person running, the boundary contour of the person can be identified in each still image, and by connecting the boundary contours of the person in time order, a tubular region is generated. As shown in fig. 10, which is a schematic view of a video tubular region, for a video segment the contour of object A can be detected in each video frame, and the contours are connected in time order, thereby generating the tubular region shown in fig. 10.
The object detection module can comprise a visual perception model based on an LSTM network. The environmental features of the bounding box of a video tubular region can be extracted through the visual perception model, and classification prediction of the tubular region is then achieved through the LSTM network; that is, the corresponding candidate object is classified based on the environmental features of the bounding box of the video tubular region, and the classification label of the candidate object is output.
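The construction of a tubular region by connecting per-frame boxes can be sketched with a greedy intersection-over-union (IoU) link; the threshold and the greedy strategy are illustrative assumptions, not the patent's actual generation network:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, x2 - x1) * max(0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter) if inter else 0.0

def link_tube(frame_boxes, threshold=0.3):
    """Greedily connect per-frame boxes of the same object in temporal
    order into a tubular region; stop when no box overlaps enough."""
    tube = [frame_boxes[0][0]]  # seed from the first frame's first box
    for boxes in frame_boxes[1:]:
        best = max(boxes, key=lambda b: iou(tube[-1], b))
        if iou(tube[-1], best) < threshold:
            break  # the object left the scene; the tube ends here
        tube.append(best)
    return tube
```

Each resulting tube is what the visual perception model would then classify as a whole.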
The video perception module can realize structural analysis of the video content through a video content sentence generation algorithm based on a deep learning model. Specifically, the video perception module may be a deep Recurrent Neural Network (RNN), and the module may learn an object perception model and corresponding description sentences through offline training, so as to automatically generate a description of the video content by perceiving the sequence features of the video tubular regions of the candidate objects in the video.
Finally, the video content understanding description module can realize the understanding of the video content by combining the classification labels of the candidate objects output by the object detection module, the sequence features of the candidate objects, and the offline-trained object perception model.
Step 606: the object playing service module sends a user characteristic information acquisition request to the user characteristic information storage service module, and the user characteristic information storage service module receives the user characteristic information acquisition request.
In the embodiment of the application, after receiving the object playing request, the object playing service module may further obtain the user identifier carried in the object playing request by analyzing the object playing request, and then generate a user characteristic information obtaining request based on the user identifier, and send the user characteristic information obtaining request to the user characteristic information storage service module, where the request is used to request to obtain the user characteristic information corresponding to the login user in the object player.
Step 607: the user characteristic information storage service module returns a user characteristic information response message to the object playing service module, and the object playing service module receives the user characteristic information response message.
In the embodiment of the application, the user characteristic information storage service module stores the user characteristic information of each user. When the user characteristic information storage service module receives a user characteristic information acquisition request, it can search according to the user identifier to find the user characteristic information requested by the request, carry the user characteristic information in a user characteristic information response message, and return it to the object playing service module.
Specifically, the user characteristic information stored by the user characteristic information storage service module may be obtained by requesting the user portrait service module in real time. That is, after the user characteristic information storage service module obtains the user identifier, it sends a user characteristic information acquisition request to the user portrait service module; the user portrait service module obtains the corresponding user data from the user data storage service module based on the user identifier, generates the user characteristic information based on the user data, and sends it to the user characteristic information storage service module.
Similarly, in order to improve the real-time performance of playing the playing object, the user characteristic information may be pre-stored in the user characteristic information storage service module; for example, the user portrait service module is periodically used to generate user characteristic information based on the latest user data and store it in the user characteristic information storage service module, so that when the user characteristic information needs to be used subsequently, only a search by the user identifier is needed, which further improves the response speed of the background server.
It should be noted that there is no strict execution order between steps 606 to 607 and steps 604 to 605. In specific execution, steps 606 to 607 and steps 604 to 605 may be performed simultaneously or sequentially; for example, steps 606 to 607 may be performed first, or steps 604 to 605 may be performed first.
In the embodiment of the application, the user characteristic information can be understood as labels attached to a user. There may be multiple types of labels, and different types of labels can be obtained by different methods according to their characteristics. Specifically, the user characteristic information may include one or a combination of at least two of the following:
(1) The value of the first type of user characteristic information is extracted from the registration information of the login user. When a user registers, the user can fill in some related information, such as age, gender, and profession. The first type of user characteristic information is generally basic information of the user and can be obtained by cleaning the registration information of the user; that is, the first type of user characteristic information is user information whose values can be obtained directly, and the value of each characteristic parameter can be taken directly from the registration information of the login user.
(2) The value of the second type of user characteristic information is obtained by performing data analysis on the registration information and historical operation data of the login user through preset statistical rules. The second type of user characteristic information cannot be valued directly, but certain data analysis can be performed on the basis of the registration information and historical operation data of the login user to obtain the corresponding value. For example, users may be divided into age intervals according to a certain rule, or the preference characteristics of a user may be analyzed by combining the playing record information and attention information of the user. The preset statistical rules may be formed and adjusted through multiple iterations until stable statistical rules are obtained.
(3) The value of the third type of user characteristic information is obtained by classifying the registration information and historical operation data of the login user through a pre-trained classification model. For user characteristic information that is not well analyzed through preset statistical rules, a classification model can be formed through a machine learning algorithm, and users are classified using their data, so as to obtain the user characteristic information.
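The three types of user characteristic information above can be sketched as three extraction functions; the field names, the age-bucket rule, and the stand-in classifier are all illustrative assumptions:

```python
def first_class_features(registration):
    """Type (1): values taken directly from registration information."""
    return {"gender": registration.get("gender"), "age": registration.get("age")}

def second_class_features(registration, history):
    """Type (2): values derived by preset statistical rules, e.g. an age
    bucket and the most frequently played category."""
    age = registration.get("age", 0)
    bucket = "child" if age < 12 else "teen" if age < 18 else "adult"
    categories = [item["category"] for item in history]
    favorite = max(set(categories), key=categories.count) if categories else None
    return {"age_bucket": bucket, "favorite_category": favorite}

def third_class_features(registration, history, classifier):
    """Type (3): values produced by a pre-trained classification model;
    `classifier` is merely a stand-in for that model."""
    return {"user_segment": classifier(registration, history)}
```

In practice, the age-bucket boundaries and the favorite-category rule would be tuned through the iterative adjustment mentioned above.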
Step 608: and the object playing service module generates playing control information according to the user characteristic information and the object characteristic information.
In the embodiment of the application, after the user characteristic information and the object characteristic information are acquired, the object playing service module can associate the user characteristic information and the object characteristic information to generate playing control information matched with the user characteristic information and the object characteristic information.
Specifically, the user characteristic information may include a plurality of user sub-features, for example, "user A likes star B", "user A is 9 years old", and "user A is female", and the object feature information may also include a plurality of object sub-features; for example, when the playing object is a video, the object sub-features may be "video C includes pictures of star B" and "video C includes a gory shot". The object playing service module may perform feature combination on the object sub-features included in the object feature information and the user sub-features included in the user characteristic information to obtain at least one combined sub-feature, and then obtain at least one piece of sub-control information based on the mapping relationship between combined sub-features and sub-control information; the at least one piece of sub-control information may constitute the playing control information. In other words, feature mapping rules are predefined: when the user characteristic information has a certain user sub-feature and the object feature information includes another corresponding object sub-feature, the corresponding sub-control information can be found according to the feature mapping rules.
Illustratively, if the user sub-feature "the user likes star B" and the object sub-feature "the video includes pictures of star B" constitute a combined sub-feature, that is, there is associated information "star B" between the user characteristic information and the object feature information, then according to the actor matching rule, the pictures of star B in the video are played normally without fast-forwarding. The remaining sub-control information can be obtained by analogy, so that all the sub-control information is obtained to constitute the playing control information.
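A minimal sketch of the feature combination and mapping lookup described above; the rule table and the string encoding of sub-features are illustrative assumptions, not a defined format of the embodiment:

```python
# Illustrative feature mapping rules: a combined sub-feature (a user
# sub-feature paired with an object sub-feature) maps to sub-control info.
RULES = {
    ("likes:star B", "contains:star B"): "play star B scenes normally",
    ("age:child", "contains:gory scene"): "skip gory scenes",
}

def generate_play_control(user_sub_features, object_sub_features):
    """Combine every user sub-feature with every object sub-feature and
    collect the sub-control information of each combination found in the
    mapping; together they constitute the playing control information."""
    control = []
    for u in user_sub_features:
        for o in object_sub_features:
            if (u, o) in RULES:
                control.append(RULES[(u, o)])
    return control
```

A real rule table would be far larger and could also key on thresholds (e.g. age ranges) rather than exact string matches.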
Step 609: the object playing service module sends the playing address information and the playing control information of the playing object to the object playing interface, and the object playing interface receives the playing address information and the playing control information of the playing object.
Step 610: the object playing interface sends the playing address information and the playing control information of the playing object to the object player, and the object player receives the playing address information and the playing control information of the playing object.
In the embodiment of the application, the playing address information of the playing object can be acquired based on the media asset information, and the object playing service module can then send the playing address information and the playing control information of the playing object to the object player through the object playing interface.
Step 611: the object player sends a playing resource downloading request to the playing resource storage service module, and the playing resource storage service module receives the playing resource downloading request.
In the embodiment of the present application, after the object player obtains the playing address information, a playing resource downloading request may be sent to the playing resource storage service module based on the playing address information, so as to request to download the playing resource.
Step 612: the playing resource storage service module transmits playing resources to the object player, and the object player receives the playing resources.
Step 613: and the object player plays the playing resources according to the playing control information.
In the embodiment of the application, after the object player receives the playing resources, the object player can play the received playing resources based on the control of the playing control information, so that the automatic playing control of the playing object is realized. For example, when the playing object is a video, the transmitted playing resource is a video medium, and the video player can play the received video medium according to the video playing control information, thereby implementing the playing of the video.
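On the player side, applying the playing control information can be sketched as a segment-by-segment decision loop; the tag-based segment model and the control-information format are illustrative assumptions for the sketch:

```python
def play_with_control(segments, control_info):
    """Illustrative playback loop: each segment carries a content tag, and
    the playing control information says which tags to skip or fast-forward.
    Returns the actions the player would take, in temporal order."""
    actions = []
    for seg in segments:
        if seg["tag"] in control_info.get("skip", []):
            actions.append(("skip", seg["id"]))
        elif seg["tag"] in control_info.get("fast_forward", []):
            actions.append(("fast_forward", seg["id"]))
        else:
            actions.append(("play", seg["id"]))
    return actions
```

For the 9-year-old viewer in the earlier example, control information of the form `{"skip": ["gory"]}` would cause gory segments to be skipped while everything else plays normally.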
Referring to fig. 11, based on the same inventive concept, an embodiment of the present invention further provides a multimedia object playing apparatus 110, including:
an acquisition unit 1101 configured to acquire, in response to an operation for instructing playing of the selected playing object in the object player, a playing resource and playing control information of the playing object selected by the operation; the playing control information is generated based on the user characteristic information of the login user in the object player and the object characteristic information of the playing object;
an executing unit 1102, configured to perform playing of the playing object in the object player according to the playing resource and the playing control information.
Optionally, the obtaining unit 1101 is specifically configured to:
sending an object playing request to a background server of the object player, wherein the object playing request carries the user identifier of the login user and the object identifier of the playing object;
receiving an object playing response message returned by the background server, wherein the object playing response message comprises playing address information of the playing object and the playing control information;
and acquiring the playing resources of the playing object based on the playing address information.
Optionally, the obtaining unit 1101 is specifically configured to:
acquiring object characteristic information of a playing object and acquiring user characteristic information of a login user;
performing characteristic combination on the object sub-characteristics included in the object characteristic information and the login user sub-characteristics included in the user characteristic information to obtain at least one combined sub-characteristic;
acquiring at least one piece of sub-control information based on the mapping relation between the combined sub-features and the sub-control information;
and generating the playing control information according to the at least one piece of sub-control information.
Optionally, when the playing object is a video, the obtaining unit 1101 is specifically configured to:
constructing video tubular regions of at least one candidate object from the video, and extracting the environmental characteristics of the bounding box of each video tubular region;
classifying the corresponding candidate objects according to the environmental characteristics of the bounding box of each video tubular area to obtain the classification labels of the candidate objects;
and generating object characteristic information of the video according to the classification label of each candidate object and the serialization characteristic of each video tubular area.
Optionally, the user characteristic information includes one or a combination of at least two of the following:
extracting the value of first-class user characteristic information from the registration information of a login user;
obtaining the value of second-class user characteristic information by performing data analysis on the registration information and the historical operation data of the login user through a preset statistical rule;
and classifying the registration information of the login user and historical operation data through a pre-trained classification model to obtain the value of the third-class user characteristic information.
Optionally, the obtaining unit 1101 is specifically configured to:
acquiring state information of an object player; the state information is used for indicating whether the object player starts an automatic playing control function or not;
and when the state information indicates that the object player starts the automatic playing control function, acquiring the playing resource and the playing control information of the playing object selected by the operation.
The apparatus may be configured to execute the method shown in the embodiment shown in fig. 6, and therefore, for functions and the like that can be realized by each functional module of the apparatus, reference may be made to the description of the embodiment shown in fig. 6, which is not repeated here.
Referring to fig. 12, based on the same technical concept, an embodiment of the present invention further provides a computer device 120, which may include a memory 1201 and a processor 1202.
The memory 1201 is used for storing computer programs executed by the processor 1202. The memory 1201 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function, and the like; the storage data area may store data created according to use of the computer device, and the like. The processor 1202 may be a Central Processing Unit (CPU), a digital processing unit, or the like. The embodiment of the present invention does not limit the specific connection medium between the memory 1201 and the processor 1202. In fig. 12, the memory 1201 and the processor 1202 are connected by a bus 1203, the bus 1203 is shown by a thick line in fig. 12, and the connection manner between other components is only schematically illustrated and is not limited. The bus 1203 may be divided into an address bus, a data bus, a control bus, etc. For ease of illustration, only one thick line is shown in FIG. 12, but this is not intended to represent only one bus or type of bus.
Memory 1201 may be a volatile memory, such as a random-access memory (RAM); the memory 1201 may also be a non-volatile memory, such as, but not limited to, a read-only memory (ROM), a flash memory, a hard disk drive (HDD), or a solid-state drive (SSD), or any other medium which can be used to carry or store desired program code in the form of instructions or data structures and which can be accessed by a computer. The memory 1201 may be a combination of the above memories.
The processor 1202 is configured to execute, when invoking the computer program stored in the memory 1201, the method performed by the apparatus in the embodiment shown in fig. 6.
In some possible embodiments, various aspects of the methods provided by the present invention may also be implemented in the form of a program product including program code. When the program product is run on a computer device, the program code causes the computer device to perform the steps of the methods according to the various exemplary embodiments of the present invention described above in this specification; for example, the computer device may perform the method performed by the apparatus in the embodiment shown in fig. 6.
The program product may employ any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
While preferred embodiments of the present invention have been described, additional variations and modifications to those embodiments may occur to those skilled in the art once they learn of the basic inventive concept. Therefore, the appended claims are intended to be interpreted as including the preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made to the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is intended to include them as well.

Claims (9)

1. A method for playing a multimedia object, the method comprising:
acquiring object feature information of a playing object, and acquiring user feature information of a login user in an object player, wherein the user feature information comprises different types of feature labels corresponding to the login user;
performing feature combination on object sub-features included in the object feature information and login-user sub-features included in the user feature information to obtain at least one combined sub-feature, wherein in each combined sub-feature there is an association relation between the object sub-feature and the login-user sub-feature;
acquiring at least one piece of sub-control information based on a mapping relation between combined sub-features and sub-control information;
generating playing control information according to the at least one piece of sub-control information;
in response to an operation for indicating that a selected playing object is to be played in the object player, acquiring a playing resource and the playing control information of the playing object selected by the operation; and
playing the playing object in the object player according to the playing resource and the playing control information.
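The pipeline recited in claim 1 (combine object and user sub-features, map each combination to sub-control information, merge into playing control information) can be sketched as below. All identifiers, the association list, and the mapping keys are illustrative assumptions for this sketch, not structures taken from the patent:

```python
def combine_sub_features(object_features, user_features, associations):
    """Pair each object sub-feature with the login-user sub-feature it is
    associated with, yielding combined sub-features."""
    combined = []
    for obj_key, user_key in associations:
        if obj_key in object_features and user_key in user_features:
            combined.append((obj_key, object_features[obj_key],
                             user_key, user_features[user_key]))
    return combined

def generate_playing_control_info(combined, mapping):
    """Look up a piece of sub-control information for each combined
    sub-feature and merge the pieces into one control structure."""
    control_info = {}
    for combo in combined:
        sub_control = mapping.get(combo)
        if sub_control:
            control_info.update(sub_control)
    return control_info

# Hypothetical data: a horror scene combined with a "minor" user label
# maps to sub-control information that mutes the audio and skips the segment.
object_features = {"scene_type": "horror"}
user_features = {"age_group": "minor"}
associations = [("scene_type", "age_group")]
mapping = {("scene_type", "horror", "age_group", "minor"):
           {"action": "skip", "mute": True}}
control = generate_playing_control_info(
    combine_sub_features(object_features, user_features, associations),
    mapping)
```

Here the playing control information is simply the union of the matched sub-control pieces; any conflict-resolution policy between pieces is left open, as the claim does not specify one.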
2. The method according to claim 1, wherein the acquiring of the playing resource and the playing control information of the playing object selected by the operation comprises:
sending an object playing request to a background server of the object player, wherein the object playing request carries a user identifier of the login user and an object identifier of the playing object;
receiving an object playing response message returned by the background server, wherein the object playing response message comprises playing address information of the playing object and the playing control information; and
acquiring the playing resource of the playing object based on the playing address information.
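A minimal sketch of the claim-2 request/response exchange follows; the message field names and the example address are assumptions made for illustration, not a wire format defined by the patent:

```python
def build_play_request(user_id, object_id):
    # The object playing request carries the login user's identifier
    # and the playing object's identifier.
    return {"type": "object_play_request",
            "user_id": user_id,
            "object_id": object_id}

def handle_play_response(response):
    # The response message carries the playing-address information
    # (used to fetch the playing resource) and the playing control
    # information generated on the server side.
    return response["play_address"], response["play_control_info"]

request = build_play_request("user-42", "video-7")
response = {"play_address": "https://cdn.example.invalid/video-7.m3u8",
            "play_control_info": {"skip_segments": [[30, 45]]}}
address, control = handle_play_response(response)
```

The client then fetches the playing resource from `address`; generating the control information server-side keeps the client free of the feature-combination logic.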
3. The method according to claim 1, wherein, when the playing object is a video, the acquiring of the object feature information of the playing object comprises:
constructing a video tubular region of at least one candidate object from the video, and extracting environmental features of the bounding box of each video tubular region;
classifying the corresponding candidate object according to the environmental features of the bounding box of each video tubular region to obtain a classification label of each candidate object; and
generating the object feature information of the video according to the classification label of each candidate object and the serialization feature of each video tubular region.
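A toy sketch of claim 3, assuming a "video tubular region" can be represented as a per-frame sequence of bounding boxes `(x, y, w, h)`; the feature choice and the threshold classifier are placeholders, not the patent's actual detection or classification model:

```python
def bounding_box_features(tube):
    """Toy 'environmental features' of a tube's bounding boxes:
    mean box area and mean vertical centre position."""
    areas = [w * h for (_, _, w, h) in tube]
    centres = [y + h / 2 for (_, y, _, h) in tube]
    return (sum(areas) / len(areas), sum(centres) / len(centres))

def classify(features, area_threshold=10000):
    """Placeholder classifier: large boxes are labelled as a
    foreground object, small ones as background."""
    mean_area, _ = features
    return "foreground_object" if mean_area >= area_threshold else "background_object"

# Two frames of one candidate object's tubular region.
tube = [(10, 20, 120, 100), (12, 22, 118, 102)]
label = classify(bounding_box_features(tube))
```

In the claim, this per-candidate label is then combined with the tube's serialization feature to form the video's object feature information.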
4. The method according to claim 1, wherein the user feature information comprises one of, or a combination of at least two of, the following:
a value of first-class user feature information extracted from registration information of the login user;
a value of second-class user feature information obtained by performing data analysis on the registration information and historical operation data of the login user according to a preset statistical rule; and
a value of third-class user feature information obtained by classifying the registration information and historical operation data of the login user through a pre-trained classification model.
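The three classes of user features in claim 4 can be sketched as follows; the field names, the statistical rule, and the stub model are invented for illustration only:

```python
def first_class_features(registration):
    # Class 1: read directly from the registration information.
    return {"age_group": registration.get("age_group")}

def second_class_features(registration, history):
    # Class 2: a preset statistical rule over registration data and
    # historical operation data (here: "heavy viewer" after 10 plays).
    return {"heavy_viewer": len(history) > 10}

def third_class_features(registration, history, model):
    # Class 3: a pre-trained classification model; the "model" here is
    # a stub callable standing in for a real classifier.
    return {"preference": model(registration, history)}

registration = {"age_group": "adult"}
history = ["v1", "v2", "v3"]
stub_model = lambda reg, hist: "drama"
profile = {**first_class_features(registration),
           **second_class_features(registration, history),
           **third_class_features(registration, history, stub_model)}
```

The claim allows any one of the classes alone or any combination of at least two, so the merged `profile` is just one possible composition.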
5. The method according to claim 1 or 2, wherein the acquiring of the playing resource and the playing control information of the playing object selected by the operation comprises:
acquiring state information of the object player, wherein the state information is used for indicating whether the object player has enabled an automatic playing control function; and
when the state information indicates that the object player has enabled the automatic playing control function, acquiring the playing resource and the playing control information of the playing object selected by the operation.
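Claim 5's gating step amounts to checking the player's state before fetching anything; the key name and the fetch callback below are assumptions for this sketch:

```python
def maybe_fetch_control(player_state, fetch_fn):
    # Only fetch the playing resource / control information when the
    # player's automatic playing control function is switched on.
    if player_state.get("auto_play_control_enabled"):
        return fetch_fn()
    return None

fetched = maybe_fetch_control({"auto_play_control_enabled": True},
                              lambda: {"play_control_info": {"mute": False}})
skipped = maybe_fetch_control({"auto_play_control_enabled": False},
                              lambda: {"play_control_info": {"mute": False}})
```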
6. A multimedia object playing apparatus, characterized in that the apparatus comprises:
an acquisition unit, configured to acquire object feature information of a playing object and acquire user feature information of a login user in an object player, wherein the user feature information comprises different types of feature labels corresponding to the login user; perform feature combination on object sub-features included in the object feature information and login-user sub-features included in the user feature information to obtain at least one combined sub-feature, wherein in each combined sub-feature there is an association relation between the object sub-feature and the login-user sub-feature; acquire at least one piece of sub-control information based on a mapping relation between combined sub-features and sub-control information; generate playing control information according to the at least one piece of sub-control information; and, in response to an operation for indicating that a selected playing object is to be played in the object player, acquire a playing resource and the playing control information of the playing object selected by the operation; and
an execution unit, configured to play the playing object in the object player according to the playing resource and the playing control information.
7. The apparatus according to claim 6, wherein the acquisition unit is specifically configured to:
send an object playing request to a background server of the object player, wherein the object playing request carries a user identifier of the login user and an object identifier of the playing object;
receive an object playing response message returned by the background server, wherein the object playing response message comprises playing address information of the playing object and the playing control information; and
acquire the playing resource of the playing object based on the playing address information.
8. A computer device comprising a memory, a processor and a computer program stored on the memory and executable on the processor, wherein the processor implements the method of any of claims 1 to 5 when executing the computer program.
9. A computer-readable storage medium having stored thereon processor-executable instructions for performing the method of any of claims 1-5.
CN201910452836.8A 2019-05-28 2019-05-28 Multimedia object playing method, device and equipment and computer storage medium Active CN110225398B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910452836.8A CN110225398B (en) 2019-05-28 2019-05-28 Multimedia object playing method, device and equipment and computer storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910452836.8A CN110225398B (en) 2019-05-28 2019-05-28 Multimedia object playing method, device and equipment and computer storage medium

Publications (2)

Publication Number Publication Date
CN110225398A CN110225398A (en) 2019-09-10
CN110225398B true CN110225398B (en) 2022-08-02

Family

ID=67818340

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910452836.8A Active CN110225398B (en) 2019-05-28 2019-05-28 Multimedia object playing method, device and equipment and computer storage medium

Country Status (1)

Country Link
CN (1) CN110225398B (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112770182B (en) * 2019-11-05 2022-07-29 腾讯科技(深圳)有限公司 Video playing control method, device, equipment and storage medium
CN110730387B (en) * 2019-11-13 2022-12-06 腾讯科技(深圳)有限公司 Video playing control method and device, storage medium and electronic device
CN111432245B (en) * 2020-03-27 2021-07-13 腾讯科技(深圳)有限公司 Multimedia information playing control method, device, equipment and storage medium
CN111654752B (en) * 2020-06-28 2024-03-26 腾讯科技(深圳)有限公司 Multimedia information playing method and device, electronic equipment and storage medium
CN113242468B (en) * 2021-05-11 2021-11-23 深圳市逸马科技有限公司 Big data cloud platform-based education data flow control method and system
CN115396728A (en) * 2022-08-18 2022-11-25 维沃移动通信有限公司 Method and device for determining video playing multiple speed, electronic equipment and medium

Citations (7)

Publication number Priority date Publication date Assignee Title
WO2010025675A1 (en) * 2008-09-03 2010-03-11 华为技术有限公司 Method for playing service contents, system and apparatus thereof
CN107801096A (en) * 2017-10-30 2018-03-13 广东欧珀移动通信有限公司 Control method, device, terminal device and the storage medium of video playback
CN107948732A (en) * 2017-12-04 2018-04-20 京东方科技集团股份有限公司 Playback method, video play device and the system of video
CN108197989A (en) * 2017-12-29 2018-06-22 深圳正品创想科技有限公司 A kind of video ads playing method and device
CN108377422A (en) * 2018-02-24 2018-08-07 腾讯科技(深圳)有限公司 A kind of control method for playing back of multimedia content, device and storage medium
CN108810625A (en) * 2018-06-07 2018-11-13 腾讯科技(深圳)有限公司 A kind of control method for playing back of multi-medium data, device and terminal
CN109587568A (en) * 2018-11-01 2019-04-05 北京奇艺世纪科技有限公司 Video broadcasting method, device, computer readable storage medium

Family Cites Families (8)

Publication number Priority date Publication date Assignee Title
US9807465B2 (en) * 2016-02-29 2017-10-31 Rovi Guides, Inc. Systems and methods for transmitting a portion of a media asset containing an object to a first user
CN106897742B (en) * 2017-02-21 2020-10-27 北京市商汤科技开发有限公司 Method and device for detecting object in video and electronic equipment
CN107454475A (en) * 2017-07-28 2017-12-08 珠海市魅族科技有限公司 Control method and device, computer installation and the readable storage medium storing program for executing of video playback
CN107995523B (en) * 2017-12-21 2019-09-03 Oppo广东移动通信有限公司 Video broadcasting method, device, terminal and storage medium
CN108259988B (en) * 2017-12-26 2021-05-18 努比亚技术有限公司 Video playing control method, terminal and computer readable storage medium
CN108184169B (en) * 2017-12-28 2020-10-09 Oppo广东移动通信有限公司 Video playing method and device, storage medium and electronic equipment
CN108174248B (en) * 2018-01-25 2020-01-03 腾讯科技(深圳)有限公司 Video playing method, video playing control device and storage medium
CN109688469B (en) * 2018-12-27 2021-05-28 北京爱奇艺科技有限公司 Advertisement display method and device


Non-Patent Citations (1)

Title
"Streaming media playing task scheduling and resource allocation" ("流媒体播放任务调度与资源配置"); Jiang Yunyun; China Doctoral Dissertations Full-text Database; 20171115; full text *

Also Published As

Publication number Publication date
CN110225398A (en) 2019-09-10

Similar Documents

Publication Publication Date Title
CN110225398B (en) Multimedia object playing method, device and equipment and computer storage medium
US10924800B2 (en) Computerized system and method for automatically detecting and rendering highlights from streaming videos
CN109086439B (en) Information recommendation method and device
CN111144937B (en) Advertisement material determining method, device, equipment and storage medium
CN111178970B (en) Advertisement putting method and device, electronic equipment and computer readable storage medium
US10740394B2 (en) Machine-in-the-loop, image-to-video computer vision bootstrapping
CN112930669B (en) Content recommendation method and device, mobile terminal and server
JP2018530847A (en) Video information processing for advertisement distribution
CN110741331A (en) System, method and apparatus for image response automated assistant
US11087182B1 (en) Image processing including streaming image output
CN110728370B (en) Training sample generation method and device, server and storage medium
RU2714594C1 (en) Method and system for determining parameter relevance for content items
CN112241327A (en) Shared information processing method and device, storage medium and electronic equipment
US11868422B2 (en) Dynamic link preview generation
CN116821475B (en) Video recommendation method and device based on client data and computer equipment
CN108470057B (en) Generating and pushing method, device, terminal, server and medium of integrated information
US20230377331A1 (en) Media annotation with product source linking
US20220164090A1 (en) Abstract generation method and apparatus
KR102492022B1 (en) Method, Apparatus and System of managing contents in Multi-channel Network
CN114298767A (en) Live broadcast platform information pushing method and device, equipment, medium and product thereof
CN113158094A (en) Information sharing method and device and electronic equipment
US20220083614A1 (en) Method for training a machine learning algorithm (mla) to generate a predicted collaborative embedding for a digital item
CN117651165B (en) Video recommendation method and device based on client data
US20240144677A1 (en) Mobile application camera activation and de-activation based on physical object location
KR20240002089A (en) Method, apparatus and system of providing contents service in multi-channel network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant