WO2022188618A1 - Resource preloading method, apparatus and device, and storage medium - Google Patents

Resource preloading method, apparatus and device, and storage medium

Info

Publication number
WO2022188618A1
Authority
WO
WIPO (PCT)
Prior art keywords
resource
playback
unplayed
current
user type
Prior art date
Application number
PCT/CN2022/077202
Other languages
French (fr)
Chinese (zh)
Inventor
杨典
严冰
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2022188618A1

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/231Content storage operation, e.g. caching movies for short term storage, replicating data over plural servers, prioritizing data for deletion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/433Content storage operation, e.g. storage operation in response to a pause request, caching operations

Definitions

  • the present disclosure relates to the technical field of streaming media processing, for example, to a resource preloading method, apparatus, device and storage medium.
  • the short video business has ushered in explosive growth.
  • in the feed stream video playback scenario, while the current video is playing, several videos to be watched next are loaded in advance.
  • when the user slides to the next video, its first frame can be shown quickly, which shortens the time to the first frame, reduces the freezing rate during playback, and can greatly improve the user's viewing experience.
  • the present disclosure provides a resource preloading method, apparatus, device, and storage medium, so as to provide users with a dynamic preloading scheme, save unnecessary traffic waste, and improve user experience.
  • an embodiment of the present disclosure provides a resource preloading method, including:
  • determining, based on a prediction model, the expected play amount of each unplayed resource in a current information stream; and, for each unplayed resource, preloading the unplayed resource based on the expected play amount.
  • an embodiment of the present disclosure further provides a resource preloading apparatus, including:
  • the expected playback volume determination module is set to determine the expected playback volume of each unplayed resource in the current information stream based on the prediction model;
  • the preloading module is configured to, for each unplayed resource, preload the unplayed resource based on the expected playing amount.
  • an embodiment of the present disclosure further provides a resource preloading device, including:
  • one or more processors;
  • a memory configured to store one or more programs;
  • when the one or more programs are executed by the one or more processors, the one or more processors implement the resource preloading method according to any one of the embodiments of the present disclosure.
  • an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the resource preloading method according to any one of the embodiments of the present disclosure.
  • FIG. 1 is an example diagram of an application scenario provided by an embodiment of the present disclosure
  • FIG. 2 is a flowchart of a resource preloading method provided by an embodiment of the present disclosure
  • FIG. 3 is a flowchart of a resource preloading method provided by an embodiment of the present disclosure
  • FIG. 4 is a structural diagram of a prediction model provided by an embodiment of the present disclosure.
  • FIG. 5 is a flowchart of a video preloading provided by an embodiment of the present disclosure.
  • FIG. 6 is a structural diagram of a resource preloading apparatus provided by an embodiment of the present disclosure.
  • FIG. 7 is a structural diagram of a resource preloading device provided by an embodiment of the present disclosure.
  • method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this regard.
  • the term “including” and variations thereof are open-ended inclusions, ie, "including but not limited to”.
  • the term “based on” is “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • the information stream in the embodiments of the present disclosure is a continuously updated information stream that can push information from Really Simple Syndication (RSS) sources to a user.
  • the “feed” in this embodiment of the present disclosure may be a content aggregator formed by combining multiple news sources actively subscribed by the user to help the user continuously obtain the latest feed content.
  • the feed is the interface used in RSS to receive that information source.
  • the feed stream has many presentation forms, including but not limited to a timeline-based form (timeline) and a smart-ranking-based form (rank); the timeline is the most typical feed stream presentation method and displays content to the user in the chronological order in which the feed stream content is updated.
  • rank calculates a weight for the feed stream content according to certain factors, so as to determine the order in which the feed stream content is displayed.
  • the client preloads all videos indiscriminately, that is, all videos are preloaded with the same amount of playback. But different users have different needs for preloading. Therefore, in the playback scenario of complex short video feed streams, indiscriminate video preloading can easily lead to wasted traffic and degraded user experience.
  • in the feed stream playback scenario, while the current resource is playing, several resources to be watched next are loaded in advance.
  • when the user slides to the next resource, its first frame can be shown quickly, which shortens the time to the first frame, reduces the freezing rate during playback, and can greatly improve the user's viewing experience.
  • the commonly used preloading method is: the client preloads all videos indiscriminately, that is, all videos are preloaded with the same amount of playback.
  • different users have different preloading requirements. For example, some users selectively skip part of a video; preloading too much of such a video wastes traffic cost and download time, and takes up preloading time for other videos. Sometimes the preload cache is not large enough, so the video tends to freeze when the user's network fluctuates; the user's viewing behaviour may also change with network conditions, recommendation quality, and the user's current state.
  • to solve the above problems, the embodiments of the present disclosure provide a resource preloading method, apparatus, device, and storage medium, which intelligently predict the playback amount of unplayed resources to dynamically provide users with a personalized preloading scheme, saving unnecessary traffic waste and improving user experience.
  • FIG. 1 is an example diagram of an application scenario provided by an embodiment of the present disclosure.
  • the client 101 is configured to execute the resource preloading method described in any embodiment of the present disclosure; the client 101 can send resource preloading information to the server 102 through the network, preload the resources provided by the server 102 in advance, and play the preloaded resources through an output device.
  • the above-mentioned output device may be an output device built into the client 101, such as a touch screen; it may also be an external output device connected to the client 101 through a communication line, such as a projector or a digital television (TV).
  • the client 101 is described by taking a computer device as an example, and the computer device may be a computer device including a processor, a memory, an input device, and an output device.
  • the resource preloading method provided by the embodiments of the present disclosure may also be applied to other smart devices having the same functions as computer devices.
  • the embodiments of the present disclosure describe, but not limit, the application scenarios and application devices of the above resource preloading.
  • the client in this embodiment of the present disclosure may include, but is not limited to, such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA), a tablet computer (Portable Android Device, PAD), Portable Multimedia Player (PMP), mobile terminals such as in-vehicle terminals (such as in-vehicle navigation terminals), etc., and stationary terminals such as digital TVs, desktop computers, and the like.
  • a typical scenario to which the embodiments of the present disclosure apply is a feed media stream, that is, while watching the current resource, the user can switch to the next resource by sliding down, or switch to the previous resource by sliding up.
  • another typical scenario to which this embodiment applies is a mobile phone window displaying multiple resources, where the user can browse different resources by scrolling up and down and click to view them.
  • FIG. 2 is a flowchart of a method for resource preloading provided by an embodiment of the present disclosure. This embodiment is applicable to the case of dynamically preloading video resources in a feed stream.
  • the method may be executed by a resource preloading device.
  • the apparatus can be implemented by means of software and/or hardware.
  • the resource preloading method is applied to the client.
  • a feed stream resource playback application may be installed in the client, and the feed stream resource playback application may be used to play the resource.
  • a resource preloading device may be added to the feed stream resource playback application, which is used to execute any of the resource preloading methods provided in the embodiments of the present disclosure.
  • the resource preloading method provided by this embodiment includes steps S11 and S12.
  • the information flow can be a list of resources that can be continuously slid down and loaded continuously, also known as feed flow, and each feed entry is an independent resource.
  • the resource may be audio, video, picture, text, actionable card, any combination of any of the above two or more, and the like.
  • the current feed stream refers to the feed stream that the server has delivered to the client and the client is playing.
  • the current feed stream mainly includes played resources, the current resource, and unplayed resources. Played resources are resources that the client has already played and the user has already watched; the current resource is the resource that the client is playing and the user is watching; unplayed resources are resources that the client has not yet played.
  • the expected play volume may be the predicted play volume of resources that the user may want to watch.
  • the expected play amount may be an expected number of bytes to be played, for example, 2 megabytes (MB).
  • the expected playback volume may also be the expected playback duration, for example, the expected playback volume is 10 seconds.
  • the expected playback volume may also be expressed in other metrics, which are not limited in this application.
  • the expected playback volume of each unplayed resource can be understood as each unplayed resource has its corresponding expected playback volume, and the expected playback volume corresponding to each unplayed resource may be the same or different. In this embodiment, no limitation is imposed.
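As an illustration only, a feed entry and its per-resource expected play amount might be represented as follows; the field names and structure are hypothetical and are not taken from the disclosure.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class FeedResource:
    """One entry in the current feed stream (hypothetical structure)."""
    resource_id: str
    total_bytes: int             # total size of the resource
    total_duration_s: float      # total playback duration in seconds
    played: bool = False         # True once the client has finished playing it
    # Expected play amount, expressed either in bytes or in seconds;
    # each unplayed resource carries its own value.
    expected_play_bytes: Optional[int] = None
    expected_play_duration_s: Optional[float] = None
```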
  • in one implementation, a prediction model is obtained through pre-training; features such as playback-related historical information, the user type, and the video type are input into the prediction model, and the prediction model outputs the expected playback amount of each unplayed resource in the current feed stream.
  • in the other implementation, a prediction model is obtained through pre-training; features such as playback-related historical information, the user type, and the video type are input into the prediction model, the prediction model outputs the expected playback ratio of each unplayed resource in the current feed stream, and the expected playback amount is determined based on the expected playback ratio. For each unplayed resource, the product of the expected playback ratio and the amount of the unplayed resource may be determined as the expected playback amount of that resource.
  • although the two prediction models differ in what they output, only the model parameters set during training differ; the output produced when a prediction model is used is consistent with the output it was trained to produce.
  • a method for training a prediction model includes the following step: training a neural network model using features such as the user's previous viewing behaviour and operations, the user type, and the video type, to obtain the prediction model.
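A minimal training sketch along these lines is shown below. The feature layout, label set, and network shape are illustrative assumptions, not the disclosed implementation; a small scikit-learn network simply stands in for "a neural network model".

```python
import numpy as np
from sklearn.neural_network import MLPClassifier

# Synthetic stand-in data: each row holds features built from the user's previous
# viewing behaviour/operations, the user type and the video type (layout assumed).
rng = np.random.default_rng(0)
X_train = rng.normal(size=(1000, 16))        # 1000 samples, 16 assumed features
y_train = rng.integers(0, 4, size=1000)      # labels for k = 4 assumed user types

# A small feed-forward network stands in for the prediction model described above.
model = MLPClassifier(hidden_layer_sizes=(64, 32), max_iter=300, random_state=0)
model.fit(X_train, y_train)

# The trained model can later output user-type probabilities for new feature vectors.
probabilities = model.predict_proba(X_train[:1])   # shape (1, 4)
```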
  • the preloading of the unplayed resources based on the expected playback volume includes: determining preloading configuration information based on the expected playback volume, and preloading the unplayed resources based on the preloading configuration information.
  • when it is predicted that the user's playback duration for subsequent resources will be very short, the size of the preloaded resources can be selectively reduced, for example to 300 kilobytes (KB), thereby saving traffic; when it is predicted that the user will play the resource for a long time, the size of the preloaded resource can be selectively increased, for example to 2 megabytes (MB), thereby reducing freezing during playback.
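A rough sketch of how a client might map the predicted playback amount to a preload size follows, reusing the 300 KB and 2 MB figures mentioned above; the duration thresholds and the fallback value are illustrative assumptions.

```python
def choose_preload_bytes(expected_play_duration_s: float) -> int:
    """Map the predicted playback duration of an unplayed resource to a preload size."""
    KB, MB = 1024, 1024 * 1024
    if expected_play_duration_s < 3:      # user expected to skip quickly (threshold assumed)
        return 300 * KB                   # small preload to save traffic
    if expected_play_duration_s > 30:     # user expected to watch for a long time (threshold assumed)
        return 2 * MB                     # larger preload to reduce freezing
    return 1 * MB                         # middle ground, comparable to an indiscriminate default
```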
  • the preloading order of each unplayed resource can be determined according to the order of unplayed resources in the feed stream.
  • the priority corresponding to each unplayed resource may also be calculated, and the preloading order of the unplayed resources may be determined based on the priority. This embodiment describes, but does not limit, the preloading order of the unplayed resources.
  • An embodiment of the present disclosure provides a resource preloading method, which includes: determining an estimated play amount of each unplayed resource in a current information stream based on a prediction model; and for each unplayed resource, preloading the unplayed resource based on the predicted play amount.
  • FIG. 3 is a flowchart of a resource preloading method provided by an embodiment of the present disclosure. As shown in FIG. 3, the resource preloading method mainly includes the following steps.
  • the currently playing resource may be understood as the resource currently being played on the current display screen of the client. Loading complete means that all the content of the currently playing resource has been cached to the client.
  • the detection of whether the loading of the current playback resource is completed may be to detect whether an identifier of the completion of the loading of the current playback resource is received, and the identifier may be generated by the client itself or sent by the server.
  • the detection of whether the loading of the current playback resource is completed may also be to detect whether the loaded byte length of the current playback resource is equal to the total byte length of the current playback resource.
  • the detection of whether the loading of the current playback resource is completed may also be to detect whether the loaded duration of the current playback resource is equal to the total duration of the current playback resource.
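The three detection options above could be sketched roughly as follows; the player object and its attribute names are hypothetical, used only to show the checks side by side.

```python
def is_current_resource_loaded(player) -> bool:
    """Return True when the currently playing resource is fully cached on the client."""
    # Option 1: a "loading finished" flag, generated locally or sent by the server.
    if getattr(player, "load_complete_flag", False):
        return True
    # Option 2: compare the loaded byte length with the total byte length of the resource.
    if player.loaded_bytes >= player.total_bytes:
        return True
    # Option 3: compare the loaded duration with the total duration of the resource.
    return player.loaded_duration_s >= player.total_duration_s
```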
  • if, before it is detected that the current resource has finished loading, it is detected that the user has switched to a new resource, the current playback resource is updated: the resource switched to is used as the current playback resource, and the operation of detecting whether the current playback resource has finished loading is performed again.
  • the resource preloading operation is performed only after the current playback resource has been downloaded. In this way, freezing of the current playback resource can be avoided and the user experience can be improved.
  • the switching instruction may be a click instruction input by the user on the client, or a sliding instruction such as an upward slide, a left slide, or a right slide, and the like.
  • the playback resource information includes one or more of the following: publisher information of the playback resource, name of the playback resource, Internet Protocol (IP) address of the playback resource, data packets of the playback resource, and the like.
  • whether the current user has switched to a new resource may be determined by detecting that the user has input a switching instruction, or by detecting that the playback resource information of the player has changed.
  • after the current user switches to a new resource, the new resource is used as the current playback resource, and the operation of detecting whether the current playback resource has finished loading is performed.
  • downloading the user's current playback resource has the highest priority; whenever the user switches to a new resource for playback, the preloading operation is performed only after the current playback resource has been downloaded. In this way, freezing of the current playback resource can be avoided and the user experience can be improved.
  • the embodiments of the present disclosure provide two methods for determining the expected playback amount of each unplayed resource in the current information stream.
  • in one method, determining the expected playback amount of each unplayed resource in the current information stream based on the prediction model includes: determining the current user type based on the prediction model; and determining the expected playback amount of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback amounts.
  • the user type may be understood as a type corresponding to a user who plays a specified type of resource for a preset duration.
  • the corresponding relationship between the user type and the playback volume is searched based on the current user type, and the playback volume corresponding to the current user type is determined as the expected playback volume.
  • in the other method, determining the expected playback amount of each unplayed resource in the current information stream based on the prediction model includes: determining the current user type based on the prediction model; determining the expected playback ratio of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback ratios; and, for each unplayed resource, determining the expected playback amount based on the expected playback ratio.
  • the corresponding relationship between the user type and the playback ratio is searched based on the current user type, and the playback ratio corresponding to the current user type is determined.
  • the playback ratio can be understood as the ratio between the played duration of a type of resource and the total duration of the resource.
  • after the playback ratio is obtained, the product of the total duration of an unplayed resource and the playback ratio is used as the expected playback duration of that unplayed resource; or, the product of the total number of bytes of the unplayed resource and the playback ratio is used as the expected number of playback bytes of that unplayed resource.
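Both alternatives reduce to a table lookup keyed by user type, optionally followed by a multiplication. A minimal sketch is shown below; the user-type names and table values are assumptions, not values from the disclosure.

```python
# Pre-stored correspondences (keys and values are illustrative assumptions).
PLAY_AMOUNT_BY_USER_TYPE = {"skimmer": 5.0, "average": 15.0, "binger": 60.0}   # seconds
PLAY_RATIO_BY_USER_TYPE = {"skimmer": 0.2, "average": 0.6, "binger": 0.95}

def expected_play_amount(user_type: str, total_duration_s: float, total_bytes: int):
    # Alternative 1: look up the expected playback amount directly.
    direct_amount_s = PLAY_AMOUNT_BY_USER_TYPE[user_type]
    # Alternative 2: look up the playback ratio, then multiply by the resource size.
    ratio = PLAY_RATIO_BY_USER_TYPE[user_type]
    expected_duration_s = total_duration_s * ratio    # expected playback duration
    expected_bytes = int(total_bytes * ratio)         # expected number of playback bytes
    return direct_amount_s, expected_duration_s, expected_bytes
```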
  • determining the current user type based on the prediction model includes: acquiring playback-related historical information and resource information, wherein the resource information includes the playback duration of each played resource, the type of each unplayed resource, and the duration of each unplayed resource; and inputting the playback-related historical information and resource information into the prediction model to obtain the current user type.
  • the resource information includes the playback duration of each played resource, the type of each unplayed resource, and the duration of each unplayed resource.
  • the playback-related historical information refers to information obtained when the client played historical resources, for example, the number of likes of historical resources, comments on historical resources, and so on.
  • the play-related historical information may also include user-provided preference information.
  • the current user type is obtained by inputting the playback-related historical information and resource information into the prediction model, which avoids redundant user type determination steps, can conveniently and quickly determine the current user type, and improves the running speed of the device.
  • the user type probability is the probability that the current user belongs to one type of user; the user type corresponding to the largest of the multiple user type probabilities output by the prediction model is determined as the current user type.
  • the prediction model outputs multiple user type probabilities; these probabilities are compared, or sorted, to obtain the maximum user type probability, and the user type corresponding to the maximum user type probability is determined as the current user type.
  • the current user type is determined according to multiple user type probabilities output by the prediction model, which can improve the accuracy of user type determination.
  • FIG. 4 is a schematic structural diagram of a prediction model provided by an embodiment of the present disclosure.
  • the prediction model mainly includes an input layer, n intermediate layers, and an output layer. The characteristic parameters received by the input layer mainly include the playback duration of each played resource, the type of each unplayed resource, the duration of each unplayed resource, and the like.
  • the output layer outputs the probability that the user belongs to the nth user type.
  • here n is any value among 1, 2, ..., k, where k is the total number of user types.
  • the probability values output by the prediction model are compared, and the user type corresponding to the maximum probability is determined as the current user type.
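A rough sketch of this structure (an input layer, intermediate layers, and an output layer producing one probability per user type, followed by taking the maximum) is given below; the layer shapes, ReLU activation, and softmax output are assumptions about the unspecified network details.

```python
import numpy as np

def predict_user_type(features: np.ndarray, weights: list, biases: list) -> int:
    """Forward pass through a small fully connected network; returns the index of the
    user type with the highest probability (layer shapes and activations are assumed)."""
    x = features
    for W, b in zip(weights[:-1], biases[:-1]):          # the intermediate layers
        x = np.maximum(0.0, x @ W + b)                   # ReLU activation (assumed)
    logits = x @ weights[-1] + biases[-1]                # output layer, one logit per user type
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()                                 # softmax -> k user-type probabilities
    return int(np.argmax(probs))                         # user type with the maximum probability
```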
  • the technical solution of inputting multi-dimensional features into a pre-trained neural network model to obtain the estimated playback duration of the unplayed resources in the current feed stream under the current user's behavior mode can improve the accuracy of determining the estimated playback duration.
  • in the following, description is given by taking an unplayed video resource as an example of the unplayed resource.
  • FIG. 5 is a flowchart of a video preloading provided by an embodiment of the present disclosure.
  • the video currently being played by the user is preferentially downloaded.
  • if the user switches to the next video, the next video is regarded as the currently playing video, and the user's currently playing video is again downloaded first.
  • the playback-related historical information and video information are fed into the trained prediction model to predict the expected playback volume.
  • Select the preloading configuration according to the prediction result of the prediction model and start the video preloading.
  • if the user switches to the next video during preloading, the next video is regarded as the currently playing video and the user's currently playing video is downloaded first; otherwise, preloading continues until the video preload ends.
  • downloading the user's currently playing video has the highest priority; whenever the user switches to a new video for playback, the preloading operation is performed only after the currently playing video has been downloaded.
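Putting the pieces together, the flow of FIG. 5 could be approximated as in the sketch below. The player, feed, and model objects and their methods are hypothetical; the helpers `is_current_resource_loaded` and `choose_preload_bytes` are the illustrative sketches given earlier, not the disclosed implementation.

```python
def preload_loop(player, feed, model):
    """Approximate the preloading flow of FIG. 5 (all objects and helpers are illustrative)."""
    while True:
        # The video the user is currently watching always has download priority.
        player.download(feed.current)
        if not is_current_resource_loaded(player):
            continue
        # Once the current video is fully loaded, predict and preload the unplayed ones.
        for resource in feed.unplayed():
            expected = model.predict_play_amount(resource, feed.history)
            player.preload(resource, choose_preload_bytes(expected))
            if player.user_switched():        # a switch interrupts preloading ...
                break
        if player.user_switched():
            feed.advance()                    # ... and the new video becomes the current one
        else:
            break                             # all preloads for this feed are finished
```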
  • the video preloading method provided by the embodiments of the present disclosure is more personalized.
  • the preloading in the related art loads the subsequent video with a size of 1MB indiscriminately.
  • with the video preloading method provided by the embodiments of the present disclosure, when it is predicted that the user will play the subsequent video for only a very short time, loading 300 KB of the subsequent video is sufficient, which saves traffic; when it is predicted that the user will play the subsequent video for a long time, the preload size can be selectively increased, for example to 2 MB, to reduce freezing during playback.
  • FIG. 6 is a structural diagram of a resource preloading apparatus provided by an embodiment of the present disclosure. This embodiment is applicable to the case of dynamically preloading video resources in a feed stream, and the resource preloading apparatus can be implemented by means of software and/or hardware.
  • the resource preloading method is applied to the client.
  • the resource preloading apparatus mainly includes an expected playback amount determination module 61 and a preloading module 62 .
  • the expected playback volume determination module 61 is configured to determine the expected playback volume of each unplayed resource in the current information stream based on the prediction model;
  • the preloading module 62 is configured to, for each unplayed resource, preload the unplayed resource based on the expected playing amount.
  • the resource preloading apparatus provided by this embodiment of the present disclosure is configured to perform the following operations: determining the expected playback amount of each unplayed resource in a current information stream based on a prediction model; and, for each unplayed resource, preloading the unplayed resource based on the expected playback amount.
  • a personalized preloading solution is dynamically provided to users, which saves unnecessary traffic waste and improves user experience.
  • the expected playback amount determination module 61 is configured to detect whether the current playback resource has finished loading and, after detecting that the current playback resource has finished loading, to perform the step of determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream.
  • the apparatus includes a switch detection module, configured to detect whether the current user switches to a new resource while preloading the unplayed resource based on the expected playback amount;
  • the expected playback amount determination module 61 is configured to use the new resource as the current playback resource after the current user switches to the new resource, and perform the step of detecting whether the current playback resource is loaded.
  • the expected playback amount determination module 61 includes:
  • a user type determination unit set to determine the current user type based on the prediction model
  • the expected playback amount determination unit is configured to determine the expected playback amount of each unplayed resource in the current information stream based on the current user type and the pre-stored correspondence between the user type and the playback amount.
  • the expected playback amount determination module 61 includes:
  • a user type determination unit set to determine the current user type based on the prediction model
  • an expected playback ratio unit, configured to determine the expected playback ratio of each unplayed resource in the current information stream based on the current user type and the pre-stored correspondence between user types and playback ratios;
  • the expected playback amount determination unit is configured to determine the expected playback amount based on the expected playback ratio for each unplayed resource.
  • the user type determination unit is configured to obtain play-related historical information and resource information; input the play-related historical information and resource information into the prediction model to obtain the current user type, wherein the The resource information includes the playing duration of each played resource, the type of each unplayed resource, and the duration of each unplayed resource.
  • inputting the playback-related historical information and resource information into the prediction model to obtain the current user type includes:
  • inputting the playback-related historical information and resource information into the prediction model to obtain multiple user type probabilities; and determining the user type corresponding to the largest of the multiple user type probabilities as the current user type.
  • the resource preloading apparatus provided in this embodiment can execute the resource preloading method provided by any embodiment of the present disclosure, and has functional modules and effects corresponding to executing the resource preloading method.
  • FIG. 7 shows a schematic structural diagram of a resource preloading device 700 (e.g., the terminal device or server in FIG. 7) suitable for implementing an embodiment of the present disclosure.
  • terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs, PADs, PMPs, and in-vehicle terminals (e.g., in-vehicle navigation terminals), as well as stationary terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 7 is only an example, and should not impose any limitation on the function and scope of use of the embodiments of the present disclosure.
  • the electronic device 700 may include a processing device (e.g., a central processing unit, a graphics processor, etc.) 701, which may perform various appropriate actions and processes based on a program stored in a read-only memory (ROM) 702 or a program loaded from a storage device 708 into a random access memory (RAM) 703.
  • in the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored.
  • the processing device 701, the ROM 702, and the RAM 703 are connected to each other through a bus 704.
  • An Input/Output (I/O) interface 705 is also connected to the bus 704 .
  • the following devices may be connected to the I/O interface 705: an input device 706 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output device 707 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage device 708 including, for example, a magnetic tape, a hard disk, and the like; and a communication device 709. The communication device 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data.
  • although FIG. 7 shows an electronic device 700 having various devices, it is not required to implement or have all of the illustrated devices; more or fewer devices may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product comprising a computer program carried on a non-transitory computer readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via the communication device 709, or from the storage device 708, or from the ROM 702.
  • the processing device 701 the above-mentioned functions defined in the methods of the embodiments of the present disclosure are executed.
  • the computer-readable medium described above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the two.
  • the computer-readable storage medium can be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus or device, or a combination of any of the above.
  • Computer readable storage media may include, but are not limited to: electrical connections with one or more wires, portable computer disks, hard disks, RAM, ROM, Erasable Programmable Read-Only Memory (EPROM), Flash memory, optical fiber, portable Compact Disc Read-Only Memory (CD-ROM), optical storage device, magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave with computer-readable program code embodied thereon. Such propagated data signals may take a variety of forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium can also be any computer-readable medium other than a computer-readable storage medium that can transmit, propagate, or transport the program for use by or in connection with the instruction execution system, apparatus, or device .
  • the program code embodied on the computer-readable medium can be transmitted by any suitable medium, including but not limited to: electric wire, optical fiber cable, radio frequency (RF), etc., or any suitable combination of the above.
  • the storage medium may be a non-transitory storage medium.
  • the client and the server can communicate using any currently known or future developed network protocol, such as HyperText Transfer Protocol (HTTP), and can be interconnected with digital data communication (e.g., a communication network) in any form or medium.
  • examples of communication networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future developed network.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; or may exist alone without being assembled into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the above-mentioned one or more programs are executed by the electronic device, the electronic device:
  • determines, based on a prediction model, the expected play amount of each unplayed resource in a current information stream; and, for each unplayed resource, preloads the unplayed resource based on the expected play amount.
  • computer program code for performing the operations of the present disclosure may be written in one or more programming languages or a combination thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
  • the remote computer may be connected to the user computer through any kind of network, including a LAN or WAN, or may be connected to an external computer (eg, using an Internet service provider to connect through the Internet).
  • each block in the flowchart or block diagrams may represent a module, a segment, or a portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the blocks may occur out of the order noted in the figures.
  • two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • the units involved in the embodiments of the present disclosure may be implemented in software or in hardware, and the name of a unit does not in some cases constitute a limitation of the unit itself.
  • exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and so on.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or devices, or any suitable combination of the foregoing.
  • Machine-readable storage media include one or more wire-based electrical connections, portable computer disks, hard disks, RAM, ROM, EPROM, flash memory, optical fibers, portable CD-ROMs, optical storage devices, magnetic storage devices, or the above any suitable combination of content.
  • a resource preloading method, apparatus, device, and medium are provided, including: determining, based on a prediction model, the expected play amount of each unplayed resource in a current information stream; and, for each unplayed resource, preloading the unplayed resource based on the expected play amount.
  • a resource preloading method, apparatus, device, and medium are provided, in which determining the expected playback amount of each unplayed resource in the current information stream based on the prediction model includes: detecting whether the current playback resource has finished loading; and, after detecting that the current playback resource has finished loading, performing the step of determining the expected playback amount of each unplayed resource in the current information stream based on the prediction model.
  • a resource preloading method, apparatus, device, and medium are provided, in which, while the unplayed resource is preloaded based on the expected playback amount, the method further includes: detecting whether the current user switches to a new resource; and, after the current user switches to a new resource, using the new resource as the current playback resource and performing the step of detecting whether the loading of the current playback resource is completed.
  • a resource preloading method, apparatus, device, and medium are provided, in which determining the expected playback amount of each unplayed resource in the current information stream based on the prediction model includes: determining the current user type based on the prediction model; and determining the expected playback amount of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback amounts.
  • a resource preloading method, apparatus, device, and medium are provided, in which determining the expected playback amount of each unplayed resource in the current information stream based on the prediction model includes: determining the current user type based on the prediction model; determining the expected playback ratio of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback ratios; and, for each unplayed resource, determining the expected playback amount based on the expected playback ratio.
  • a resource preloading method, apparatus, device, and medium are provided, in which determining the current user type based on the prediction model includes: acquiring playback-related historical information and resource information, wherein the resource information includes the playback duration of each played resource, the type of each unplayed resource, and the duration of each unplayed resource; and inputting the playback-related historical information and resource information into the prediction model to obtain the current user type.
  • a resource preloading method, apparatus, device, and medium are provided, in which inputting the playback-related historical information and resource information into the prediction model to obtain the current user type includes: inputting the playback-related historical information and resource information into the prediction model to obtain multiple user type probabilities; and determining the user type corresponding to the largest of the multiple user type probabilities as the current user type.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Information Transfer Between Computers (AREA)

Abstract

The present disclosure provides a resource preloading method, apparatus and device, and a storage medium. The resource preloading method comprises: determining, on the basis of a prediction model, a predicted playing amount of each unplayed resource in the current information stream; and for each unplayed resource, preloading the unplayed resource on the basis of the predicted playing amount.

Description

Resource preloading method, apparatus, device and storage medium
This application claims priority to the Chinese patent application with application number 202110269706.8, filed with the China Patent Office on March 12, 2021, the entire contents of which are incorporated herein by reference.
Technical Field
The present disclosure relates to the technical field of streaming media processing, for example, to a resource preloading method, apparatus, device and storage medium.
Background
With the development of the mobile Internet and the popularization of smart terminals, the short video business has seen explosive growth. In the feed stream video playback scenario, while the current video is playing, several videos to be watched next are loaded in advance; when the user slides to the next video, its first frame can be shown quickly, which shortens the time to the first frame, reduces the freezing rate during playback, and can greatly improve the user's viewing experience.
Summary
The present disclosure provides a resource preloading method, apparatus, device, and storage medium, so as to provide users with a dynamic preloading scheme, save unnecessary traffic waste, and improve user experience.
In a first aspect, an embodiment of the present disclosure provides a resource preloading method, including:
determining, based on a prediction model, an expected play amount of each unplayed resource in a current information stream; and
for each unplayed resource, preloading the unplayed resource based on the expected play amount.
In a second aspect, an embodiment of the present disclosure further provides a resource preloading apparatus, including:
an expected play amount determination module, configured to determine, based on a prediction model, the expected play amount of each unplayed resource in a current information stream; and
a preloading module, configured to, for each unplayed resource, preload the unplayed resource based on the expected play amount.
In a third aspect, an embodiment of the present disclosure further provides a resource preloading device, including:
one or more processors; and
a memory configured to store one or more programs,
wherein, when the one or more programs are executed by the one or more processors, the one or more processors implement the resource preloading method according to any one of the embodiments of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the resource preloading method according to any one of the embodiments of the present disclosure.
Brief Description of the Drawings
FIG. 1 is an example diagram of an application scenario provided by an embodiment of the present disclosure;
FIG. 2 is a flowchart of a resource preloading method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of a resource preloading method provided by an embodiment of the present disclosure;
FIG. 4 is a structural diagram of a prediction model provided by an embodiment of the present disclosure;
FIG. 5 is a flowchart of video preloading provided by an embodiment of the present disclosure;
FIG. 6 is a structural diagram of a resource preloading apparatus provided by an embodiment of the present disclosure;
FIG. 7 is a structural diagram of a resource preloading device provided by an embodiment of the present disclosure.
Detailed Description
Embodiments of the present disclosure will be described below with reference to the accompanying drawings. Although some embodiments of the present disclosure are shown in the drawings, the present disclosure may be embodied in various forms and should not be construed as limited to the embodiments set forth herein. The drawings and embodiments of the present disclosure are for exemplary purposes only and are not intended to limit the protection scope of the present disclosure.
The steps described in the method embodiments of the present disclosure may be performed in different orders and/or in parallel. Furthermore, the method embodiments may include additional steps and/or omit performing the illustrated steps. The scope of the present disclosure is not limited in this regard.
As used herein, the term "including" and variations thereof are open-ended, i.e., "including but not limited to". The term "based on" means "based at least in part on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one additional embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
Concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules or units, and are not used to limit the order of, or the interdependence between, the functions performed by these apparatuses, modules or units.
Modifications with "a", "an" and "a plurality of" mentioned in the present disclosure are illustrative rather than restrictive, and should be understood as "one or more" unless the context clearly indicates otherwise.
The names of messages or information exchanged between multiple apparatuses in the embodiments of the present disclosure are for illustrative purposes only, and are not intended to limit the scope of these messages or information.
Terms appearing in the present disclosure are explained below.
The information stream in the embodiments of the present disclosure, also referred to as a feed stream, is a continuously updated information stream that can push information from Really Simple Syndication (RSS) sources to a user.
The "feed" in the embodiments of the present disclosure may be a content aggregator formed by combining multiple information sources actively subscribed to by the user, helping the user continuously obtain the latest subscribed content; the feed is the interface used in RSS to receive such an information source.
There are many presentation forms for a feed stream, including but not limited to a timeline-based form (timeline) and a smart-ranking-based form (rank). The timeline is the most typical feed stream presentation method and displays content to the user in the chronological order in which the feed stream content is updated; rank calculates a weight for the feed stream content according to certain factors, so as to determine the order in which the feed stream content is displayed.
In the playback scenario of short video feed streams, the commonly used preloading method is that the client preloads all videos indiscriminately, i.e., all videos are preloaded with the same amount of playback. However, different users have different preloading needs. Therefore, in the playback scenario of complex short video feed streams, indiscriminate video preloading easily leads to wasted traffic and a degraded user experience.
In the feed stream playback scenario, while the current resource is playing, several resources to be watched next are loaded in advance; when the user slides to the next resource, its first frame can be shown quickly, which shortens the time to the first frame, reduces the freezing rate during playback, and can greatly improve the user's viewing experience. In the feed stream playback scenario, the commonly used preloading method is that the client preloads all videos indiscriminately, i.e., all videos are preloaded with the same amount of playback. However, different users have different preloading needs. For example, some users selectively skip part of a video; preloading too much of such a video wastes traffic cost and download time, and takes up preloading time for other videos. Sometimes the preload cache is not large enough, so the video tends to freeze when the user's network fluctuates; the user's viewing behaviour may also change with network conditions, recommendation quality, and the user's current state. Therefore, in the playback scenario of complex short video feed streams, indiscriminate video preloading easily leads to wasted traffic and a degraded user experience.
To solve the above problems, the embodiments of the present disclosure provide a resource preloading method, apparatus, device, and storage medium, which intelligently predict the playback amount of unplayed resources to dynamically provide users with a personalized preloading scheme, thereby saving unnecessary traffic waste and improving user experience.
FIG. 1 is an example diagram of an application scenario provided by an embodiment of the present disclosure. As shown in FIG. 1, the client 101 is configured to execute the resource preloading method described in any embodiment of the present disclosure; the client 101 can send resource preloading information to the server 102 through a network, preload the resources provided by the server 102 in advance, and play the preloaded resources through an output device.
The above-mentioned output device may be an output device built into the client 101, such as a touch screen; it may also be an external output device connected to the client 101 through a communication line, such as a projector or a digital television (TV).
In this embodiment, the client 101 is described by taking a computer device as an example, and the computer device may be a computer device including a processor, a memory, an input device, and an output device.
The resource preloading method provided by the embodiments of the present disclosure may also be applied to other smart devices having the same functions as the computer device. The embodiments of the present disclosure describe, but do not limit, the application scenarios and application devices of the above resource preloading.
可选的,本公开实施例中的客户端可以包括但不限于诸如移动电话、笔记本电脑、数字广播接收器、个人数字助理(Personal Digital Assistant,PDA)、平板电脑(Portable Android Device,PAD)、便携式多媒体播放器(Portable Multimedia Player,PMP)、车载终端(例如车载导航终端)等等的移动终端以及诸如数字TV、台式计算机等等的固定终端。Optionally, the client in this embodiment of the present disclosure may include, but is not limited to, such as a mobile phone, a notebook computer, a digital broadcast receiver, a personal digital assistant (Personal Digital Assistant, PDA), a tablet computer (Portable Android Device, PAD), Portable Multimedia Player (PMP), mobile terminals such as in-vehicle terminals (such as in-vehicle navigation terminals), etc., and stationary terminals such as digital TVs, desktop computers, and the like.
Optionally, a typical scenario to which the embodiments of the present disclosure apply is a feed media stream, in which a user watching the current resource can switch to the next resource by sliding down, or switch to the previous resource by sliding up.
Optionally, another typical scenario to which this embodiment applies is that a mobile phone window displays multiple resources; the user can browse different resources by scrolling up and down and tap to watch them.
The resource preloading method, apparatus, device, and storage medium provided by the embodiments of the present disclosure are described below with reference to the embodiments.
FIG. 2 is a flowchart of a resource preloading method provided by an embodiment of the present disclosure. This embodiment is applicable to dynamically preloading video resources in a feed stream. The method may be executed by a resource preloading apparatus, which may be implemented in software and/or hardware. The resource preloading method is applied to a client.
In this embodiment, a feed stream resource playback application may be installed on the client and used to play resources. Exemplarily, a resource preloading apparatus may be added to the feed stream resource playback application to execute any resource preloading method provided in the embodiments of the present disclosure.
As shown in FIG. 2, the resource preloading method provided by this embodiment includes steps S11 and S12.
S11: Determine, based on a prediction model, the expected playback amount of each unplayed resource in the current information stream.
The information stream may be a list of resources that keeps loading as the user slides down, also called a feed stream; each feed entry is an independent resource. The resource may be audio, video, a picture, text, an actionable card, or any combination of two or more of the above.
The current feed stream refers to the feed stream that the server has delivered to the client and that the client is playing. The current feed stream mainly includes played resources, the current resource, and unplayed resources. Played resources are resources that the client has already played and the user has already watched; the current resource is the resource that the client is playing and the user is watching; unplayed resources are resources that the client has not yet played.
The expected playback amount may be the predicted amount of the resource that the user is likely to watch. The expected playback amount may be an expected number of bytes to be played, for example, 2 megabytes (MB); it may also be an expected playback duration, for example, 10 seconds. The expected playback amount may also be expressed in other metrics, which is not limited in this application.
The expected playback amount of each unplayed resource means that each unplayed resource has its own corresponding expected playback amount; the expected playback amounts of different unplayed resources may be the same or different, which is not limited in this embodiment.
In one implementation, a prediction model is obtained through pre-training. Features such as playback-related historical information, user type, and video type are input into the prediction model, and the prediction model outputs the expected playback amount of each unplayed resource in the current feed stream.
In another implementation, a prediction model is obtained through pre-training. Features such as playback-related historical information, user type, and video type are input into the prediction model, and the prediction model outputs the expected playback ratio of each unplayed resource in the current feed stream; the expected playback amount is then determined based on the expected playback ratio. For each unplayed resource, the product of the expected playback ratio and the amount of that unplayed resource may be determined as its expected playback amount.
The two implementations differ only in the output of the prediction model, which is simply a matter of the model parameters set during training; the output used at inference time only needs to be consistent with the output the model was trained to produce.
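As a rough illustration of the second implementation, the following Python sketch (all names are hypothetical, not taken from the disclosure) multiplies a predicted playback ratio by the size of an unplayed resource to obtain an expected playback amount in bytes and/or seconds:

```python
from typing import Optional


def expected_playback_amount(predicted_ratio: float,
                             total_bytes: Optional[int] = None,
                             total_duration_s: Optional[float] = None) -> dict:
    """Multiply the model's predicted playback ratio by the size of the unplayed resource."""
    ratio = max(0.0, min(1.0, predicted_ratio))  # clamp to a valid ratio
    result = {}
    if total_bytes is not None:
        result["expected_bytes"] = int(total_bytes * ratio)
    if total_duration_s is not None:
        result["expected_seconds"] = total_duration_s * ratio
    return result


# Example: an 8 MB, 30-second video with a predicted ratio of 0.25
print(expected_playback_amount(0.25, total_bytes=8 * 1024 * 1024, total_duration_s=30.0))
```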
In an embodiment of the present disclosure, a method for training the prediction model is provided, including the following step: training a neural network model with features such as the user's previous viewing behaviors and operations, user type, and video type to obtain the prediction model.
S12: For each unplayed resource, preload the unplayed resource based on the expected playback amount.
In this embodiment, preloading the unplayed resource based on the expected playback amount includes: determining preloading configuration information based on the expected playback amount, and preloading the unplayed resource based on the preloading configuration information.
In one implementation, when it is predicted that the playback duration of the user's subsequent resource will be very short, the size of the preloaded data can be selectively reduced, for example to 300 kilobytes (KB), which saves traffic. Conversely, when it is predicted that the user will watch the unplayed resource for a long time, the size of the preloaded data can be selectively increased, for example to 2 megabytes (MB), so as to reduce stalling during playback.
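A minimal sketch of how such a preloading configuration could be chosen from the expected playback duration; the thresholds below are illustrative assumptions, only the 300 KB and 2 MB endpoints come from the example above:

```python
def choose_preload_bytes(expected_seconds: float) -> int:
    """Map an expected playback duration to a preload size in bytes (illustrative thresholds)."""
    KB, MB = 1024, 1024 * 1024
    if expected_seconds < 3:    # the user is likely to skip quickly
        return 300 * KB
    if expected_seconds < 15:   # moderate interest
        return 1 * MB
    return 2 * MB               # likely to watch for a long time


print(choose_preload_bytes(2.0))   # 307200
print(choose_preload_bytes(20.0))  # 2097152
```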
The preloading order of the unplayed resources may be determined according to the order of the unplayed resources in the feed stream. Alternatively, a priority may be calculated for each unplayed resource, and the preloading order may be determined based on the priorities. In this embodiment, the preloading order of the unplayed resources is described by way of example, not limitation.
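For the priority-based variant, one possible ordering, shown in the sketch below, is to preload resources that sit closer in the feed and have a larger expected playback amount first; the scoring rule is an invented assumption, not part of the disclosure:

```python
from typing import Dict, List


def preload_order(unplayed: List[Dict]) -> List[Dict]:
    """Sort unplayed resources so that nearer, longer-expected resources are preloaded first."""
    def priority(r: Dict) -> float:
        # Larger expected amount and smaller distance from the current position raise priority.
        return r["expected_bytes"] / (1 + r["feed_position"])
    return sorted(unplayed, key=priority, reverse=True)
```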
An embodiment of the present disclosure provides a resource preloading method, including: determining, based on a prediction model, the expected playback amount of each unplayed resource in the current information stream; and for each unplayed resource, preloading the unplayed resource based on the expected playback amount. In the technical solutions of the embodiments of the present disclosure, the playback amount of unplayed resources is predicted intelligently and a personalized preloading scheme is dynamically provided for the user, which saves unnecessary traffic and improves the user experience.
On the basis of the above embodiment, this embodiment of the present disclosure further describes the resource preloading method. FIG. 3 is a flowchart of a resource preloading method provided by an embodiment of the present disclosure. As shown in FIG. 3, the resource preloading method provided by this embodiment mainly includes the following steps.
S21: Detect whether the currently playing resource has finished loading.
In this embodiment, the currently playing resource may be understood as the resource currently being played on the display screen of the client. Loading being complete means that all content of the currently playing resource has been cached to the client.
In one implementation, detecting whether the currently playing resource has finished loading may be detecting whether an identifier indicating that the currently playing resource has finished loading is received; the identifier may be generated by the client itself or sent by the server.
In another implementation, detecting whether the currently playing resource has finished loading may be detecting whether the number of bytes of the currently playing resource that have been loaded equals the total number of bytes of the currently playing resource.
In yet another implementation, detecting whether the currently playing resource has finished loading may be detecting whether the loaded duration of the currently playing resource equals the total duration of the currently playing resource.
This embodiment only illustrates, rather than limits, the methods for determining whether a resource has finished loading; other methods may be selected according to the actual situation.
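A compact sketch of the byte-based and duration-based checks described above; the field and parameter names are assumptions for illustration:

```python
from dataclasses import dataclass


@dataclass
class PlaybackResource:
    loaded_bytes: int
    total_bytes: int
    loaded_seconds: float
    total_seconds: float


def is_fully_loaded(res: PlaybackResource, by: str = "bytes") -> bool:
    """Return True when the currently playing resource is fully cached on the client."""
    if by == "bytes":
        return res.loaded_bytes >= res.total_bytes
    if by == "duration":
        return res.loaded_seconds >= res.total_seconds
    raise ValueError(f"unknown check mode: {by}")
```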
S22: After detecting that the currently playing resource has finished loading, determine, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream.
In one implementation, if an identifier indicating that the currently playing resource has finished loading is received, it is determined that the current resource has finished loading.
In another implementation, if it is detected that the number of loaded bytes of the currently playing resource equals the total number of bytes of the currently playing resource, it is determined that the current resource has finished loading.
In yet another implementation, if it is detected that the loaded duration of the currently playing resource equals the total duration of the currently playing resource, it is determined that the current resource has finished loading.
If, before it is detected that the current resource has finished loading, it is detected that the user has switched to a new resource, that is, the currently playing resource has been updated, the resource switched to is taken as the currently playing resource, and the operation of detecting whether the currently playing resource has finished loading is performed.
In this embodiment, the resource preloading operation is performed only after the currently playing resource has finished downloading. In this way, stalling of the currently playing resource can be avoided and the user experience can be improved.
S23: For each unplayed resource, while preloading the unplayed resource based on the expected playback amount, detect whether the current user switches to a new resource.
In this embodiment, a new resource refers to a resource other than the currently playing resource. Detecting whether the current user switches to a new resource may be detecting whether the user inputs a switching instruction, or detecting whether the player has changed the playback resource information.
The switching instruction may be a tap instruction, a slide-down instruction, a slide-up instruction, a slide-left instruction, or a slide-right instruction entered by the user on the client. The playback resource information includes one or more of the following: publisher information of the playback resource, the name of the playback resource, the Internet Protocol (IP) address of the playback resource, data packets of the playback resource, and the like.
S24: After the current user switches to a new resource, take the new resource as the currently playing resource and return to S21.
The current user switching to a new resource may be detected by detecting that the user inputs a switching instruction, or by detecting that the player has changed the playback resource information.
After the current user switches to the new resource, the new resource is taken as the currently playing resource, and the operation of detecting whether the currently playing resource has finished loading is performed.
In this embodiment, downloading the user's currently playing resource has the highest priority. Whenever the user switches to a new resource for playback, the currently playing resource is downloaded completely first, and only then is the preloading operation performed. In this way, stalling of the currently playing resource can be avoided and the user experience can be improved.
On the basis of the above embodiments, the embodiments of the present disclosure provide two methods for determining the expected playback amount of each unplayed resource in the current information stream.
In one implementation, determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream includes: determining the current user type based on the prediction model; and determining the expected playback amount of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback amounts.
In this embodiment, the user type may be understood as the type corresponding to users who play a specified type of resource for a preset duration.
The pre-stored correspondence between user types and playback amounts is a one-to-one correspondence, but different user types may correspond to the same playback amount.
In this embodiment, after the user type is determined, the correspondence between user types and playback amounts is looked up based on the current user type, and the playback amount corresponding to the current user type is determined as the expected playback amount.
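A trivial sketch of the lookup described above; the user type names and the mapping from user type to playback amount are purely illustrative assumptions:

```python
KB, MB = 1024, 1024 * 1024

# Hypothetical pre-stored correspondence between user types and playback amounts (in bytes).
PLAYBACK_AMOUNT_BY_USER_TYPE = {
    "fast_skipper": 300 * KB,
    "casual_viewer": 1 * MB,
    "binge_watcher": 2 * MB,
}


def expected_amount_for(user_type: str) -> int:
    """Look up the expected playback amount for the current user type."""
    return PLAYBACK_AMOUNT_BY_USER_TYPE[user_type]


print(expected_amount_for("fast_skipper"))  # 307200
```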
In another implementation, determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream includes: determining the current user type based on the prediction model; determining the expected playback ratio of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback ratios; and for each unplayed resource, determining the expected playback amount based on the expected playback ratio.
In this embodiment, after the user type is determined, the correspondence between user types and playback ratios is looked up based on the current user type to determine the playback ratio corresponding to the current user type. The playback ratio may be understood as the ratio of the played duration of a class of resources to the total duration of those resources.
In this embodiment, after the playback ratio is obtained, the product of the total duration of an unplayed resource and the playback ratio is taken as the expected playback duration of that unplayed resource; alternatively, the product of the total number of bytes of the unplayed resource and the playback ratio is taken as the expected number of bytes to be played of that unplayed resource.
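Following the same conventions, the ratio-based variant could look like the following sketch; the ratio table and resource field names are made-up examples, not values from the disclosure:

```python
# Hypothetical pre-stored correspondence between user types and playback ratios.
PLAYBACK_RATIO_BY_USER_TYPE = {
    "fast_skipper": 0.2,
    "casual_viewer": 0.5,
    "binge_watcher": 0.9,
}


def expected_amounts_for(user_type, unplayed_resources):
    """Compute the expected playback amount of every unplayed resource for one user type.

    Each resource is assumed to be a dict with resource_id, total_seconds, and total_bytes."""
    ratio = PLAYBACK_RATIO_BY_USER_TYPE[user_type]
    return [
        {
            "resource_id": r["resource_id"],
            "expected_seconds": r["total_seconds"] * ratio,
            "expected_bytes": int(r["total_bytes"] * ratio),
        }
        for r in unplayed_resources
    ]
```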
Determining the current user type based on the prediction model includes: acquiring playback-related historical information and resource information, where the resource information includes the playback duration of each played resource, the type of each unplayed resource, and the duration of each unplayed resource; and inputting the playback-related historical information and the resource information into the prediction model to obtain the current user type.
The resource information includes the playback duration of each played resource, the type of each unplayed resource, and the duration of each unplayed resource. The playback-related historical information refers to information obtained when the client played historical resources, for example, the number of likes of historical resources, comments on historical resources, and the like. The playback-related historical information may also include preference information provided by the user.
Inputting the playback-related historical information and the resource information into the prediction model to obtain the current user type avoids redundant user-type determination steps, allows the current user type to be determined conveniently and quickly, and improves the running speed of the device.
Inputting the playback-related historical information and the resource information into the prediction model to obtain the current user type includes: inputting the playback-related historical information and the resource information into the prediction model to obtain multiple user type probabilities, where a user type probability is the probability that the current user belongs to one type of user; and determining the user type corresponding to the largest of the multiple user type probabilities as the current user type.
In this embodiment, the prediction model outputs multiple user type probabilities; the probabilities are compared, or sorted, to obtain the maximum user type probability, and the user type corresponding to the maximum user type probability is determined as the current user type.
In this embodiment, determining the current user type according to the multiple user type probabilities output by the prediction model can improve the accuracy of user type determination.
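The "take the largest probability" step amounts to an argmax over the model's output distribution; a minimal sketch with assumed user type names:

```python
def pick_user_type(type_probabilities: dict) -> str:
    """Return the user type with the largest predicted probability."""
    return max(type_probabilities, key=type_probabilities.get)


print(pick_user_type({"fast_skipper": 0.1, "casual_viewer": 0.3, "binge_watcher": 0.6}))
# -> "binge_watcher"
```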
FIG. 4 is a schematic structural diagram of a prediction model provided by an embodiment of the present disclosure. As shown in FIG. 4, the prediction model mainly includes an input layer, n intermediate layers, and an output layer. The feature parameters received by the input layer mainly include playback-related historical information, the playback duration of each played resource, the type of each unplayed resource, the duration of each unplayed resource, and the like. After learning and prediction through the n intermediate layers, the output layer outputs the probability that the user belongs to the n-th class, where n is any value in 1, 2, ..., k and k is the total number of user types.
The probability values output by the prediction model are compared, and the user type corresponding to the maximum probability is determined as the current user type.
In this embodiment, the technical solution of inputting multi-dimensional features into a pre-trained neural network model to obtain the expected playback duration of the unplayed resources in the current feed stream under the current user's behavior pattern can improve the accuracy with which the expected playback duration is determined.
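A hedged PyTorch-style sketch of the kind of model FIG. 4 describes; the feature dimension, layer sizes, and number of user types k are placeholder assumptions, not values from the disclosure:

```python
import torch
import torch.nn as nn


class UserTypePredictor(nn.Module):
    """Input layer -> n hidden layers -> softmax over k user types, as in FIG. 4."""

    def __init__(self, feature_dim: int = 32, hidden_dim: int = 64,
                 n_hidden: int = 3, k_user_types: int = 5):
        super().__init__()
        layers, in_dim = [], feature_dim
        for _ in range(n_hidden):
            layers += [nn.Linear(in_dim, hidden_dim), nn.ReLU()]
            in_dim = hidden_dim
        layers.append(nn.Linear(in_dim, k_user_types))
        self.net = nn.Sequential(*layers)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, feature_dim) vector of playback history and resource features
        return torch.softmax(self.net(x), dim=-1)  # probabilities over the k user types


model = UserTypePredictor()
probs = model(torch.randn(1, 32))   # one user's feature vector
print(probs.argmax(dim=-1))         # index of the most likely user type
```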
In an application example, the unplayed resources are unplayed video resources. FIG. 5 is a flowchart of video preloading provided by an embodiment of the present disclosure. As shown in FIG. 5, when the user starts to watch a video, the video the user is currently playing is downloaded first. If the user switches to the next video before the currently playing video has finished downloading, the next video is taken as the currently playing video and is downloaded first. If the user has not moved on to the next video after the currently playing video has finished downloading, the playback-related historical information and video information are fed into the trained prediction model to predict the expected playback amount. A preloading configuration is selected according to the prediction result of the prediction model and video preloading is started. If the user switches to the next video during preloading, the next video is taken as the currently playing video and is downloaded first; otherwise preloading continues until it finishes.
In the above flow, downloading the video the user is currently playing has the highest priority. Whenever the user switches to a new video for playback, the currently playing video is downloaded completely first, and only then is the preloading operation performed.
Compared with indiscriminate preloading, the video preloading method provided by the embodiments of the present disclosure is more personalized. For example, preloading in the related art indiscriminately loads 1 MB of every subsequent video. With the video preloading method provided by the embodiments of the present disclosure, when it is predicted that the user will watch a subsequent video only briefly, it is sufficient to load 300 KB of that video, which saves traffic; when it is predicted that the user will watch a subsequent video for a long time, the preload size can be selectively increased, for example to 2 MB, to reduce stalling during playback.
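Putting the pieces above together, the flow of FIG. 5 could be sketched as the following event-driven loop; function names such as download, user_switched, and predict_expected_amounts are placeholders for illustration, not APIs defined by the disclosure:

```python
def feed_preload_loop(player, model):
    """Illustrative control loop for FIG. 5: finish the current video first, then preload."""
    while player.is_active():
        current = player.current_video()
        player.download(current)                    # current video always has top priority
        if player.user_switched():                  # user moved on before the download finished
            continue                                # the new video becomes the current one
        features = player.collect_features()        # playback history + video info
        expected = model.predict_expected_amounts(features)
        for video, amount in expected.items():      # preload each unplayed video in turn
            player.preload(video, amount)
            if player.user_switched():              # abort preloading when the user switches
                break
```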
FIG. 6 is a structural diagram of a resource preloading apparatus provided by an embodiment of the present disclosure. This embodiment is applicable to dynamically preloading video resources in a feed stream, and the resource preloading apparatus may be implemented in software and/or hardware. The resource preloading method is applied to a client.
As shown in FIG. 6, the resource preloading apparatus provided by this embodiment mainly includes an expected playback amount determination module 61 and a preloading module 62.
The expected playback amount determination module 61 is configured to determine, based on a prediction model, the expected playback amount of each unplayed resource in the current information stream.
The preloading module 62 is configured to, for each unplayed resource, preload the unplayed resource based on the expected playback amount.
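A skeletal sketch of how these two modules might map onto client-side classes; the method names (predict_expected_amounts, prefetch) and collaborator objects are assumptions introduced only for illustration:

```python
class ExpectedPlaybackAmountModule:
    """Module 61: determines the expected playback amount of each unplayed resource."""

    def __init__(self, model):
        self.model = model

    def determine(self, feed_state):
        # feed_state bundles playback history and unplayed-resource information
        return self.model.predict_expected_amounts(feed_state)


class PreloadModule:
    """Module 62: preloads each unplayed resource based on its expected playback amount."""

    def __init__(self, downloader):
        self.downloader = downloader

    def preload(self, expected_amounts):
        for resource_id, amount in expected_amounts.items():
            self.downloader.prefetch(resource_id, amount)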
The resource preloading apparatus provided by the embodiment of the present disclosure is configured to perform the following operations: determining, based on a prediction model, the expected playback amount of each unplayed resource in the current information stream; and for each unplayed resource, preloading the unplayed resource based on the expected playback amount. In the technical solutions of the embodiments of the present disclosure, the playback amount of unplayed resources is predicted intelligently and a personalized preloading scheme is dynamically provided for the user, which saves unnecessary traffic and improves the user experience.
In one implementation, the expected playback amount determination module 61 is configured to detect whether the currently playing resource has finished loading, and, after detecting that the currently playing resource has finished loading, to perform the step of determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream.
In one implementation, the apparatus includes a switch detection module configured to detect, while the unplayed resources are being preloaded based on the expected playback amount, whether the current user switches to a new resource.
The expected playback amount determination module 61 is configured to, after the current user switches to a new resource, take the new resource as the currently playing resource and perform the step of detecting whether the currently playing resource has finished loading.
In one implementation, the expected playback amount determination module 61 includes:
a user type determination unit, configured to determine the current user type based on the prediction model; and
an expected playback amount determination unit, configured to determine the expected playback amount of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback amounts.
In another implementation, the expected playback amount determination module 61 includes:
a user type determination unit, configured to determine the current user type based on the prediction model;
an expected playback ratio unit, configured to determine the expected playback ratio of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback ratios; and
an expected playback amount determination unit, configured to determine, for each unplayed resource, the expected playback amount based on the expected playback ratio.
In one implementation, the user type determination unit is configured to acquire playback-related historical information and resource information, and to input the playback-related historical information and the resource information into the prediction model to obtain the current user type, where the resource information includes the playback duration of each played resource, the type of each unplayed resource, and the duration of each unplayed resource.
In one implementation, inputting the playback-related historical information and the resource information into the prediction model to obtain the current user type includes:
inputting the playback-related historical information and the resource information into the prediction model to obtain multiple user type probabilities, where a user type probability is the probability that the current user belongs to one type of user; and
determining the user type corresponding to the largest of the multiple user type probabilities as the current user type.
The resource preloading apparatus provided by this embodiment can execute the resource preloading method provided by any embodiment of the present disclosure, and has the functional modules and effects corresponding to executing that method.
Referring now to FIG. 7, a schematic structural diagram of a resource preloading device 700 (for example, the terminal device or server in FIG. 7) suitable for implementing the embodiments of the present disclosure is shown. Terminal devices in the embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs, PADs, PMPs, and in-vehicle terminals (for example, in-vehicle navigation terminals), as well as fixed terminals such as digital TVs and desktop computers. The electronic device shown in FIG. 7 is merely an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 7, the electronic device 700 may include a processing apparatus (for example, a central processing unit, a graphics processor, etc.) 701, which can perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 702 or a program loaded from a storage apparatus 708 into a random access memory (RAM) 703. The RAM 703 also stores various programs and data required for the operation of the electronic device 700. The processing apparatus 701, the ROM 702, and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.
The following apparatuses may be connected to the I/O interface 705: an input apparatus 706 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, a gyroscope, and the like; an output apparatus 707 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, and the like; a storage apparatus 708 including, for example, a magnetic tape, a hard disk, and the like; and a communication apparatus 709. The communication apparatus 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 7 shows an electronic device 700 having various apparatuses, it is not required that all of the illustrated apparatuses be implemented or provided; more or fewer apparatuses may alternatively be implemented or provided.
According to the embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, an embodiment of the present disclosure includes a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for executing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded from a network and installed through the communication apparatus 709, installed from the storage apparatus 708, or installed from the ROM 702. When the computer program is executed by the processing apparatus 701, the above functions defined in the methods of the embodiments of the present disclosure are executed.
The above computer-readable medium of the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. The computer-readable storage medium may include, but is not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an erasable programmable read-only memory (EPROM), a flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, in which computer-readable program code is carried. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium may send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to a wire, an optical cable, radio frequency (RF), and the like, or any suitable combination of the above. The storage medium may be a non-transitory storage medium.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as the HyperText Transfer Protocol (HTTP), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (LAN), a wide area network (WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be included in the above electronic device, or may exist independently without being assembled into the electronic device.
The above computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device is caused to:
determine, based on a prediction model, the expected playback amount of each unplayed resource in the current information stream; and
for each unplayed resource, preload the unplayed resource based on the expected playback amount.
The computer program code for executing the operations of the present disclosure may be written in one or more programming languages or a combination thereof. The programming languages include, but are not limited to, object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a LAN or a WAN, or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the accompanying drawings illustrate the architectures, functions, and operations of possible implementations of the systems, methods, and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowcharts or block diagrams may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing specified logical functions. In some alternative implementations, the functions marked in the blocks may also occur in an order different from that marked in the accompanying drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that executes specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. In one case, the name of a unit does not constitute a limitation on the unit itself.
The functions described above herein may be executed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), and the like.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. The machine-readable storage medium includes an electrical connection based on one or more wires, a portable computer disk, a hard disk, a RAM, a ROM, an EPROM, a flash memory, an optical fiber, a portable CD-ROM, an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, a resource preloading method, apparatus, device, and medium are provided, including:
determining, based on a prediction model, the expected playback amount of each unplayed resource in the current information stream; and
for each unplayed resource, preloading the unplayed resource based on the expected playback amount.
According to one or more embodiments of the present disclosure, a resource preloading method, apparatus, device, and medium are provided, where determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream includes:
detecting whether the currently playing resource has finished loading; and
after detecting that the currently playing resource has finished loading, performing the step of determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream.
According to one or more embodiments of the present disclosure, a resource preloading method, apparatus, device, and medium are provided, where, while the unplayed resource is being preloaded based on the expected playback amount, the method further includes:
detecting whether the current user switches to a new resource; and
after the current user switches to the new resource, taking the new resource as the currently playing resource and performing the step of detecting whether the currently playing resource has finished loading.
According to one or more embodiments of the present disclosure, a resource preloading method, apparatus, device, and medium are provided, where determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream includes:
determining the current user type based on the prediction model; and
determining the expected playback amount of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback amounts.
According to one or more embodiments of the present disclosure, a resource preloading method, apparatus, device, and medium are provided, where determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream includes:
determining the current user type based on the prediction model;
determining the expected playback ratio of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback ratios; and
for each unplayed resource, determining the expected playback amount based on the expected playback ratio.
According to one or more embodiments of the present disclosure, a resource preloading method, apparatus, device, and medium are provided, where determining the current user type based on the prediction model includes:
acquiring playback-related historical information and resource information, where the resource information includes the playback duration of each played resource, the type of each unplayed resource, and the duration of each unplayed resource; and
inputting the playback-related historical information and the resource information into the prediction model to obtain the current user type.
According to one or more embodiments of the present disclosure, a resource preloading method, apparatus, device, and medium are provided, where inputting the playback-related historical information and the resource information into the prediction model to obtain the current user type includes:
inputting the playback-related historical information and the resource information into the prediction model to obtain multiple user type probabilities, where a user type probability is the probability that the current user belongs to one type of user; and
determining the user type corresponding to the largest of the multiple user type probabilities as the current user type.
In addition, although multiple operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. In certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although the above discussion contains multiple implementation details, these should not be construed as limiting the scope of the present disclosure. Some features described in the context of separate embodiments may also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment may also be implemented in multiple embodiments individually or in any suitable sub-combination.
Although the subject matter has been described in language specific to structural features and/or methodological logical actions, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (10)

  1. A resource preloading method, comprising:
    determining, based on a prediction model, an expected playback amount of each unplayed resource in a current information stream; and
    for each unplayed resource, preloading the unplayed resource based on the expected playback amount.
  2. The method according to claim 1, wherein determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream comprises:
    detecting whether a currently playing resource has finished loading; and
    after detecting that the currently playing resource has finished loading, determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream.
  3. The method according to claim 2, wherein, while the unplayed resource is being preloaded based on the expected playback amount, the method further comprises:
    detecting whether a current user switches to a new resource; and
    after the current user switches to the new resource, taking the new resource as the currently playing resource and detecting whether the currently playing resource has finished loading.
  4. The method according to claim 1, wherein determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream comprises:
    determining a current user type based on the prediction model; and
    determining the expected playback amount of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback amounts.
  5. The method according to claim 1, wherein determining, based on the prediction model, the expected playback amount of each unplayed resource in the current information stream comprises:
    determining a current user type based on the prediction model;
    determining an expected playback ratio of each unplayed resource in the current information stream based on the current user type and a pre-stored correspondence between user types and playback ratios; and
    for each unplayed resource, determining the expected playback amount based on the expected playback ratio.
  6. The method according to claim 4 or 5, wherein determining the current user type based on the prediction model comprises:
    acquiring playback-related historical information and resource information, wherein the resource information comprises a playback duration of each played resource, a type of each unplayed resource, and a duration of each unplayed resource; and
    inputting the playback-related historical information and the resource information into the prediction model to obtain the current user type.
  7. The method according to claim 6, wherein inputting the playback-related historical information and the resource information into the prediction model to obtain the current user type comprises:
    inputting the playback-related historical information and the resource information into the prediction model to obtain a plurality of user type probabilities, wherein a user type probability is a probability that the current user belongs to one type of user; and
    determining a user type corresponding to the largest of the plurality of user type probabilities as the current user type.
  8. A resource preloading apparatus, comprising:
    an expected playback amount determination module, configured to determine, based on a prediction model, an expected playback amount of each unplayed resource in a current information stream; and
    a preloading module, configured to, for each unplayed resource, preload the unplayed resource based on the expected playback amount.
  9. A resource preloading device, comprising:
    at least one processor; and
    a memory configured to store at least one program,
    wherein, when the at least one program is executed by the at least one processor, the at least one processor is caused to implement the resource preloading method according to any one of claims 1-7.
  10. A computer-readable storage medium, wherein the computer-readable storage medium stores a computer program, and when the computer program is executed by a processor, the resource preloading method according to any one of claims 1-7 is implemented.
PCT/CN2022/077202 2021-03-12 2022-02-22 Resource preloading method, apparatus and device, and storage medium WO2022188618A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110269706.8A CN115086705A (en) 2021-03-12 2021-03-12 Resource preloading method, device, equipment and storage medium
CN202110269706.8 2021-03-12

Publications (1)

Publication Number Publication Date
WO2022188618A1 true WO2022188618A1 (en) 2022-09-15

Family

ID=83227369

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/077202 WO2022188618A1 (en) 2021-03-12 2022-02-22 Resource preloading method, apparatus and device, and storage medium

Country Status (2)

Country Link
CN (1) CN115086705A (en)
WO (1) WO2022188618A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117579898B (en) * 2023-11-15 2024-11-05 书行科技(北京)有限公司 Video processing method, device, electronic equipment and storage medium

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102880688B (en) * 2012-09-14 2016-07-27 北京百度网讯科技有限公司 A kind of method for webpage is estimated, device and equipment
WO2018010119A1 (en) * 2016-07-13 2018-01-18 华为技术有限公司 Video service resource allocation method and device
CN107886132B (en) * 2017-11-24 2021-07-16 云南大学 Time series decomposition method and system for solving music traffic prediction
CN111523920B (en) * 2019-04-04 2024-02-23 维肯智能(深圳)有限公司 Information pushing method and device and terminal equipment
CN110222975A (en) * 2019-05-31 2019-09-10 北京奇艺世纪科技有限公司 A kind of loss customer analysis method, apparatus, electronic equipment and storage medium
CN110704674B (en) * 2019-09-05 2022-11-25 苏宁云计算有限公司 Video playing integrity prediction method and device
CN110825957B (en) * 2019-09-17 2023-04-11 中国平安人寿保险股份有限公司 Deep learning-based information recommendation method, device, equipment and storage medium
CN111735472A (en) * 2020-05-22 2020-10-02 百度在线网络技术(北京)有限公司 Navigation audio playing method, device, equipment and computer storage medium
CN112135169B (en) * 2020-09-18 2022-11-11 脸萌有限公司 Media content loading method, device, equipment and medium

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2016034958A1 (en) * 2014-09-05 2016-03-10 Altron Tmt (Pty) Limited Media player with local mass data storage device and browser
CN107888981A (en) * 2017-11-16 2018-04-06 北京小米移动软件有限公司 Audio frequency and video preload method, apparatus, equipment and storage medium
WO2019133050A1 (en) * 2017-12-28 2019-07-04 Rovi Guides, Inc. Systems and methods for adaptively buffering media content at a digital video recorder
CN108322819A (en) * 2018-01-18 2018-07-24 北京奇艺世纪科技有限公司 Predict the method and device of user behavior
CN109618216A (en) * 2018-12-25 2019-04-12 北京微播视界科技有限公司 Show method, apparatus, equipment and the storage medium of video stress state mark
CN112004120A (en) * 2019-05-27 2020-11-27 广州虎牙信息科技有限公司 Method, device, equipment and storage medium for predicting platform network resource playing amount
CN110209843A (en) * 2019-05-31 2019-09-06 腾讯科技(深圳)有限公司 Multimedia resource playback method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115086705A (en) 2022-09-20

Similar Documents

Publication Publication Date Title
WO2022237908A1 (en) Information display method and apparatus, electronic device, and storage medium
WO2021147462A1 (en) Label display method and apparatus, electronic device, and computer readable medium
WO2023051297A1 (en) Information display method and apparatus, electronic device, and storage medium
WO2022127523A1 (en) Video playback method and apparatus, device, and medium
CN112135169B (en) Media content loading method, device, equipment and medium
CN112312225B (en) Information display method and device, electronic equipment and readable medium
CN110516159B (en) Information recommendation method and device, electronic equipment and storage medium
WO2022078159A1 (en) Broadcasting method and device for live broadcast
EP4428718A1 (en) Video processing method and apparatus, electronic device, and storage medium
WO2023134559A1 (en) Comment prompting method and apparatus, and electronic device, storage medium and program product
CN110825481A (en) Method and device for displaying page information corresponding to page tag and electronic equipment
CN114827682B (en) Screen projection method, system, equipment and storage medium
CN114443897A (en) Video recommendation method and device, electronic equipment and storage medium
CN111290819A (en) Method and device for displaying operation prompt and electronic equipment
WO2023155716A1 (en) Video playing method and apparatus, and electronic device, storage medium and program product
WO2022188618A1 (en) Resource preloading method, apparatus and device, and storage medium
CN113542336A (en) Information switching sharing method and device, electronic equipment and storage medium
WO2023029821A1 (en) Video buffer playing method and apparatus, and electronic device and storage medium
WO2023151682A1 (en) Application start method and apparatus, electronic device, storage medium and program product
CN112181249A (en) Play control method and device, electronic equipment and storage medium
CN116304427A (en) Preloading method and device, storage medium and electronic equipment
CN112637668B (en) Video playing method, device, equipment and medium
CN116319932A (en) Training method, device, equipment and storage medium of content push model
CN116033009A (en) Application pushing method, device, equipment and storage medium
CN115220849A (en) Page display method, page display device, electronic equipment, storage medium and program product

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
    Ref document number: 22766144; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
    Ref country code: DE
122 Ep: pct application non-entry in european phase
    Ref document number: 22766144; Country of ref document: EP; Kind code of ref document: A1