CN115225974A - Video processing method and device, electronic equipment and storage medium - Google Patents


Info

Publication number: CN115225974A
Authority: CN (China)
Prior art keywords: video, played, rendering, list, identifier
Legal status: Granted; Active
Application number: CN202210860757.2A
Other languages: Chinese (zh)
Other versions: CN115225974B (en)
Inventor: 杨丹
Current Assignee: Ping An Life Insurance Company of China Ltd
Original Assignee: Ping An Life Insurance Company of China Ltd
Application filed by Ping An Life Insurance Company of China Ltd
Priority to CN202210860757.2A
Publication of CN115225974A; application granted, published as CN115225974B

Classifications

    • H04N 21/4825: End-user interface for program selection using a list of items to be played back in a given order, e.g. playlists (H Electricity; H04 Electric communication technique; H04N Pictorial communication, e.g. television; H04N 21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N 21/40 Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; H04N 21/47 End-user applications; H04N 21/482 End-user interface for program selection)
    • H04N 21/431: Generation of visual interfaces for content selection or interaction; content or additional data rendering (under H04N 21/43 Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream)
    • H04N 21/44012: Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs (under H04N 21/44 Processing of video elementary streams)

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Human Computer Interaction (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The embodiments of the present application provide a video processing method and apparatus, an electronic device, and a storage medium, which belong to the technical field of video processing. The method comprises: acquiring a video loading instruction, where the video loading instruction comprises a policy parameter; determining a corresponding target policy according to the policy parameter; acquiring a corresponding video data set from a background according to the target policy, where target policies correspond one-to-one with video data sets; and rendering the video in the video data set to a playing page. The video processing procedure is simple: when the front-end page loads a video, it does not need to distinguish videos from different data platforms, which reduces development complexity.

Description

Video processing method and device, electronic equipment and storage medium
Technical Field
The present application relates to the field of video processing technologies, and in particular, to a video processing method and apparatus, an electronic device, and a storage medium.
Background
In current short-video applications, there are multiple service modules that can trigger access to a video playing page, such as a home video list, a my-video list, and a search.
However, the video data sets corresponding to different service modules originate from different back-end platforms, and these platforms do not communicate with each other. The front end of the video playing page therefore has to distinguish the service modules before playing a video and then obtain the corresponding video data set from the corresponding back-end platform, which increases the complexity of front-end development.
Disclosure of Invention
The present disclosure provides a video processing method, which aims to reduce the complexity of front-end development.
To achieve the above object, a first aspect of an embodiment of the present application provides a video processing method, including:
acquiring a video loading instruction; wherein the video loading instruction comprises a policy parameter;
determining a corresponding target policy according to the policy parameter;
acquiring a corresponding video data set from a background according to the target policy; wherein target policies correspond one-to-one with video data sets;
and rendering the video in the video data set to a playing page.
In some embodiments, rendering the video in the video data set to a playing page comprises:
acquiring a first video identifier array from the video data set, wherein the first video identifier array comprises a plurality of video identifiers, and the video identifiers correspond one-to-one with the videos;
filling each video identifier in the first video identifier array into a to-be-played list;
rendering the video corresponding to the first video identifier in the to-be-played list to the playing page.
In some embodiments, after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further includes:
preloading the videos corresponding to the video identifiers other than the first video identifier in the to-be-played list.
In some embodiments, after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further includes:
if playback of the current video has finished, rendering the video corresponding to the next video identifier in the to-be-played list to the playing page.
In some embodiments, after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further includes:
monitoring a user operation behavior for the playing page;
if the user operation behavior matches a first preset operation behavior, rendering the video corresponding to the next video identifier in the to-be-played list to the playing page.
In some embodiments, after monitoring the user operation behavior for the playing page, the method further comprises:
if the user operation behavior matches a second preset operation behavior, rendering the video corresponding to the previous video identifier in the to-be-played list to the playing page.
In some embodiments, after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further includes:
if the sequence number of the video identifier corresponding to the current video in the to-be-played list is a preset sequence number, acquiring a second video identifier array from the video data set;
and filling each video identifier in the second video identifier array into the to-be-played list.
To achieve the above object, a second aspect of an embodiment of the present application proposes a video processing apparatus, including:
an instruction acquisition module, configured to acquire a video loading instruction; wherein the video loading instruction comprises a policy parameter;
a policy determination module, configured to determine a corresponding target policy according to the policy parameter;
a data acquisition module, configured to acquire a corresponding video data set from a background according to the target policy; wherein target policies correspond one-to-one with video data sets;
and a rendering module, configured to render the video in the video data set to a playing page.
In order to achieve the above object, a third aspect of the embodiments of the present application provides an electronic device, which includes a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for implementing connection communication between the processor and the memory, wherein the program, when executed by the processor, implements the method of the first aspect.
To achieve the above object, a fourth aspect of the embodiments of the present application proposes a storage medium, which is a computer-readable storage medium for computer-readable storage, and stores one or more programs, which are executable by one or more processors to implement the method of the first aspect.
The video processing method and apparatus, the electronic device, and the storage medium provided by the embodiments of the present application acquire a video loading instruction, where the video loading instruction comprises a policy parameter; determine a corresponding target policy according to the policy parameter; acquire a corresponding video data set from a background according to the target policy, where target policies correspond one-to-one with video data sets; and render the video in the video data set to a playing page. The video processing procedure is simple: when the front-end page loads a video, it does not need to distinguish videos from different data platforms, which reduces development complexity.
Drawings
Fig. 1 is a schematic flowchart of a video processing method provided in an embodiment of the present application;
fig. 2 is a schematic diagram of a sub-flow of a video processing method provided by an embodiment of the present application;
fig. 3 is a schematic diagram of a sub-flow of a video processing method according to another embodiment of the present application;
fig. 4 is a schematic diagram of a sub-flow of a video processing method according to another embodiment of the present application;
fig. 5 is a schematic diagram of a sub-flow of a video processing method according to another embodiment of the present application;
fig. 6 is a schematic diagram of a sub-flow of a video processing method according to another embodiment of the present application;
fig. 7 is a schematic diagram of a sub-flow of a video processing method according to another embodiment of the present application;
fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 9 is a schematic diagram of a hardware structure of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more apparent, the present application is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are intended only to explain the present application and not to limit it.
It should be noted that although functional blocks are partitioned in a schematic diagram of an apparatus and a logical order is shown in a flowchart, in some cases, the steps shown or described may be performed in a different order than the partitioning of blocks in the apparatus or the order in the flowchart. The terms first, second and the like in the description and in the claims, and the drawings described above, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein is for the purpose of describing embodiments of the present application only and is not intended to be limiting of the application.
It can be understood that, in the related art, application software for viewing short videos has many front-end service modules that can trigger entry into a video playing page, such as a home video list, a my-video list, and a search. However, the video data sets corresponding to different service modules originate from different back ends, and these back ends do not communicate with each other. As a result, when the front end of the video playing page plays a video, it either needs to distinguish the service modules first and then obtain the corresponding video data set from the corresponding back-end platform, or an intermediate processing end is arranged between the front end and the back end to distinguish the service modules, obtain the corresponding video data set from the corresponding back-end platform, and transmit the video data set to the front-end page. Both approaches involve complex processing: the first increases the development complexity of the front end, and the second increases the development complexity of the application software by adding an intermediate processing end between the front end and the back end.
Based on this, the present application provides a video processing method and apparatus, an electronic device, and a storage medium. The method acquires a video loading instruction, where the video loading instruction comprises a policy parameter; determines a corresponding target policy according to the policy parameter; acquires a corresponding video data set from a background according to the target policy, where target policies correspond one-to-one with video data sets; and renders the video in the video data set to a playing page. The video processing procedure is simple: when the front-end page loads a video, it does not need to distinguish videos from different data platforms, which reduces development complexity.
The video processing method, the video processing apparatus, the electronic device, and the storage medium provided in the embodiments of the present application are specifically described in the following embodiments, and first, the video processing method in the embodiments of the present application is described.
The embodiment of the application provides a video processing method, and relates to the technical field of video processing. The video processing method provided by the embodiment of the application can be applied to a terminal, a server side and software running in the terminal or the server side. In some embodiments, the terminal may be any terminal having a data processing function and a page display function, such as a mobile device (e.g., a mobile phone, a portable music player, a personal digital assistant, a dedicated messaging device, a portable game device), a desktop computer, an intelligent robot, an intelligent voice interaction device, an intelligent home appliance, and a vehicle-mounted terminal; the server side can be configured into an independent physical server, a server cluster or a distributed system formed by a plurality of physical servers, and cloud servers for providing basic cloud computing services such as cloud service, a cloud database, cloud computing, cloud functions, cloud storage, network service, cloud communication, middleware service, domain name service, security service, CDN (content delivery network) and big data and artificial intelligence platforms; the software may be an application or the like that implements a video processing method, but is not limited to the above form.
The application is operational with numerous general purpose or special purpose computing system environments or configurations. For example: personal computers, server computers, hand-held or portable devices, tablet-type devices, multiprocessor systems, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputers, mainframe computers, distributed computing environments that include any of the above systems or devices, and the like. The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
In each embodiment of the present application, when data related to the user identity or characteristic, such as user information, user behavior data, user history data, and user location information, is processed, permission or consent of the user is obtained, and the data collection, use, and processing comply with relevant laws and regulations and standards of relevant countries and regions. In addition, when the embodiment of the present application needs to acquire sensitive personal information of a user, individual permission or individual consent of the user is obtained through a pop-up window or a jump to a confirmation page, and after the individual permission or individual consent of the user is definitely obtained, necessary user-related data for enabling the embodiment of the present application to operate normally is acquired.
Referring to fig. 1, fig. 1 is a schematic flow chart of a video processing method according to an embodiment of the present disclosure. The video processing method of the embodiment of the present application may include, but is not limited to, step S101, step S102, step S103, and step S104.
Step S101, a video loading instruction is obtained; the video loading instruction comprises a policy parameter;
Step S102, determining a corresponding target policy according to the policy parameter;
Step S103, acquiring a corresponding video data set from a background according to the target policy; wherein target policies correspond one-to-one with video data sets;
and step S104, rendering the video in the video data set to a playing page.
The video processing method can be applied to an application program of a terminal: a corresponding target policy is determined according to the policy parameter in the video loading instruction, a corresponding video data set is acquired from the background according to the target policy, and the video in the video data set is rendered to a playing page. The video processing procedure is simple. When the front-end page loads a video, it does not need to distinguish videos from different data platforms; it simply obtains the corresponding data set from the background according to the target policy. This reduces development complexity, reduces the coupling between the application and external systems, improves the scene applicability of the application to a certain extent, and allows more service modules to be accessed.
In step S101 of some embodiments, the video loading instruction is triggered and generated by the front-end page and sent by the front-end page to the back end. For example, the front-end page includes a search module; when a user searches through the search module, the search page generates a video loading instruction, and the video loading instruction includes the policy parameter. It should be noted that policy parameters correspond one-to-one with the service modules of the front-end page. The search module is only one example of a service module of the front-end page and cannot be understood as a limitation of the present application; the front-end page may further include other service modules, such as a hot video module, a home video list module, a my-video list module, a favorites module, and a history browsing module, which is not limited in the embodiments of the present application.
In step S102 of some embodiments, policy parameters correspond one-to-one with the service modules of the front-end page. For example, the data type of the policy parameter may be an integer: when the video loading instruction is triggered by the search module, the value of the policy parameter is 1, and when the video loading instruction is triggered by the home video list module, the value of the policy parameter is 2. These values of the policy parameter and the corresponding service modules are only examples and are not to be construed as limiting the present application; those skilled in the art may set the specific correspondence according to actual needs.
It should be noted that specifying the data type of the policy parameter as an integer is not to be understood as a limitation of the present application; those skilled in the art may set the data type of the policy parameter according to actual needs.
It should be noted that, in other embodiments, if the video loading instruction does not include the policy parameter, the target policy is determined to be a default policy. The default policy is one of the target policies, and a person skilled in the art may set any one of the target policies as the default policy according to actual needs.
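As a purely illustrative sketch of steps S101 and S102, including the default-policy fallback described above, the policy parameter can be resolved to a target policy with a lookup table. The module names, parameter values, and the resolveTargetPolicy helper below are assumptions for this sketch and are not taken from the patent:

```typescript
// Illustrative sketch only: the policy parameter is modeled as an integer
// carried by the video loading instruction; absent => default policy.
interface VideoLoadInstruction {
  policyParam?: number;
}

type TargetPolicy = "search" | "homeList" | "hotVideos" | "default";

const POLICY_TABLE: Record<number, TargetPolicy> = {
  1: "search",    // instruction triggered by the search module (assumed value)
  2: "homeList",  // instruction triggered by the home video list module (assumed value)
  3: "hotVideos", // instruction triggered by the hot video module (assumed value)
};

function resolveTargetPolicy(instruction: VideoLoadInstruction): TargetPolicy {
  if (instruction.policyParam === undefined) {
    return "default"; // no policy parameter: fall back to the default policy
  }
  return POLICY_TABLE[instruction.policyParam] ?? "default";
}
```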
In steps S102 and S103 of some embodiments, a corresponding target policy is determined according to the policy parameter. The target policy is a pre-configured policy; target policies correspond one-to-one with policy parameters, and target policies correspond one-to-one with video data sets, so that video data sets correspond one-to-one with service modules. Each target policy is formulated according to the specific requirement of its service module, and the video data set obtained from the background according to the target policy contains only videos meeting that requirement. For example, if the service module is a hot video module, the corresponding requirement is videos meeting a preset popularity, so the corresponding video data set is obtained from the background through the corresponding target policy, and the popularity of each video in the video data set meets the preset popularity. In other embodiments, the requirement of the service module is a specified category; for example, if the requirement is videos of the action category, a corresponding video data set is obtained from the background through the corresponding target policy, and each video in the video data set belongs to the action category. In other embodiments, the requirement of the service module is videos published within a set time range; for example, if the requirement is videos published in 2021, the corresponding video data set is obtained from the background through the corresponding target policy, and each video in the video data set was published in 2021. In other embodiments, the requirement of the service module is videos whose duration falls within a certain range; for example, if the requirement is videos shorter than 5 minutes, the corresponding video data set is obtained from the background through the corresponding target policy, and the duration of each video in the video data set is less than 5 minutes. It should be noted that the requirements of the service modules and the corresponding video data sets described above are only examples and cannot be understood as limitations of the present application; the requirement of a service module may also be another requirement, which is not limited in the present application. The target policy corresponds to the requirement of the service module, and a video data set meeting that requirement can be obtained from the background through the corresponding target policy.
The background may include one data platform or a plurality of different data platforms. Through the target policy, the present application can acquire the corresponding videos from one data platform or from different data platforms and form a video data set, so the front-end page only needs to render the video data set to the playing page without distinguishing the different data platforms. It should be noted that the background may also refer to microservices, such as a service back end or an algorithm back end, which is not limited in the present application.
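The following TypeScript sketch illustrates step S103 under stated assumptions: the endpoint URLs, field names, and the idea of merging results from several data platforms into one video data set are illustrative only; the patent does not prescribe any particular API.

```typescript
// Hypothetical sketch: a target policy maps to one or more background data
// platforms, and the results are merged into a single video data set so the
// front end never has to distinguish the platforms itself.
interface VideoItem {
  id: string;
  url: string;
  durationSeconds: number;
}

interface VideoDataSet {
  videos: VideoItem[];
}

const PLATFORMS_BY_POLICY: Record<string, string[]> = {
  hotVideos: ["https://platform-a.example.com/hot", "https://platform-b.example.com/hot"],
  search: ["https://platform-a.example.com/search"],
  default: ["https://platform-a.example.com/feed"],
};

async function fetchVideoDataSet(policy: string): Promise<VideoDataSet> {
  const endpoints = PLATFORMS_BY_POLICY[policy] ?? PLATFORMS_BY_POLICY["default"];
  const responses = await Promise.all(endpoints.map((url) => fetch(url)));
  const pages = await Promise.all(responses.map((r) => r.json() as Promise<VideoItem[]>));
  return { videos: pages.flat() }; // one merged data set per target policy
}
```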
In step S104 of some embodiments, a video in the video data set is rendered to the playing page, and automatic playback may be set after rendering. For example, after a user clicks a service module of the front-end page, the front-end page jumps to the playing page, and because the video has already been rendered to the playing page through steps S101 to S104, the playing page starts playing automatically. Alternatively, after the video is rendered, a user operation behavior for the playing page is detected; for example, if the user clicks the playing page, the rendered video is played.
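A minimal DOM sketch of step S104 follows. The play-page element id, the autoplay choice, and the click-to-play fallback are assumptions for illustration:

```typescript
// Minimal sketch, assuming a <div id="play-page"> container exists in the page.
function renderToPlayPage(videoUrl: string, autoplay = true): HTMLVideoElement {
  const page = document.getElementById("play-page")!;
  const player = document.createElement("video");
  player.src = videoUrl;
  player.autoplay = autoplay; // play immediately after the jump to the playing page
  if (!autoplay) {
    // alternatively, start playback only when the user clicks the playing page
    page.addEventListener("click", () => void player.play(), { once: true });
  }
  page.replaceChildren(player); // the rendered video replaces the previous content
  return player;
}
```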
Referring to fig. 2, fig. 2 is a schematic view of a sub-process of a video processing method according to some embodiments of the present application. Step S104, rendering the video in the video data set to a play page, which may include, but is not limited to, step S201, step S202, and step S203.
Step S201, a first video identifier array is obtained from the video data set; the first video identifier array comprises a plurality of video identifiers, and the video identifiers correspond one-to-one with the videos;
Step S202, filling each video identifier in the first video identifier array into a to-be-played list;
Step S203, rendering the video corresponding to the first video identifier in the to-be-played list to the playing page.
After a corresponding video data set is obtained from the background according to the target policy, a first video identifier array is obtained from the video data set, each video identifier in the first video identifier array is filled into a to-be-played list, and the video corresponding to the first video identifier in the to-be-played list is rendered to the playing page. In some embodiments, the number of videos in the video data set is large. In the present application, a first video identifier array is obtained from the video data set, and the videos corresponding to the first video identifier array are only a part of the video data set; each video identifier in the first video identifier array is filled into the to-be-played list, and the corresponding videos are then rendered according to the video identifiers in the to-be-played list, which improves processing efficiency.
It should be noted that the present application does not limit the number of video identifiers in the first video identifier array; those skilled in the art can set the number of video identifiers in the first video identifier array according to actual needs.
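An illustrative sketch of steps S201 to S203 follows. The PlayerState shape, the page size of 10, and the urlForId helper are assumptions, not details from the patent:

```typescript
// Sketch under assumptions: how many identifiers go into the first array and
// how an identifier is turned into a playable URL are implementation choices.
interface PlayerState {
  toBePlayedList: string[]; // video identifiers, in play order
  currentIndex: number;
}

function loadFirstIdentifierArray(
  dataSet: { videos: { id: string }[] },
  pageSize = 10,
): string[] {
  // take only part of the data set as the first video identifier array
  return dataSet.videos.slice(0, pageSize).map((v) => v.id);
}

function fillToBePlayedList(state: PlayerState, identifierArray: string[]): void {
  state.toBePlayedList.push(...identifierArray);
}

// usage sketch: render the video corresponding to the first identifier
// const state: PlayerState = { toBePlayedList: [], currentIndex: 0 };
// fillToBePlayedList(state, loadFirstIdentifierArray(dataSet));
// renderToPlayPage(urlForId(state.toBePlayedList[0])); // urlForId is an assumed helper
```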
Referring to fig. 3, fig. 3 is a schematic view of a sub-process of a video processing method according to some embodiments of the present application. In step S203, after rendering the video corresponding to the first video identifier in the to-be-played list to the play page, the method may further include, but is not limited to, step S301.
Step S301, preloading the videos corresponding to the video identifiers other than the first video identifier in the to-be-played list.
In step S301 of some embodiments, the videos corresponding to the video identifiers other than the first video identifier in the to-be-played list are preloaded. After preloading, video rendering becomes more efficient: when a video is to be played, it can be watched directly without waiting for it to load, which reduces the user's waiting time and greatly improves the viewing experience.
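One possible way to preload the remaining videos (step S301) in a browser front end is sketched below; the use of detached video elements with preload="auto", the cache map, and the urlForId helper are assumptions:

```typescript
// Preloading sketch: ask the browser to buffer the remaining videos ahead of time.
const preloadCache = new Map<string, HTMLVideoElement>();

function preloadRemaining(toBePlayedList: string[], urlForId: (id: string) => string): void {
  // skip the first identifier, whose video is already rendered to the playing page
  for (const id of toBePlayedList.slice(1)) {
    if (preloadCache.has(id)) continue;
    const video = document.createElement("video");
    video.preload = "auto";      // hint the browser to start buffering
    video.src = urlForId(id);
    preloadCache.set(id, video); // keep a reference so playback can start instantly
  }
}
```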
Referring to fig. 4, fig. 4 is a schematic view illustrating a sub-flow of a video processing method according to some embodiments of the present application. In step S203, after rendering the video corresponding to the first video identifier in the to-be-played list to the play page, the method may further include, but is not limited to, step S401.
Step S401, if playback of the current video has finished, rendering the video corresponding to the next video identifier in the to-be-played list to the playing page.
It should be noted that the current video refers to the video currently rendered to the playing page. If playback of the current video has finished, the video corresponding to the next video identifier in the to-be-played list is rendered to the playing page so that it can be played.
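A sketch of step S401 using the standard 'ended' event of an HTML video element is given below; the state object and the renderVideoAt callback are assumptions:

```typescript
// Auto-advance sketch: when the current video finishes, render the video
// corresponding to the next identifier in the to-be-played list.
function attachAutoAdvance(
  player: HTMLVideoElement,
  state: { toBePlayedList: string[]; currentIndex: number },
  renderVideoAt: (index: number) => void,
): void {
  player.addEventListener("ended", () => {
    if (state.currentIndex + 1 < state.toBePlayedList.length) {
      state.currentIndex += 1;
      renderVideoAt(state.currentIndex); // next identifier in the to-be-played list
    }
  });
}
```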
Referring to fig. 5, fig. 5 is a schematic view illustrating a sub-flow of a video processing method according to some embodiments of the present application. In step S203, after rendering the video corresponding to the first video identifier in the to-be-played list to the play page, steps S501 and S502 may also be included, but are not limited thereto.
Step S501, monitoring user operation behaviors aiming at a playing page;
step S502, if the user operation behavior is matched with the first preset operation behavior, rendering the video corresponding to the next video identifier in the list to be played to a playing page.
It can be understood that a user operation behavior refers to an operation on the playing page, for example a double click, a single click, or a sliding action on the playing page. For example, the first preset operation behavior may be a slide-down action on the playing page: when the user slides down on the playing page, the user operation behavior matches the first preset operation behavior, and the video corresponding to the next video identifier in the to-be-played list is rendered to the playing page, so that the user can play the video corresponding to the next video identifier. It should be noted that describing the first preset operation behavior as a slide-down action is only an example and is not to be understood as a limitation of the present application; those skilled in the art may set the first preset operation behavior according to actual needs, for example, the first preset operation behavior may also be a left slide, a right slide, a slide up, or another action, which is not limited in the present application.
Referring to fig. 6, fig. 6 is a schematic view of a sub-process of a video processing method according to some embodiments of the present application. After monitoring the user operation behavior for the playing page in step S501, the method further includes, but is not limited to, step S601.
Step S601, if the user operation behavior matches a second preset operation behavior, rendering the video corresponding to the previous video identifier in the to-be-played list to the playing page.
In step S601 of some embodiments, the second preset operation behavior may be a slide-up action on the playing page: when the user slides up on the playing page, the user operation behavior matches the second preset operation behavior, and the video corresponding to the previous video identifier in the to-be-played list is rendered to the playing page, so that the user can play the video corresponding to the previous video identifier. It should be noted that describing the second preset operation behavior as a slide-up action is only an example and cannot be understood as a limitation of the present application; those skilled in the art may set the second preset operation behavior according to actual needs, for example, the second preset operation behavior may also be a left slide, a right slide, a slide down, or another action. However, the second preset operation behavior cannot be the same as the first preset operation behavior. In some embodiments, the first preset operation behavior and the second preset operation behavior are opposite to each other; for example, the first preset operation behavior is a slide-down action and the second preset operation behavior is a slide-up action, or the first preset operation behavior is a left slide and the second preset operation behavior is a right slide.
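The gesture matching of steps S501, S502 and S601 may be sketched as follows, assuming a touch screen and using the standard TouchEvent API; the swipe threshold and the slide-down-for-next / slide-up-for-previous mapping simply follow the example in the text and are not mandated by the patent:

```typescript
// Gesture sketch: detect vertical swipes on the playing page and match them
// against the first and second preset operation behaviors.
function attachSwipeNavigation(
  playPage: HTMLElement,
  goNext: () => void,
  goPrevious: () => void,
  threshold = 50, // minimum vertical movement in pixels to count as a swipe (assumed)
): void {
  let startY = 0;
  playPage.addEventListener("touchstart", (e) => {
    startY = e.touches[0].clientY;
  });
  playPage.addEventListener("touchend", (e) => {
    const deltaY = e.changedTouches[0].clientY - startY;
    if (deltaY > threshold) {
      goNext();     // matches the first preset operation behavior (slide down)
    } else if (deltaY < -threshold) {
      goPrevious(); // matches the second preset operation behavior (slide up)
    }
  });
}
```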
It can be understood that the video processing method of the embodiments of the present application can be applied to a terminal provided with a display, which is used for displaying information input by or provided to the user and the various graphical user interfaces of the terminal; these graphical user interfaces may be composed of graphics, text, icons, video, and any combination thereof. The display may be a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED) display, or the like. For example, the display may be an LCD touch screen: the playing page is displayed through the touch screen, the user can operate the touch screen, and when the page displayed on the touch screen is the playing page, the user operation behavior for the playing page can be acquired in response to a trigger operation on the touch screen.
Referring to fig. 7, fig. 7 is a schematic view illustrating a sub-flow of a video processing method according to some embodiments of the present application. In step S203, after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further includes, but is not limited to, step S701 and step S702.
Step S701, if the sequence number of the video identifier corresponding to the current video in the to-be-played list is a preset sequence number, acquiring a second video identifier array from the video data set;
step S702, each video identifier in the second video identifier array is filled into the to-be-played list.
It can be understood that each video identifier in the second video identifier array is filled into the to-be-played list, so that the videos corresponding to the video identifiers in the second video identifier array can be preloaded. When the videos corresponding to the first video identifier array have been played, the videos corresponding to the second video identifier array can be played, so the user can keep sliding and continuously watch videos, which improves the user experience. Specifically, suppose the preset sequence number is set to the second-to-last position in the to-be-played list: if the to-be-played list contains 10 video identifiers, the preset sequence number is 8 (counting from 0). If the sequence number of the video identifier corresponding to the current video in the to-be-played list is 8, the second video identifier array is obtained from the video data set and each of its video identifiers is filled into the to-be-played list, so the video identifiers in the to-be-played list are replenished in time; the videos corresponding to the second video identifier array can then be played, the user can keep sliding and continuously watch videos, and the user experience is improved. It should be noted that setting the preset sequence number to the second-to-last position, specifically 8, is only an example and cannot be understood as a limitation of the application; those skilled in the art may set the specific preset sequence number according to actual needs. For example, in some embodiments, each video identifier has an index in the to-be-played list; if the to-be-played list has 10 entries and the index starts from 0, i.e., the indexes are 0 to 9, the preset sequence number may be set to index 8, and if the index of the video identifier corresponding to the current video in the to-be-played list is 8, the second video identifier array is obtained from the video data set and each of its video identifiers is filled into the to-be-played list. As another example, in another embodiment, the to-be-played list has 20 entries, the index starts from 0 and the indexes are 0 to 19, and the preset sequence number may be set to index 15; if the index of the video identifier corresponding to the current video in the to-be-played list is 15, the second video identifier array is obtained from the video data set and each of its video identifiers is filled into the to-be-played list.
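A sketch of steps S701 and S702 is shown below; the choice of the second-to-last index as the preset sequence number and the page size of 10 follow the example above and are assumptions for this sketch:

```typescript
// Refill sketch: when the current video reaches the preset sequence number,
// fetch the second video identifier array and top up the to-be-played list.
function maybeFetchSecondArray(
  state: { toBePlayedList: string[]; currentIndex: number },
  dataSet: { videos: { id: string }[] },
  pageSize = 10,
): void {
  const presetSequenceNumber = state.toBePlayedList.length - 2; // e.g. index 8 of indexes 0..9
  if (state.currentIndex !== presetSequenceNumber) return;

  // take the next slice of the data set as the second video identifier array
  const start = state.toBePlayedList.length;
  const secondArray = dataSet.videos.slice(start, start + pageSize).map((v) => v.id);
  state.toBePlayedList.push(...secondArray); // replenish the to-be-played list in time
}
```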
Referring to fig. 8, fig. 8 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present disclosure. An embodiment of the present application provides a video processing apparatus, including:
an instruction obtaining module 801, configured to obtain a video loading instruction; wherein the video loading instruction comprises a policy parameter;
a policy determining module 802, configured to determine a corresponding target policy according to the policy parameter;
a data obtaining module 803, configured to obtain a corresponding video data set from a background according to the target policy; wherein target policies correspond one-to-one with video data sets;
and a rendering module 804, configured to render the video in the video data set to a playing page.
The video processing apparatus of the embodiments of the present application obtains a video loading instruction through the instruction obtaining module 801, determines a corresponding target policy according to the policy parameter through the policy determining module 802, obtains a corresponding video data set from the background according to the target policy through the data obtaining module 803, and then renders the video in the video data set to the playing page through the rendering module 804. The video processing procedure is simple. When the front-end page loads a video, it does not need to distinguish videos from different data platforms; it obtains the corresponding data set from the background according to the target policy. This reduces development complexity, reduces the coupling between the application and external systems, improves the scene applicability of the application to a certain extent, and allows more service modules to be accessed.
In some embodiments, the rendering module 804 is further specifically configured to:
acquiring a first video identifier array from the video data set, wherein the first video identifier array comprises a plurality of video identifiers, and the video identifiers correspond one-to-one with the videos;
filling each video identifier in the first video identifier array into a to-be-played list;
and rendering the video corresponding to the first video identifier in the to-be-played list to the playing page.
In some embodiments, after rendering the video corresponding to the first video identifier in the to-be-played list to the play page, the rendering module 804 is further configured to:
and preloading the videos corresponding to the video identifiers other than the first video identifier in the to-be-played list.
In some embodiments, after rendering the video corresponding to the first video identifier in the to-be-played list to the play page, the rendering module 804 is further configured to:
and if playback of the current video has finished, rendering the video corresponding to the next video identifier in the to-be-played list to the playing page.
In some embodiments, after rendering the video corresponding to the first video identifier in the to-be-played list to the play page, the rendering module 804 is further configured to:
monitoring a user operation behavior for the playing page;
and if the user operation behavior matches the first preset operation behavior, rendering the video corresponding to the next video identifier in the to-be-played list to the playing page.
In some embodiments, the rendering module 804, after monitoring the user operation behavior for the playing page, is further configured to:
and if the user operation behavior matches the second preset operation behavior, rendering the video corresponding to the previous video identifier in the to-be-played list to the playing page.
In some embodiments, after rendering the video corresponding to the first video identifier in the to-be-played list to the play page, the rendering module 804 is further configured to:
if the sequence number of the video identifier corresponding to the current video in the to-be-played list is a preset sequence number, acquiring a second video identifier array from the video data set;
and filling each video identifier in the second video identifier array into the to-be-played list.
It should be noted that the video processing apparatus of the above embodiment is based on the same inventive concept as the video processing method of the above embodiments; therefore, the corresponding contents of the video processing method are also applicable to the video processing apparatus, with the same implementation principles and technical effects, and are not repeated here to avoid redundancy.
The embodiment of the present application further provides an electronic device, which includes a memory 902, a processor 901, a program stored on the memory 902 and executable on the processor 901, and a data bus 905 for implementing connection communication between the processor 901 and the memory 902; when the program is executed by the processor 901, the steps of the video processing method of any one of the above embodiments are implemented. The electronic device can be any intelligent terminal, including a mobile phone, a tablet computer, a vehicle-mounted computer, and the like.
Referring to fig. 9, fig. 9 illustrates a hardware structure of an electronic device according to another embodiment, where the electronic device includes:
the processor 901 may be implemented by a general-purpose CPU (Central Processing Unit, CPU 901), a microprocessor 901, an Application Specific Integrated Circuit (ASIC), or one or more Integrated circuits, and is configured to execute related programs to implement the technical solution provided in the embodiment of the present Application; the Processor 901 may be an integrated circuit chip having Signal processing capabilities, such as a general purpose Processor 901, a Digital Signal Processor 901 (DSP), or other programmable logic device, discrete gate or transistor logic device, discrete hardware components, etc., wherein the general purpose Processor 901 may be a microprocessor 901 or any conventional Processor 901, etc.
The memory 902 may be implemented in the form of a read-only memory (ROM), a static storage device, a dynamic storage device, or a random access memory (RAM). The memory 902 may store an operating system and other application programs. When the technical solutions provided by the embodiments of this specification are implemented by software or firmware, the relevant program code is stored in the memory 902 and called by the processor 901 to execute the video processing method of the embodiments of the present application;
an input/output interface 903, configured to implement information input and output; the input/output interface 903 may be capable of presenting media content and may include, for example, one or more speakers and/or one or more visual display screens. The input/output interface 903 is also used to enable information input; for example, it may include a keyboard, a mouse, a microphone, a touch display screen, a camera, and other input buttons and controls.
A communication interface 904, configured to implement communication interaction between the device and another device, where communication may be implemented in a wired manner (e.g., USB, network cable, etc.), or in a wireless manner (e.g., mobile network, WIFI, bluetooth, etc.);
a bus 905 that transfers information between various components of the device (e.g., the processor 901, the memory 902, the input/output interface 903, and the communication interface 904);
wherein the processor 901, the memory 902, the input/output interface 903 and the communication interface 904 enable a communication connection within the device with each other through a bus 905.
Further, the memory 902, which is a non-transitory computer-readable storage medium, may be used to store non-transitory software programs and non-transitory computer-executable programs. The memory 902 may include a high-speed random access memory and may also include non-transitory memory, such as at least one magnetic disk storage device, a flash memory device, or another non-transitory solid-state storage device. In some embodiments, the memory 902 may optionally include memory located remotely from the processor 901, and such remote memory may be connected to the processor 901 via a network. Examples of such networks include, but are not limited to, the Internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The processor 901 implements the video processing method of the above embodiments by executing the non-transitory software programs and instructions stored in the memory 902, thereby performing various functions and data processing.
The non-transitory software programs and instructions required to implement the video processing method of the above embodiments are stored in the memory 902; when they are executed by the processor 901, the video processing method of the embodiments of the present application is performed, for example, the above-described method steps S101 to S104 in fig. 1, method steps S201 to S203 in fig. 2, method step S301 in fig. 3, method step S401 in fig. 4, method steps S501 to S502 in fig. 5, method step S601 in fig. 6, and method steps S701 to S702 in fig. 7.
The present embodiment further provides a storage medium, which is a computer-readable storage medium for computer-readable storage, and the storage medium stores one or more programs, where the one or more programs are executable by one or more processors to implement the steps of the video processing method according to any one of the foregoing embodiments.
The storage medium stores one or more programs, which are executable by one or more processors. For example, the above-described method steps S101 to S104 in fig. 1, method steps S201 to S203 in fig. 2, method step S301 in fig. 3, method step S401 in fig. 4, method steps S501 to S502 in fig. 5, method step S601 in fig. 6, method steps S701 to S702 in fig. 7 are performed.
From the above description of embodiments, those of ordinary skill in the art will appreciate that all or some of the steps, systems, and methods disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof. Some or all of the physical components may be implemented as software executed by a processor, such as a central processing unit, digital signal processor, or microprocessor, or as hardware, or as an integrated circuit, such as an application-specific integrated circuit. Such software may be distributed on readable media, which may include computer storage media (or non-transitory media) and communication media (or transitory media). As is well known to those of ordinary skill in the art, the term computer storage media includes volatile and nonvolatile, removable and non-removable media implemented in any method or technology for storage of information such as computer-readable instructions, data structures, program modules, or other data. Computer storage media includes, but is not limited to, RAM, ROM, EEPROM, flash memory or other memory technology, CD-ROM, digital versatile disks (DVD) or other optical disk storage, magnetic cassettes, magnetic tape, magnetic disk storage or other magnetic storage devices, or any other medium which can be used to store the desired information and which can be accessed by a computer. In addition, as known to those skilled in the art, communication media typically embodies computer-readable instructions, data structures, program modules, or other data in a modulated data signal such as a carrier wave or other transport mechanism and includes any information delivery media.
The above-described embodiments of the apparatus are merely illustrative, wherein the units illustrated as separate components may or may not be physically separate, i.e. may be located in one place, or may also be distributed over a plurality of network elements. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
The embodiments described in the embodiments of the present application are for more clearly illustrating the technical solutions of the embodiments of the present application, and do not constitute a limitation to the technical solutions provided in the embodiments of the present application, and it is obvious to those skilled in the art that the technical solutions provided in the embodiments of the present application are also applicable to similar technical problems with the evolution of technology and the emergence of new application scenarios.
The video processing method and apparatus, the electronic device, and the storage medium acquire a video loading instruction, where the video loading instruction comprises a policy parameter; determine a corresponding target policy according to the policy parameter; acquire a corresponding video data set from a background according to the target policy, where target policies correspond one-to-one with video data sets; and render the video in the video data set to a playing page. The video processing procedure is simple: when the front-end page loads a video, it does not need to distinguish videos from different data platforms, which reduces development complexity.
It will be appreciated by those skilled in the art that the solutions shown in fig. 1-9 are not intended to limit the embodiments of the present application and may include more or fewer steps than those shown, or some of the steps may be combined, or different steps may be included.
One of ordinary skill in the art will appreciate that all or some of the steps of the methods, systems, functional modules/units in the devices disclosed above may be implemented as software, firmware, hardware, and suitable combinations thereof.
The terms "first," "second," "third," "fourth," and the like in the description of the application and the above-described figures, if any, are used for distinguishing between similar elements and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used is interchangeable under appropriate circumstances such that the embodiments of the application described herein are capable of operation in sequences other than those illustrated or described herein. Furthermore, the terms "comprises," "comprising," and "having," and any variations thereof, are intended to cover a non-exclusive inclusion, such that a process, method, system, article, or apparatus that comprises a list of steps or elements is not necessarily limited to those steps or elements expressly listed, but may include other steps or elements not expressly listed or inherent to such process, method, article, or apparatus.
It should be understood that in the present application, "at least one" means one or more, "a plurality" means two or more. "and/or" for describing an association relationship of associated objects, indicating that there may be three relationships, e.g., "a and/or B" may indicate: only A, only B and both A and B are present, wherein A and B may be singular or plural. The character "/" generally indicates that the former and latter associated objects are in an "or" relationship. "at least one of the following" or similar expressions refer to any combination of these items, including any combination of single item(s) or plural items. For example, at least one (one) of a, b, or c, may represent: a, b, c, "a and b", "a and c", "b and c", or "a and b and c", wherein a, b, c may be single or plural.
In the several embodiments provided in the present application, it should be understood that the disclosed apparatus and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the above-described division of units is only one type of division of logical functions, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit may be implemented in the form of hardware, or may also be implemented in the form of a software functional unit.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a separate product, may be stored in a computer-readable storage medium. Based on such understanding, the part of the technical solution of the present application that in essence contributes to the prior art, or all or part of the technical solution, may be embodied in the form of a software product. The software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to perform all or part of the steps of the methods of the embodiments of the present application.
The preferred embodiments of the present application have been described above with reference to the accompanying drawings, and the scope of the claims of the embodiments of the present application is not limited thereto. Any modifications, equivalents and improvements that may occur to those skilled in the art without departing from the scope and spirit of the embodiments of the present application are intended to be within the scope of the claims of the embodiments of the present application.

Claims (10)

1. A method of video processing, the method comprising:
acquiring a video loading instruction; wherein the video loading instruction comprises a policy parameter;
determining a corresponding target policy according to the policy parameter;
acquiring a corresponding video data set from a background according to the target policy; wherein target policies correspond one-to-one with video data sets;
and rendering the video in the video data set to a playing page.
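
A minimal TypeScript sketch of the flow recited in claim 1 follows; the policy values, the /api/videos endpoint, and the data shape are assumptions introduced only for illustration and are not asserted to be the claimed implementation.

    // Hypothetical shapes; the field names and endpoint are assumptions.
    type PolicyParam = 'recommend' | 'hot' | 'follow';
    interface VideoDataSet { videoIdArrays: string[][]; }

    // One target policy per policy parameter (claim 1 recites a one-to-one correspondence).
    const TARGET_POLICIES: Record<PolicyParam, string> = {
      recommend: 'policy-recommend',
      hot: 'policy-hot',
      follow: 'policy-follow',
    };

    async function handleVideoLoadInstruction(
      instruction: { policyParam: PolicyParam },
      renderToPlayPage: (dataSet: VideoDataSet) => void,
    ): Promise<void> {
      // Determine the target policy from the policy parameter carried by the instruction.
      const targetPolicy = TARGET_POLICIES[instruction.policyParam];
      // Acquire the corresponding video data set from the background (assumed REST endpoint).
      const response = await fetch(`/api/videos?policy=${encodeURIComponent(targetPolicy)}`);
      const dataSet = (await response.json()) as VideoDataSet;
      // Render the videos in the data set to the playing page (expanded in later sketches).
      renderToPlayPage(dataSet);
    }
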
2. The method of claim 1, wherein rendering the video in the video data set to the playing page comprises:
acquiring a first video identifier array from the video data set, wherein the first video identifier array comprises a plurality of video identifiers, and the video identifiers correspond one-to-one with the videos;
filling each video identifier in the first video identifier array into a to-be-played list;
and rendering the video corresponding to the first video identifier in the to-be-played list to the playing page.
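
One way the filling and first-render steps of claim 2 could look, again as a hedged sketch: the element selector and the URL scheme are invented for illustration.

    // The "to-be-played list" as a plain array of video identifiers.
    const toBePlayedList: string[] = [];

    function fillToBePlayedList(firstVideoIdArray: string[]): void {
      toBePlayedList.push(...firstVideoIdArray);   // fill each identifier into the list
    }

    function renderVideoById(videoId: string): void {
      // Assumed: the playing page hosts a single <video> element and identifiers map to URLs.
      const player = document.querySelector<HTMLVideoElement>('#play-page video');
      if (player) {
        player.src = `/videos/${videoId}.mp4`;     // assumed URL scheme
        void player.play();
      }
    }

    function renderFirstVideo(firstVideoIdArray: string[]): void {
      fillToBePlayedList(firstVideoIdArray);
      renderVideoById(toBePlayedList[0]);          // the video of the first identifier
    }
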
3. The video processing method according to claim 2, wherein after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further comprises:
and preloading the videos corresponding to the video identifiers other than the first video identifier in the to-be-played list.
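
A sketch of the preloading step of claim 3, assuming browser preload hints and the same invented URL scheme; other buffering strategies would serve equally well.

    // Preload the videos for every identifier in the to-be-played list except the first.
    function preloadRemainingVideos(toBePlayedList: string[]): void {
      for (const videoId of toBePlayedList.slice(1)) {
        const hint = document.createElement('link');
        hint.rel = 'preload';
        hint.as = 'video';
        hint.href = `/videos/${videoId}.mp4`;      // assumed URL scheme
        document.head.appendChild(hint);           // lets the browser fetch ahead of playback
      }
    }
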
4. The video processing method according to claim 2, wherein after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further comprises:
and if playback of the current video is finished, rendering the video corresponding to the next video identifier in the to-be-played list to the playing page.
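
The auto-advance of claim 4 could be wired to the standard 'ended' media event, as in this sketch; the index bookkeeping and function names are assumptions.

    // When the current video finishes, render the video for the next identifier in the list.
    function enableAutoAdvance(
      toBePlayedList: string[],
      renderVideoById: (videoId: string) => void,
    ): void {
      const player = document.querySelector<HTMLVideoElement>('#play-page video');
      let currentIndex = 0;
      player?.addEventListener('ended', () => {
        if (currentIndex + 1 < toBePlayedList.length) {
          currentIndex += 1;
          renderVideoById(toBePlayedList[currentIndex]);
        }
      });
    }
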
5. The video processing method according to claim 2, wherein after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further comprises:
monitoring a user operation behavior directed to the playing page;
and if the user operation behavior matches a first preset operation behavior, rendering the video corresponding to the next video identifier in the to-be-played list to the playing page.
6. The video processing method of claim 5, further comprising, after monitoring the user operation behavior directed to the playing page:
and if the user operation behavior matches a second preset operation behavior, rendering the video corresponding to the previous video identifier in the to-be-played list to the playing page.
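
Claims 5 and 6 monitor user operation behaviors; the sketch below assumes the first preset behavior is an upward swipe (next video) and the second a downward swipe (previous video), which is only one possible mapping, with an assumed 50-pixel threshold.

    // Monitor touch operations on the playing page and switch videos accordingly.
    function watchUserOperations(
      toBePlayedList: string[],
      renderVideoById: (videoId: string) => void,
    ): void {
      const page = document.querySelector<HTMLElement>('#play-page');
      let currentIndex = 0;
      let startY = 0;
      page?.addEventListener('touchstart', (e) => { startY = e.touches[0].clientY; });
      page?.addEventListener('touchend', (e) => {
        const deltaY = e.changedTouches[0].clientY - startY;
        if (deltaY < -50 && currentIndex + 1 < toBePlayedList.length) {
          currentIndex += 1;                       // first preset behavior: render next video
          renderVideoById(toBePlayedList[currentIndex]);
        } else if (deltaY > 50 && currentIndex > 0) {
          currentIndex -= 1;                       // second preset behavior: render previous video
          renderVideoById(toBePlayedList[currentIndex]);
        }
      });
    }
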
7. The video processing method according to claim 3, wherein after rendering the video corresponding to the first video identifier in the to-be-played list to the playing page, the method further comprises:
if the sequence number of the video identifier corresponding to the current video in the to-be-played list is a preset sequence number, acquiring a second video identifier array from the video data set;
and filling each video identifier in the second video identifier array into the to-be-played list.
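
The refill step of claim 7, sketched with an assumed preset sequence number and an assumed paging parameter on the same invented endpoint.

    const PRESET_SEQUENCE_NUMBER = 8;              // assumed threshold within the to-be-played list

    // When the current video sits at the preset position, acquire the second identifier
    // array from the background and fill it into the to-be-played list.
    async function refillToBePlayedList(
      toBePlayedList: string[],
      currentIndex: number,
      targetPolicy: string,
    ): Promise<void> {
      if (currentIndex + 1 !== PRESET_SEQUENCE_NUMBER) return;
      const response = await fetch(
        `/api/videos?policy=${encodeURIComponent(targetPolicy)}&page=2`,   // assumed endpoint
      );
      const secondVideoIdArray = (await response.json()) as string[];
      toBePlayedList.push(...secondVideoIdArray);
    }
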
8. A video processing apparatus, characterized in that the apparatus comprises:
the instruction acquisition module is used for acquiring a video loading instruction; wherein the video loading instruction comprises a policy parameter;
the policy determining module is used for determining a corresponding target policy according to the policy parameter;
the data acquisition module is used for acquiring a corresponding video data set from a background according to the target policy; wherein target policies correspond one-to-one with video data sets;
and the rendering module is used for rendering the video in the video data set to a playing page.
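
The four modules of claim 8 could be expressed as the following TypeScript interfaces; the method names merely mirror the functional wording of the claim and are assumptions.

    interface InstructionAcquisitionModule {
      acquireVideoLoadInstruction(): { policyParam: string };
    }
    interface PolicyDeterminingModule {
      determineTargetPolicy(policyParam: string): string;
    }
    interface DataAcquisitionModule {
      acquireVideoDataSet(targetPolicy: string): Promise<{ videoIdArrays: string[][] }>;
    }
    interface RenderingModule {
      renderToPlayPage(dataSet: { videoIdArrays: string[][] }): void;
    }

    // The apparatus aggregates the four modules.
    interface VideoProcessingApparatus {
      instructionAcquisition: InstructionAcquisitionModule;
      policyDetermining: PolicyDeterminingModule;
      dataAcquisition: DataAcquisitionModule;
      rendering: RenderingModule;
    }
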
9. An electronic device, characterized in that the electronic device comprises a memory, a processor, a program stored on the memory and executable on the processor, and a data bus for enabling connection and communication between the processor and the memory, wherein the program, when executed by the processor, implements the steps of the video processing method according to any one of claims 1 to 7.
10. A storage medium, being a computer-readable storage medium for computer-readable storage, characterized in that the storage medium stores one or more programs, and the one or more programs are executable by one or more processors to implement the steps of the video processing method according to any one of claims 1 to 7.
CN202210860757.2A 2022-07-21 2022-07-21 Video processing method, device, electronic equipment and storage medium Active CN115225974B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210860757.2A CN115225974B (en) 2022-07-21 2022-07-21 Video processing method, device, electronic equipment and storage medium

Publications (2)

Publication Number Publication Date
CN115225974A true CN115225974A (en) 2022-10-21
CN115225974B CN115225974B (en) 2024-04-05

Family

ID=83612973

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210860757.2A Active CN115225974B (en) 2022-07-21 2022-07-21 Video processing method, device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115225974B (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110636367A (en) * 2019-07-12 2019-12-31 北京无限光场科技有限公司 Video loading method and device, terminal equipment and medium
CN110807128A (en) * 2019-10-25 2020-02-18 北京达佳互联信息技术有限公司 Video preloading method, device, equipment and storage medium
CN112416461A (en) * 2020-11-25 2021-02-26 百度在线网络技术(北京)有限公司 Video resource processing method and device, electronic equipment and computer readable medium
CN112423125A (en) * 2020-11-20 2021-02-26 上海哔哩哔哩科技有限公司 Video loading method and device
CN112954440A (en) * 2021-02-09 2021-06-11 北京字节跳动网络技术有限公司 Video processing method, device, equipment and storage medium
CN113949935A (en) * 2021-12-03 2022-01-18 北京达佳互联信息技术有限公司 Video processing method, video processing device, electronic equipment, video processing medium and video processing product
CN114697752A (en) * 2022-03-30 2022-07-01 北京字跳网络技术有限公司 Video processing method, device, equipment and storage medium

Also Published As

Publication number Publication date
CN115225974B (en) 2024-04-05

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant