CN111432265B - Method for processing video pictures, related device and storage medium - Google Patents

Method for processing video pictures, related device and storage medium

Info

Publication number
CN111432265B
CN111432265B · CN202010245048.4A
Authority
CN
China
Prior art keywords
interface control
playing
interface
video
data
Prior art date
Legal status
Active
Application number
CN202010245048.4A
Other languages
Chinese (zh)
Other versions
CN111432265A (en)
Inventor
何鑫
邱良雄
梁沁
林锦涛
张晓文
李相如
吴桂盛
李煜彬
刘宜鑫
周宇
胡玮
Current Assignee
Tencent Technology Shenzhen Co Ltd
Original Assignee
Tencent Technology Shenzhen Co Ltd
Priority date
Filing date
Publication date
Application filed by Tencent Technology Shenzhen Co Ltd filed Critical Tencent Technology Shenzhen Co Ltd
Priority to CN202010245048.4A priority Critical patent/CN111432265B/en
Publication of CN111432265A publication Critical patent/CN111432265A/en
Application granted granted Critical
Publication of CN111432265B publication Critical patent/CN111432265B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F8/00Arrangements for software engineering
    • G06F8/30Creation or generation of source code
    • G06F8/38Creation or generation of source code for implementing user interfaces
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/431Generation of visual interfaces for content selection or interaction; Content or additional data rendering
    • H04N21/4312Generation of visual interfaces for content selection or interaction; Content or additional data rendering involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Software Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Human Computer Interaction (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • User Interface Of Digital Computer (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)
  • Television Signal Processing For Recording (AREA)

Abstract

The embodiments of this application provide a method for processing video pictures, a related apparatus, and a storage medium. The method comprises the following steps: on the front-end platform, the first frame data of a target video is disassembled into video playing data and service data, a first frame picture is generated based on the video playing data and the rendered service data, and the first frame picture is sent to the terminal. The terminal acquires the first frame picture of the target video, associates the video playing data with a first interface control, and associates the service data with a second interface control. The first interface control is used for controlling the playing logic with which a playing component plays the video playing data; the second interface control is used for controlling the display of the service data on the user interface and for controlling service logic. The first frame picture is then generated into a target video picture through the first interface control and the second interface control. This scheme can accelerate both the generation speed and the play-start speed of the target video picture.

Description

Method for processing video pictures, related device and storage medium
Technical Field
The embodiment of the application relates to the technical field of video processing, in particular to a method for processing video pictures, a related device and a storage medium.
Background
Mobile terminal dynamic framework technology is a recent technique that allows a developer to build a native mobile application using front-end development technology. Its basic principle is that the bottom layer of the mobile terminal provides a bridge to the front end; the bridge receives commands transmitted by the front end and generates the various View elements specific to the mobile terminal, and because the front end is dynamic, the generated mobile pages are dynamic as well. To meet the product requirements of the mobile terminal, the player itself is usually packaged as a UI-related View element, with a pure player embedded in that View element to satisfy different service scenarios, such as the service logic for displaying the video title, showing or hiding the playing logo, controlling the volume bar, and so on.
During research and practice on the prior art, the inventors of the embodiments of this application found that when the player is packaged in UI-related View elements inside a mobile terminal dynamic framework, it must, on the one hand, receive UI-drawing instructions transmitted from the framework side and, on the other hand, execute play-related logic, which increases the performance loss of the terminal.
Disclosure of Invention
The embodiment of the application provides a method for processing video pictures, a related device and a storage medium, which can accelerate the generation speed and the playing speed of a target video picture and improve the performance of a client.
In a first aspect, an embodiment of the present application provides a method for processing a video picture, where the method includes:
acquiring a first frame picture of a target video, where the first frame picture includes video playing data and rendered service data;
associating the video playing data with a first interface control and the service data with a second interface control; the first interface control is used for controlling the playing logic with which a playing component plays the video playing data; the second interface control is used for controlling the display of the service data on the user interface and for controlling service logic;
and generating a target video picture from the first frame picture through the first interface control and the second interface control.
In one possible design, after generating the first frame picture into the target video picture, the method further includes:
displaying video playing data in the target video picture through the first interface control, and displaying service data in the target video picture through the second interface control;
receiving a first message of a user, wherein the first message is used for requesting to play the target video;
and starting the playing component through the first interface control so as to control the playing component to play the target video picture.
In a possible design, the method is implemented based on a page control frame, the first interface control and the second interface control belong to the same parent interface control, and before the first frame picture is generated into a target video picture, the method further includes:
associating the first interface control with the play component;
setting a calling mode and a communication mode between the parent interface control and the first interface control;
the starting of the playing component through the first interface control to display the target video picture, and the receiving of a first message of a user, include:
calling the first interface control through the parent interface control to start the playing component to display the target video picture;
receiving the first message through the parent interface control.
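The calling and communication pattern described above can be sketched in a few lines. This is a minimal illustrative model only; the class and method names (`ParentControl`, `VideoControl`, `PlayerComponent`) are assumptions and not the patent's actual API:

```typescript
// Minimal sketch (assumed names): a parent interface control receives user
// messages and delegates play commands to a child video control, which alone
// drives the playing component.
class PlayerComponent {
  playing = false;
  start(): void { this.playing = true; }
}

// First interface control: concerned only with playing logic.
class VideoControl {
  constructor(private player: PlayerComponent) {}
  startPlayback(): void { this.player.start(); }
}

// Parent interface control: the communication entry point for user messages.
class ParentControl {
  constructor(private video: VideoControl) {}
  // "First message": a request from the user to play the target video.
  onMessage(msg: { type: string }): void {
    if (msg.type === "play") this.video.startPlayback();
  }
}

const player = new PlayerComponent();
const parent = new ParentControl(new VideoControl(player));
parent.onMessage({ type: "play" }); // player.playing is now true
```

The point of the indirection is that the first interface control never sees service messages; only play commands reach it through the parent.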
In one possible design, the page control frame further includes an abstract class, and the method further includes:
receiving a second message, wherein the second message is used for requesting to amplify and display a first playing interface of the target video;
moving the first interface control from the parent interface control to the abstract class;
updating the size of the first playing interface to the size setting information in the abstract class;
displaying the first playing interface by calling the abstract class;
rotating the first playing interface to generate a second playing interface;
and displaying the second playing interface.
In one possible design, the moving the first interface control from the parent interface control to an abstract class includes:
removing the first interface control from the parent interface control, and recording an initial index and initial layout information of the first interface control in the parent interface control;
clearing the initial layout information arranged on the first interface control;
adding the first interface control to the abstract class.
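The three-step move above (remove and record, clear layout, re-add) can be modeled as a small view-tree operation. All names here (`Container`, `moveToFloatLayer`) are illustrative assumptions, not the patent's actual implementation:

```typescript
// Hypothetical view-tree sketch of the enlarge move: the video control is
// removed from its parent (recording its initial index and layout), its old
// layout information is cleared, and it is added to the top-level layer that
// plays the role of the "abstract class".
interface Layout { x: number; y: number; w: number; h: number }

class Container {
  children: string[] = [];
  layouts = new Map<string, Layout>();
  add(id: string): void { this.children.push(id); }
  remove(id: string): number {
    const i = this.children.indexOf(id);
    if (i >= 0) this.children.splice(i, 1);
    return i; // the initial index, recorded for a later restore
  }
}

function moveToFloatLayer(id: string, parent: Container, float: Container) {
  const initialIndex = parent.remove(id);       // remove + record initial index
  const initialLayout = parent.layouts.get(id); // record initial layout info
  parent.layouts.delete(id);                    // clear the old layout
  float.add(id);                                // add to the abstract class
  return { initialIndex, initialLayout };
}
```

Recording the index and layout before clearing them is what later makes the reduced display (the reverse move) possible.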
In one possible design, the updating the size of the first playback interface to the size setting information in the abstract class includes:
setting new layout information for the first interface control;
setting the target size of the first playing interface according to the size setting information in the parent interface control;
and generating the second playing interface according to the new layout information and the target size.
In one possible design, after the receiving the second message and before the rotating the first playing interface, the method further includes:
moving the second interface control from the parent interface control to the abstract class;
setting the second interface control to be invisible.
In one possible design, the page control frame further includes an abstract class, and the method further includes:
receiving a third message, wherein the third message is used for requesting the first playing interface of the target video to be displayed in a reduced mode;
moving the first interface control from the abstract class into the parent interface control;
displaying the first playing interface by calling the first interface control;
rotating the first playing interface to generate a second playing interface;
and displaying the second playing interface.
In one possible design, after the receiving the third message and before the rotating the first playback interface, the method further includes:
moving the second interface control from the abstract class to an initial level of the parent interface control;
setting the second interface control to be visible.
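The shrink flow above is the mirror of the enlarge flow: the controls leave the abstract-class layer and return to the parent at the recorded initial level. The following sketch uses assumed names (`ViewGroup`, `restoreFromFloatLayer`) and assumes the service control re-enters just after the video control, which is an illustrative choice rather than the patent's specification:

```typescript
// Hypothetical counterpart of the shrink flow: the video control leaves the
// float layer and is re-inserted into the parent at its recorded initial
// index, and the second (service) interface control is made visible again.
class ViewGroup {
  children: string[] = [];
  visibility = new Map<string, boolean>();
  removeChild(id: string): void {
    this.children = this.children.filter(c => c !== id);
  }
  insertChild(id: string, index: number): void {
    this.children.splice(index, 0, id);
  }
}

function restoreFromFloatLayer(floatLayer: ViewGroup, parent: ViewGroup,
                               videoId: string, serviceId: string,
                               initialIndex: number): void {
  floatLayer.removeChild(videoId);           // leave the abstract-class layer
  floatLayer.removeChild(serviceId);
  parent.insertChild(videoId, initialIndex); // back at the recorded level
  parent.insertChild(serviceId, initialIndex + 1); // assumed adjacent slot
  parent.visibility.set(serviceId, true);    // second interface control visible
}
```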
In one possible design, the target video picture is stored on a blockchain node.
In a second aspect, an embodiment of the present application provides a method for processing a video picture, where the method includes:
acquiring first frame data from a target video to be processed;
acquiring video playing data and service data from the first frame data;
rendering the service data;
generating a first frame picture according to the video playing data and the rendered service data;
and sending the target video comprising the first frame picture to a terminal.
In one possible design, the rendering the business data includes:
splitting the service data into at least one class of data;
setting a UI (user interface) tag for each class of data according to its type;
generating at least one video profile from the data of the at least one class;
and rendering each of the at least one video profile to obtain service UI elements, where the service UI elements are used for displaying the service form and service logic of the service.
In one possible design, the first frame picture is stored on a blockchain node.
In a third aspect, an embodiment of the present application provides a video picture processing apparatus having the function of implementing the method for processing a video picture corresponding to the first aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions, which may be software and/or hardware.
In one possible design, the video picture processing apparatus includes:
the input and output module is used for acquiring a first frame of picture of the target video, wherein the first frame of picture comprises video playing data and rendered service data;
the processing module is used for associating the video playing data acquired by the input and output module with a first interface control and associating the service data with a second interface control, and for generating a target video picture from the first frame picture through the first interface control and the second interface control, where the first interface control is used for controlling the playing logic with which a playing component plays the video playing data, and the second interface control is used for controlling the display of the service data on the user interface and for controlling service logic.
In a fourth aspect, an embodiment of the present application provides a video picture processing apparatus having the function of implementing the method for processing a video picture corresponding to the second aspect. The functions may be implemented by hardware, or by hardware executing corresponding software. The hardware or software includes one or more modules corresponding to the above functions, which may be software and/or hardware.
In one possible design, the video picture processing apparatus includes:
the processing module is used for acquiring first frame data from a target video to be processed; acquiring video playing data and service data from the first frame data;
the rendering module is used for rendering the service data acquired by the processing module;
the processing module is further used for generating a first frame picture according to the video playing data and the rendered service data;
and the input and output module is used for sending the target video comprising the first frame picture to a terminal.
In yet another aspect, the present invention provides a video picture processing apparatus, which includes at least one processor, a memory, and an input/output unit connected to one another, where the memory is used for storing a computer program, and the processor is used for calling the computer program in the memory to execute the method in the above aspects.
Yet another aspect of the embodiments of the present application provides a computer-readable storage medium including instructions that, when executed on a computer, cause the computer to perform the method of the above aspects.
Compared with the prior art, in the scheme provided by the embodiments of this application, the service data and the video playing data in the first frame picture acquired from the front-end platform are logically separated, and the service data arrives already rendered, so part of the rendering work that would otherwise be executed on the client is offloaded, which improves client performance. In addition, because the client uses separate interface controls for the video playing data and the service data, redundant service-related code is stripped out. On the one hand, the first interface control focuses only on playing logic and avoids unrelated UI operations; that is, UI-related operations are separated from the player's operating logic. On the other hand, the terminal side does not need to execute UI-related rendering instructions. This further reduces performance loss and accelerates the generation speed and play-start speed of the video card.
Drawings
Fig. 1 is a schematic diagram of an interaction between a front-end platform and a terminal in an embodiment of the present application;
FIG. 2 is a flowchart illustrating a method for processing video frames according to an embodiment of the present application;
FIG. 3 is a flowchart illustrating a method for processing video frames according to an embodiment of the present application;
FIG. 4 is a schematic interface diagram of a target video frame according to an embodiment of the present disclosure;
FIG. 5a is a schematic structural diagram of a parent interface control in an embodiment of the present application;
FIG. 5b is a schematic structural diagram of a page control frame according to an embodiment of the present application;
fig. 6a is a schematic flow chart illustrating an enlarged display of a play interface in the embodiment of the present application;
FIG. 6b is a schematic interface diagram illustrating comparison between before and after the playing interface is enlarged and displayed in the embodiment of the present application;
FIG. 7a is a schematic structural diagram of a video view in a page control frame before moving in an embodiment of the present application;
FIG. 7b is a schematic structural diagram of a moved video view in a page control frame according to an embodiment of the present application;
fig. 8 is a schematic flow chart illustrating a reduced display of a play interface in the embodiment of the present application;
FIG. 9 is a schematic diagram illustrating a comparison between a current solution and a solution in a development cycle of a client according to an embodiment of the present application;
FIG. 10 is a diagram illustrating a comparison between the average playing time of the first frame of the current scheme and the average playing time of the present scheme according to an embodiment of the present application;
FIG. 11 is a schematic diagram of a blockchain system according to an embodiment of the present application;
FIG. 12 is a schematic diagram of a video frame processing apparatus according to an embodiment of the present application;
FIG. 13 is a schematic diagram of a video frame processing apparatus according to an embodiment of the present application;
FIG. 14 is a block diagram of an embodiment of a video frame processing apparatus;
fig. 15 is a schematic structural diagram of a terminal in an embodiment of the present application;
fig. 16 is a schematic structural diagram of a front end platform in an embodiment of the present application.
Detailed Description
The terms "first," "second," and the like in the description, claims, and drawings of the embodiments of this application are used to distinguish between similar elements and do not necessarily describe a particular sequence or chronological order. It should be understood that data so used may be interchanged where appropriate, so that the embodiments described herein can be practiced in orders other than those illustrated or described. Furthermore, the terms "comprise" and "have," and any variations thereof, are intended to cover non-exclusive inclusion: a process, method, system, article, or apparatus that comprises a list of steps or modules is not necessarily limited to those expressly listed, but may include other steps or modules not expressly listed or inherent to it. The division of modules presented in this application is merely a logical division and may be implemented differently in practice; multiple modules may be combined or integrated into another system, or some features may be omitted or not implemented. The couplings, direct couplings, or communication connections shown or discussed may be through interfaces, and indirect couplings or communication connections between modules may be electrical or of other forms; the embodiments of this application are not limited in this respect. Moreover, modules or sub-modules described as separate components may or may not be physically separate, may or may not be physical modules, or may be distributed over multiple circuit modules, and some or all of them may be selected according to actual needs to achieve the purpose of the embodiments of this application.
The embodiments of this application provide a method for processing video pictures, a related apparatus, and a storage medium, which can be used in application scenarios such as online playing and local playing. The scheme can be applied on the terminal side, where the first frame picture to be played is generated and displayed. In the embodiments of this application, a terminal is taken only as an example: a client (which may also be called a video picture processing apparatus, and may be a video client or an audio/video client; this is not limited in this embodiment) may be deployed on the terminal side, and the terminal may be a node in a blockchain system. In some embodiments, the embodiments of this application mainly involve a front-end platform and a terminal, whose interaction is shown in fig. 1 and described in detail below.
As shown in fig. 1, the target video to be processed is disassembled on the front-end platform to obtain video playing data and service data; script code is written to generate a series of UI-related rendering instructions, the service data is rendered, and a first frame picture is generated from the video playing data and the rendered service data. The target video containing the first frame picture is then sent, through the bridge in the page control frame at the bottom layer of the terminal, to the client installed on the terminal. In the client, the video view is associated with the playing component, and both the service view and the video view are controlled by the page control component.
It should be particularly noted that the front-end platform in the embodiments of this application refers to the front-end part of a website, which runs in a browser on a device such as a computer or a mobile terminal to display the webpage browsed by the user. The front-end platform may be a server or a web server.
A terminal in the embodiments of this application may refer to a device that provides voice and/or data connectivity to a user, a handheld device with wireless connectivity, or another processing device connected to a wireless modem, for example mobile telephones (or "cellular" telephones) and computers with mobile terminals, such as portable, pocket-sized, handheld, built-in, or vehicle-mounted mobile devices that exchange voice and/or data with a radio access network. Examples of such devices include Personal Communication Service (PCS) phones, cordless phones, Session Initiation Protocol (SIP) phones, Wireless Local Loop (WLL) stations, and Personal Digital Assistants (PDAs).
The embodiment of the application mainly provides the following technical scheme:
firstly, from the aspects of improving the performance of the terminal and generating the first frame card
The front-end platform disassembles the first frame data in video a to be played to obtain video playing data and service data, renders the service data, generates a first frame picture based on the rendered service data and the video playing data, and sends video a', which contains the first frame picture, to the terminal. The terminal obtains video a' from the front-end platform and uses two independent interface controls to control the service data and the video playing data in the first frame picture respectively. When generating the first frame card (i.e., the target video picture that provides the user with a play entry), the terminal can therefore concentrate on processing the video playing data, which reduces the terminal's performance loss and speeds up generation of the first frame card.
Second, from the perspective of zooming the video picture
For example, take the full-screen display of a played video picture, and the exit from full-screen display, as examples. Zooming can be triggered by clicking an icon option, dragging, grabbing, and the like; this is not limited in the embodiments of this application.
The embodiment of the present application mainly relates to the following scenarios:
(1) Scenes for reducing the video interface:
a. Displaying, in reduced form, an interface that has not been reduced or enlarged.
b. Displaying, in reduced form, a video interface that has been enlarged.
c. Displaying, in reduced form, an interface that has already been reduced. Point c covers the case where a reduction has already been performed at least once, i.e., this is not the first reduction.
(2) Scenes for enlarging the video interface:
a. Displaying, in enlarged form, an interface that has not been reduced or enlarged.
b. Displaying, in enlarged form, a video interface that has already been enlarged. Point b covers the case where an enlargement has already been performed at least once, i.e., this is not the first enlargement.
c. Displaying, in enlarged form, a video interface that has been reduced.
Referring to fig. 2, a method for processing video pictures provided in an embodiment of the present application is described below from a front-end platform perspective, where the embodiment of the present application includes:
201. Acquiring first frame data from a target video to be processed.
The target video is the video to be transmitted to the terminal for online playing. The first frame data includes the video playing data and the service data; it is video data in its initial state, in which the video playing data and the service data have not been logically separated. The first frame data may also be called a video card, a first frame picture, and so on; this is not limited in the embodiments of this application.
The service data refers to video-related service data; for example, it includes the title, avatar, play count, comment count, and the like.
202. Acquiring video playing data and service data from the first frame data.
In some embodiments, the video playing data and the service data can be obtained by disassembling the first frame data. The acquired service data can be independently assigned front-end UI tags.
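The disassembly in this step can be pictured with a short sketch. The field names below (`streamUrl`, `avatarUrl`, and so on) are assumptions for illustration; the patent does not specify the data layout:

```typescript
// Hypothetical shape of step 202: the first frame data is disassembled into
// video playing data and service data.
interface FirstFrameData {
  streamUrl: string;   // video playing data
  title: string;       // service data, from here on
  avatarUrl: string;
  playCount: number;
  commentCount: number;
}

function disassemble(frame: FirstFrameData) {
  const playData = { streamUrl: frame.streamUrl };
  const { title, avatarUrl, playCount, commentCount } = frame;
  const serviceData = { title, avatarUrl, playCount, commentCount };
  return { playData, serviceData };
}
```

After this split, only `playData` needs to travel to the play-logic side, while `serviceData` can be rendered on the front-end platform.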
203. Rendering the service data.
In some embodiments, the service data is in addition further disassembled into single-class data, and each single class is assigned its own front-end UI tag. Specifically, the rendering of the service data includes:
splitting the service data into at least one class of data;
setting a UI (user interface) tag for each class of data according to its type;
generating at least one video profile from the data of the at least one class;
and rendering each of the at least one video profile to obtain service UI elements, where the service UI elements are used for displaying the service form and service logic of the service.
For example, the avatar picture data is assigned a front-end Image tag, and the play-count data is assigned a front-end Text tag, to implement complex service logic.
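The tagging just illustrated can be sketched as a simple mapping from single-class items to front-end UI tags. The type and function names here are illustrative assumptions:

```typescript
// Hypothetical sketch of the tagging in step 203: each single-class service
// item is mapped to a front-end UI tag (Image for picture-like data such as
// the avatar, Text for everything else).
type UiTag = "Image" | "Text";

interface ServiceItem {
  kind: "avatar" | "title" | "playCount" | "commentCount";
  value: string;
}

function assignTag(item: ServiceItem): { tag: UiTag; value: string } {
  const tag: UiTag = item.kind === "avatar" ? "Image" : "Text";
  return { tag, value: item.value };
}

// Rendering then turns each tagged item into a service UI element.
function tagServiceData(items: ServiceItem[]): { tag: UiTag; value: string }[] {
  return items.map(assignTag);
}
```

Because the tags are attached on the front-end platform, the terminal receives ready-made service UI elements instead of raw data plus rendering instructions.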
In some embodiments, a native rendering engine and UI framework can be used to render the service data, or Flutter can be used; the embodiments of this application do not limit the rendering mode or rendering tool for the service data.
In this way, on the front-end platform side, the service data and the video playing data are disassembled from each other. When the terminal side then generates the target video picture based on the video playing data and the rendered service data, the video playing data can be set on the video view alone, so that the video view focuses only on playing logic, which accelerates generation of the target video picture. In addition, because the service data is further disassembled into single-class data, each assigned a front-end UI tag, the rendering can be done on the front-end platform side; the first interface control therefore does not need to execute UI-related rendering instructions from the front-end platform, i.e., the terminal side does not need to process redundant service code. This effectively reduces the terminal's performance loss and avoids the playing component's play logic being slowed down by the execution of rendering instructions. Moreover, by disassembling and rendering the first frame data, the video view on the terminal side can concentrate on the playing logic of the video playing data without processing redundant service code, which accelerates the generation speed and play-start speed of the target video picture.
204. Generating a first frame picture according to the video playing data and the rendered service data.
The first frame picture is a video picture in which the video playing data and the service data have been logically separated; see the schematic diagram of the first frame picture shown in fig. 4. Through a click operation, the user can enter a playing interface that plays the target video.
205. Sending the target video including the first frame picture to a terminal.
Compared with the prior art, in the embodiments of this application, the video playing data and the service data in the original first frame data are logically separated, the service data is then rendered, and the first frame picture is generated according to the video playing data and the rendered service data. On the one hand, logically separating the video playing data and the service data reduces the complexity of processing the first frame data on the terminal side; on the other hand, the service data is rendered on the front-end platform, so the terminal side can concentrate on playing logic when processing the first frame data. Since no UI elements need to be rendered, performance loss can be reduced, the generation speed of the first frame picture can be improved, and the time before the user sees the first frame picture can be further shortened.
Referring to fig. 3, a method for processing video pictures provided in the embodiment of the present application is described below from the perspective of the terminal side, where the method is executed by the terminal side. In the embodiment corresponding to fig. 2, the first frame data of the target video is processed to obtain a target video including the first frame picture; after the terminal side acquires the target video, the first frame picture in the target video is processed for playing on the terminal side. The target video in the embodiment of the application may be played online or played locally on the terminal side, and the embodiment of the application does not limit the acquisition channel or acquisition mode of the target video. The embodiment of the application only takes the online-playing interaction scene between the front-end platform and the terminal as an example, and other playing scenes are not limited. Specifically, the embodiment of the present application includes:
301. and acquiring a first frame picture of the target video.
The first frame picture comprises video playing data and rendered service data.
The target video is a video to be transmitted to the terminal for online playing. The first frame data comprises video playing data and service data; it is the video data in an initial state, in which the video playing data and the service data have not yet been logically separated. The first frame data may also be referred to as a video card, a first frame picture, etc., which is not limited in this embodiment of the present application.
The service data refers to service data related to the video; for example, the service data includes a title, an avatar, a playing amount, a number of comments, and the like.
It should be noted that the target video in the embodiment of the present application may be played online or locally played at the terminal side, and the embodiment of the present application does not limit the acquisition channel and the acquisition mode of the target video.
302. And associating the video playing data to a first interface control and associating the service data to a second interface control.
The first interface control is used for controlling the playing logic of a playing component, for example, controlling player operations such as setting the play link, playing, pausing and resuming.
The second interface control is used for controlling the display of the service data on the user interface and controlling service logic, such as the display logic of service data including the playing amount, popularity, number of comments, video tags and author information. The second interface control is the service view. It bears the service UI elements to be displayed, such as a play button, the playing amount, the title, and the like.
For services, a player mainly focuses on and provides playing requirements (i.e., functions such as playing, pausing and resuming), and cannot well present the specific service forms of a video during playing (e.g., service data such as video popularity, total playing amount and number of video comments). However, if the service data were included in the video view, the already complex player logic would become even more complex. The embodiment of the present application therefore adopts a new playing-service presentation component that meets both the playing requirement and the service supplement requirement and is easy to extend and develop subsequently. The playing-service presentation component can include a first interface control (dedicated to controlling the playing logic) and a second interface control (for controlling the service logic).
In some embodiments of the present application, the method may be implemented based on a page control frame, where the page control frame includes a parent interface control, the first interface control, the second interface control, and the playing component, and the first interface control and the second interface control belong to the same parent interface control. The page control frame may be a dynamic framework, such as Viola, Weex, or Hippy. Fig. 5a shows a schematic structural diagram of the parent interface control framework, and discloses the parent-child relationship among the parent interface control (i.e., the parent view), the video view (i.e., the first interface control), and the service view (i.e., the second interface control).
The first frame picture is split into video playing data and service data, which are respectively associated with the video view (i.e., the first interface control) and the service view (i.e., the second interface control), the service view being a sibling node of the video view by default. In the layout order of the parent view shared by the video view and the service view, the service view is laid out over the video view, which makes it convenient for the service view to present specific service logic, implement a floating operation bar, and the like.
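The hierarchy just described can be modeled with a short sketch, under assumed names (`ParentView`, `VideoView`, `ServiceView`) rather than the framework's real API: the video view and the service view are sibling children of one parent view, and the service view, added after its sibling, is laid out over the video view.

```typescript
interface ChildView { name: string; }

class VideoView implements ChildView {          // first interface control
  name = "videoView";
  play(url: string): string { return `playing ${url}`; }  // playing logic only
}

class ServiceView implements ChildView {        // second interface control
  name = "serviceView";
  constructor(public data: { title: string; playCount: number }) {}
}

class ParentView {
  // children are ordered bottom-to-top in the layout
  children: ChildView[] = [];
  add(v: ChildView): void { this.children.push(v); }
  topLayer(): ChildView { return this.children[this.children.length - 1]; }
}

const parent = new ParentView();
const video = new VideoView();
const service = new ServiceView({ title: "demo", playCount: 1024 });
parent.add(video);     // lower layer
parent.add(service);   // covers the upper layer of the video view
```

Keeping the service view as the topmost sibling is what lets it host floating operation bars without touching the player.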
In some embodiments, after the interface control frame is obtained and before the first frame picture is generated into the target video picture, a calling relationship, a calling mode and a communication mode between the parent control and the child view controls may be further set, so that the first interface control focuses more on the logic control of the playing component, the playing logic is decoupled from the service logic, and interaction with the front-end platform during playing is realized. Specifically, the method further comprises:
associating the first interface control with the play component;
and setting a calling mode and a communication mode between the parent interface control and the first interface control.
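The two steps above can be sketched as follows. This is a hedged, illustrative model (all class names are assumptions): the first interface control is associated with the playing component, and the parent interface control calls the child's playing logic through an interface — the calling mode — so the first interface control stays focused on play control.

```typescript
interface PlayingComponent { start(url: string): string; }

class SimplePlayer implements PlayingComponent {
  started = false;
  start(url: string): string { this.started = true; return `start:${url}`; }
}

class FirstInterfaceControl {
  constructor(private component: PlayingComponent) {} // association step
  // the first interface control exposes only playing logic
  play(url: string): string { return this.component.start(url); }
}

class ParentInterfaceControl {
  constructor(private videoView: FirstInterfaceControl) {}
  // calling mode: the parent invokes the child's play logic via its interface
  onFirstMessage(url: string): string { return this.videoView.play(url); }
}

const player = new SimplePlayer();
const firstControl = new FirstInterfaceControl(player);
const parentControl = new ParentInterfaceControl(firstControl);
const startResult = parentControl.onFirstMessage("video://target");
```

Because the parent only sees the `play` interface, UI concerns never leak into the first interface control.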
303. And generating a target video picture from the first frame picture through the first interface control and the second interface control.
The target video picture is a video picture that provides a playing entry for the user; by clicking the video picture, the user can trigger the playing component to execute the playing logic of the target video.
Compared with the prior art, in the embodiment of the application, since the service data and the video playing data in the first frame picture acquired from the front-end platform are logically separated and the service data is already rendered, part of the rendering work that would otherwise be executed on the client is offloaded, which improves client performance. In addition, because the client uses separate interface controls for the video playing data and the service data, redundant service-related code is stripped out on the client. On the one hand, the first interface control focuses only on the playing logic and avoids unrelated UI operations, i.e., UI-related operations are separated from the player's operating logic. On the other hand, the terminal side does not need to execute UI-related rendering instructions. This further reduces performance loss and accelerates both the generation speed and the play-starting speed of the video card.
In some embodiments, after the first frame picture is generated into the target video picture, the target video picture can be played to be rapidly displayed to the user. The method further comprises the following steps:
displaying video playing data in the target video picture through the first interface control, and displaying service data in the target video picture through the second interface control;
receiving a first message of a user, wherein the first message is used for requesting to play the target video;
and starting the playing component through the first interface control so as to control the playing component to play the target video picture.
Therefore, since the playing logic is decoupled from the service logic through the first interface control and the second interface control, when the first frame picture (i.e., the target video picture) is played, only the first interface control needs to control the playing of the target video picture, and the display of the service logic controlled by the second interface control can be ignored, so that the speed of playing the first frame picture (i.e., the target video picture) is improved.
In some embodiments, after setting a calling relationship, a calling mode, and a communication mode between a parent interface control and each child control (i.e., a first interface control and a second interface control), the first interface control may be called by the parent interface control to start the playing component to display the target video frame; and receiving the first message through the parent interface control.
For example, the first interface control is the video view and the parent interface control is the parent view. In the structural diagram of the page control frame shown in fig. 5b, the view provided by the page control frame can handle UI logic, for example updating the layout information of UI elements on the playing interface and switching the visibility of UI elements on the playing interface (e.g., switching UI elements between visible and invisible). To let the video view concentrate on the logical control of the player, such as the play, pause, replay and stop functions, the parent view calls the playing logic of the video view in the form of an interface. Since the parent view inherits the component view provided by the page control framework (e.g., the dynamic framework Viola), it has the dynamic framework's capability of bidirectional communication, so interaction with the front-end platform during playing can be realized. Moreover, this form of logical separation facilitates the extension of subsequent services and the maintenance of the original services.
Optionally, in some embodiments of the application, while viewing the target video, the user may enlarge (e.g., display in full screen, or enlarge the current playing interface by a certain proportion) or reduce (e.g., exit full screen, or reduce the current playing interface by a certain proportion) the playing interface of the target video. The width and height of the service view that presents the specific service form in the existing first frame picture are often specified and hard-coded in advance; when the video goes full screen and the page rotates, the specified width and height no longer apply, so all UI elements on the playing page become disordered after the target video is enlarged. If the presentation and layout of the initial service view were intervened in during the full-screen process to keep the UI elements from being confused, a large performance overhead would be incurred. Therefore, to ensure that the playing interface adapts to the display screen and the layout of the UI elements stays intact during these operations (zooming in or zooming out), without re-laying out each UI element, an abstract class and a Modal component can be introduced into the page control frame. The abstract class is a parent class continuously extracted from common requirements; it is a declaration of functions and attributes, indicating the contents that things of its kind should have. Examples of the abstract class include the window. The Modal component is modified from the most basic frame layout (FrameLayout) and mainly shows and hides the service logic. The Modal component is associated with the second interface control (i.e., the service view).
The following is introduced from performing zoom-in display and zoom-out display on a playing interface of a target video, respectively:
firstly, amplifying and displaying a playing interface of a target video
A user may click an icon on the client or perform a gesture operation to trigger an enlargement operation on the playing interface of the target video. Specifically, as shown in fig. 6a, an embodiment of the present application includes:
401. a second message is received.
And the second message is used for requesting the amplified display of the first playing interface of the target video.
402. Moving the first interface control from the parent interface control to the abstract class.
In some implementations, the moving the first interface control from the parent interface control to an abstract class includes:
removing the first interface control from the parent interface control, and recording an initial index and initial layout information of the first interface control in the parent interface control;
clearing the initial layout information arranged on the first interface control;
adding the first interface control to the abstract class.
For example, as shown in fig. 7a, fig. 7a is a view-hierarchy structure diagram of the page control frame before the first playing interface is displayed in an enlarged manner, and fig. 7b is a structure hierarchy diagram of the page control frame after the first playing interface is displayed in an enlarged manner. The video view to be displayed full screen (i.e., the first interface control) is extracted from the view hierarchy shown in fig. 7a, and the index of the video view in the parent view (i.e., the parent interface control) of fig. 7a and its initial layout information in that parent view are recorded. The initial layout information set on the video view is cleared. The video view to be displayed full screen is then added to the window, resulting in the structural hierarchy of the page control frame shown in fig. 7b.
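The three sub-steps above can be modeled hypothetically (all class names are illustrative stand-ins, not the framework's API): remove the video view from the parent, record its initial index and layout information, clear the layout, and add the view to the window (the abstract class).

```typescript
interface LayoutInfo { width: number; height: number; }

class MovableView {
  layout: LayoutInfo | null;
  constructor(public name: string, layout: LayoutInfo) { this.layout = layout; }
}

class ViewContainer {
  children: MovableView[] = [];
  add(v: MovableView): void { this.children.push(v); }
  remove(v: MovableView): number {        // returns the index the view held
    const i = this.children.indexOf(v);
    this.children.splice(i, 1);
    return i;
  }
}

function moveToWindow(parent: ViewContainer, win: ViewContainer, video: MovableView) {
  const initialIndex = parent.remove(video);   // record initial index
  const initialLayout = video.layout;          // record initial layout info
  video.layout = null;                         // clear the layout set on the view
  win.add(video);                              // add the view to the window
  return { initialIndex, initialLayout };      // kept for restoring later
}

const parentView = new ViewContainer();
const windowView = new ViewContainer();
const videoView = new MovableView("video", { width: 400, height: 500 });
parentView.add(new MovableView("other", { width: 100, height: 100 }));
parentView.add(videoView);
const saved = moveToWindow(parentView, windowView, videoView);
```

The saved index and layout are exactly what the later reduction step needs to undo this move.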
403. And updating the size of the first playing interface to the size setting information in the abstract class.
In some embodiments, the updating the size of the first playing interface to the size setting information in the abstract class includes:
setting new layout information for the first interface control;
setting the target size of the first playing interface according to the size setting information in the parent interface control;
and generating the second playing interface according to the new layout information and the target size.
For example, while adding the video view to the window, new layout description information needs to be set for this video view, with both width and height set to match its parent. Its parent view has become the window, whose size is as wide as the display screen, so the purpose of UI element adaptation is already achieved at this point.
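The size update can be sketched with an assumed `"match_parent"`-style adaptive attribute standing in for the framework's real layout attribute: the new layout of the video view resolves to the size of its new parent, the window, which is sized to the screen.

```typescript
type SizeSpec = number | "match_parent";
interface Size { width: number; height: number; }

function resolveLayout(spec: { width: SizeSpec; height: SizeSpec }, parent: Size): Size {
  const resolve = (s: SizeSpec, p: number): number => (s === "match_parent" ? p : s);
  return {
    width: resolve(spec.width, parent.width),
    height: resolve(spec.height, parent.height),
  };
}

// the window has become the parent and is as wide and high as the screen
const screen: Size = { width: 1080, height: 1920 };
const fullScreenLayout = resolveLayout(
  { width: "match_parent", height: "match_parent" }, screen);
// fixed sizes still resolve to themselves, so arbitrary zoom is also possible
const manualLayout = resolveLayout({ width: 800, height: 1000 }, screen);
```

Using the adaptive attribute instead of fixed numbers is what makes the layout follow the screen without per-element re-layout.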
404. And displaying the first playing interface by calling the abstract class.
In some embodiments, when the abstract class is called to display the first playing interface, the service data in the first playing interface may also be hidden; for example, when the current first playing interface is displayed in full screen, the service data may be hidden. Specifically, after the second message is received and before the first playing interface is rotated, the second interface control is moved from the parent interface control to the abstract class and set to be invisible. When the second interface control is associated with the Modal component, the Modal component can be removed from the window and added back at the initial service view level, and the visibility of the service view is set to invisible at the same time, so that the service side's originally set width and height are restored while the view stays hidden.
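A small sketch of this hiding step, under assumed names (not the actual Modal API): the Modal component carrying the service view is removed from the window, added back at the initial service view level, and the service view is set to invisible so its originally set width and height are retained while it stays hidden.

```typescript
class ServiceModal {
  attachedTo: "window" | "serviceLevel" = "window";
  visible = true;
  layout = { width: 400, height: 120 };  // width/height set by the service side
}

function hideServiceForFullScreen(modal: ServiceModal): void {
  modal.attachedTo = "serviceLevel"; // remove from window, re-add at the initial level
  modal.visible = false;             // service data is hidden during full screen
}

const modal = new ServiceModal();
hideServiceForFullScreen(modal);
```

Note that only visibility changes; the service-side layout is untouched, so nothing needs re-laying-out when full screen exits.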
Therefore, when displaying full screen, the service logic is hidden, so that the service side can implement its service logic more conveniently and efficiently in the full-screen scene, and the performance overhead is reduced.
405. And rotating the first playing interface to generate a second playing interface.
In some embodiments, the first playing interface can be rotated by rotating the Activity, which changes the width and height of the view layout and provides a better landscape viewing experience. Meanwhile, because the Activity automatically saves data during rotation so that the view is displayed normally before and after full screen, the Activity's own rotation method is called in the process of enlarging the first playing interface. Before the video goes full screen, the service side's presentation is usually not perceptibly affected; for example, before full screen the status bar and the navigation bar of the first playing interface are visible.
As shown in fig. 6b, when a certain distance exists between the boundary of the first playing interface currently playing the target video and the boundary of the display screen, the scheme of the embodiment of the present application is adopted to amplify the first playing interface after receiving an amplification instruction of a user for the first playing interface, so as to obtain a second playing interface.
406. And displaying the second playing interface.
For example, for any enlargement scene, the view can be extracted from its original view level and added to the window, so that UI elements are not confused, and preparation is made for stretching the width and height in the next step. In the embodiment of the application, the video view added to the window uses an adaptive layout attribute to become as wide and high as the window, or even as wide and high as the mobile phone screen; the width and height of the video view can also be set manually here, so that the video can be enlarged arbitrarily.
For example, the initial size of the video view is a width of 400dp and a height of 500dp; after the video view is added to the window, the width and height are set to 800dp and 1000dp respectively, and the video view is thus enlarged.
(1) If the first playing picture is currently enlarged by a certain scale and needs to be enlarged again on the basis of the current scale, the same idea applies: first extract the view from its original view level and add it to the window, then set a larger width and height (if the video view was already added to the window before this enlargement, the width and height are set directly, without dynamically adjusting the view level).
(2) If the first playing picture is currently reduced to a certain proportion and needs to be enlarged on the basis of the current reduction, first adjust the hierarchical relationship of the view, and then set a larger width and height (for the specific implementation, refer to the embodiment of enlarged display).
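The 400dp-to-800dp example above works out as follows (an assumed helper, not the framework's API): a video view of 400dp x 500dp is added to the window and its width and height are manually set to 800dp x 1000dp, doubling it in each dimension.

```typescript
interface DpSize { width: number; height: number; }

function scaleFactors(initial: DpSize, target: DpSize) {
  return {
    width: target.width,                       // new width set on the view
    height: target.height,                     // new height set on the view
    factorW: target.width / initial.width,     // horizontal enlargement factor
    factorH: target.height / initial.height,   // vertical enlargement factor
  };
}

const scaled = scaleFactors({ width: 400, height: 500 }, { width: 800, height: 1000 });
// factorW and factorH are both 2: the view is doubled in each dimension
```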
Therefore, when entering full screen, UI self-adaptation is realized by extracting the video view from its original hierarchical structure and adding it to the window. Finally, the Activity is rotated to change the width and height of the whole view layout, so that the pre-full-screen video view is rotated 90 degrees into full screen, giving a better landscape viewing experience. Because the full-screen instruction is initiated by the front-end side and the client handles it on its own when the instruction is received using the approach provided by this scheme, no excessive intervention on the front-end side is needed, and the logic remains clean, tidy and maintainable.
Secondly, performing reduced display on the playing interface of the target video
A user may click an icon or perform a gesture operation in the client to trigger a reduction operation on the playing interface of the target video. Specifically, as shown in fig. 8, an embodiment of the present application includes:
501. a third message is received.
And the third message is used for requesting the first playing interface of the target video to be displayed in a reduced mode.
In some embodiments, if the status bar and the navigation bar were hidden during the enlarged display (e.g., the full-screen process), the service side's service presentation logic needs to be restored at this point (if any). Specifically, after the third message is received and before the first playing interface is rotated, the second interface control is moved from the abstract class back to the initial level of the parent interface control and set to be visible.
502. Moving the first interface control from the abstract class into the parent interface control.
Moving the first interface control from the abstract class back into the parent interface control restores the initial level of the video view. Since the video view was added to the window while the playing interface of the target video was enlarged, the video view is now stripped from the window, its current layout information is cleared, and it is added back to the initial parent view in combination with the previously stored initial position information.
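This restore is the hypothetical inverse of the earlier move-to-window step (names are illustrative): the video view is stripped from the window, its current layout is replaced by the stored initial layout, and it is added back to the initial parent view at the previously recorded index.

```typescript
interface StoredLayout { width: number; height: number; }

class RestorableView {
  layout: StoredLayout | null = null;
  constructor(public name: string) {}
}

class ViewGroup {
  children: RestorableView[] = [];
  add(v: RestorableView): void { this.children.push(v); }
  addAt(v: RestorableView, index: number): void { this.children.splice(index, 0, v); }
  remove(v: RestorableView): void { this.children.splice(this.children.indexOf(v), 1); }
}

function restoreFromWindow(win: ViewGroup, parent: ViewGroup, video: RestorableView,
                           initialIndex: number, initialLayout: StoredLayout): void {
  win.remove(video);                  // strip the video view from the window
  video.layout = initialLayout;       // restore the stored layout information
  parent.addAt(video, initialIndex);  // back at its initial position in the parent
}

const win = new ViewGroup();
const parentGroup = new ViewGroup();
const video = new RestorableView("video");
parentGroup.add(new RestorableView("sibling"));
win.add(video);
restoreFromWindow(win, parentGroup, video, 0, { width: 400, height: 500 });
```

Recording the index during enlargement is what lets the view land back among its original siblings in the right order.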
503. And displaying the first playing interface by calling the first interface control.
504. And rotating the first playing interface to generate a second playing interface.
In some embodiments, the first playing interface may be rotated by calling the Activity rotation in reverse.
505. And displaying the second playing interface.
In the embodiment of the application, when the playing interface is displayed in an enlarged manner, the front end hides the service logic, so that the service side can implement the service logic more conveniently and efficiently in the enlarged scene, and the performance overhead can be reduced.
For ease of understanding, the method for processing video pictures in the embodiment of the present application is described below with the connected-mode service, which is a video service in the Viewpoint video of a communication client. A Viewpoint user clicks the video channel in the Viewpoint video of the communication client and jumps to the corresponding playing interface by clicking a video card in the video channel. The connected mode is the main page on which the Viewpoint user consumes videos. The main interface comprises at least one combination of a vertical video card stream, a horizontal video card stream, or a mixed horizontal-and-vertical stream. The service interaction in the main interface is complex: functions such as video pause, resume, full screen, mute and progress-bar dragging involve the player end of the Viewpoint video; page-end behaviors involve slide-to-play, playing the next video after full screen, and the like; and there are also double-click like, sharing with friends, sharing to Viewpoint, commenting, and so on.
In the embodiment of the present application, the first interface control (i.e., the video view component) is implemented based on the page control frame Viola. Fig. 9 shows a manpower comparison between the connected-mode native implementation and the connected-mode Viola implementation. In the connected-mode native version, the development workload mainly involves the server and the client; the development time spent on the server and the client is about 34 days, with 2 developers invested at each end. After the scheme of the embodiment of the application is adopted, in the connected-mode Viola version, the service logic is migrated to the front-end platform for development, and client developers only need to focus on the implementation of the video component. The version adopting this scheme takes 42 days to complete in total, of which the Viola page takes 27 days, with 5 developers invested in the process, where the Viola development (i.e., native modification and service logic development) is completed on the client. Therefore, the scheme of the application can effectively shorten the development cycle.
In addition, in terms of key video playing metrics, the video component implemented by this scheme performs better than a video component implemented purely natively. Taking the key metric of the average time to play the first frame of the video as an example: since the video view component implemented by this scheme strips the service coupling logic out of the whole process from generation to playing (for example, service logic such as reporting and rendering is migrated to the front-end platform), the time before the user sees the first frame of the video is shorter. Fig. 10 is a schematic diagram comparing the average first-frame playing time of the prior art with that of this scheme. In fig. 10, the dotted line represents the average first-frame playing time of the video view component implemented by this scheme (about 260 ms), and the solid line is the average time of a purely native video view component (about 380 ms).
In this embodiment, the first frame data, the first frame picture, and the target playing picture may all be stored in the block chain. The blockchain is a novel application mode of computer technologies such as distributed data storage, point-to-point transmission, a consensus mechanism and an encryption algorithm. A block chain (Blockchain), which is essentially a decentralized database, is a series of data blocks associated by using a cryptographic method, and each data block contains information of a batch of network transactions, so as to verify the validity (anti-counterfeiting) of the information and generate a next block. The blockchain may include a blockchain underlying platform, a platform product services layer, and an application services layer.
The blockchain underlying platform can comprise processing modules such as user management, basic service, smart contract and operation monitoring. The user management module is responsible for identity information management of all blockchain participants, including public/private key generation and maintenance (account management), key management, and maintenance of the correspondence between a user's real identity and blockchain address (authority management); with authorization, it supervises and audits the transactions of certain real identities and provides rule configuration for risk control (risk-control audit). The basic service module is deployed on all blockchain node devices and is used to verify the validity of service requests and, after consensus is reached on valid requests, record them to storage; for a new service request, the basic service first performs interface adaptation analysis and authentication processing (interface adaptation), then encrypts the service information through a consensus algorithm (consensus management), transmits it completely and consistently to the shared ledger (network communication) after encryption, and records and stores it. The smart contract module is responsible for contract registration and issuance, contract triggering and contract execution; developers can define contract logic through a programming language and issue it to the blockchain (contract registration), and the contract is triggered by keys or other events and executed according to the logic of the contract terms, completing the contract logic; the module also provides the functions of upgrading and cancelling contracts. The operation monitoring module is mainly responsible for deployment during product release, configuration modification, contract settings, cloud adaptation, and visual output of real-time status during product operation, such as alarms, monitoring network conditions, and monitoring the health status of node devices.
A video picture processing apparatus (which may also be referred to as a server) that performs the method of processing a video picture in the embodiments of the present application may be a node in a blockchain system, as shown in fig. 11.
Any technical feature mentioned in the embodiment corresponding to any one of fig. 1 to 10 is also applicable to the embodiment corresponding to fig. 12 to 16 in the embodiment of the present application, and the details of the subsequent similarities are not repeated.
In the above description, a method for processing video pictures in the embodiment of the present application is described, and a device (including a terminal and a front-end platform) for executing the video picture processing is described below.
Referring to fig. 12, the schematic structural diagram of a video picture processing apparatus 60 shown in fig. 12 is applicable to processing the first frame picture of a video played online. The video picture processing apparatus in the embodiment of the present application can implement the steps corresponding to the method for processing a video picture performed in the embodiment corresponding to any one of fig. 1 to 10. The functions implemented by the video picture processing apparatus 60 may be implemented by hardware, or by hardware executing corresponding software; the hardware or software includes one or more modules corresponding to the above functions, and the modules may be software and/or hardware. The video picture processing apparatus 60 may include an input/output module 601, a processing module 602 and a display module 603. For the function of the input/output module 601, refer to the obtaining and outputting operations executed in the embodiment corresponding to any one of fig. 1 to 10; for the function of the processing module 602, refer to the associating and generating operations executed in the embodiment corresponding to any one of fig. 1 to 10; details are not repeated here. For example, the processing module may be used to control the input and output operations of the input/output module and to control the display operations of the display module.
In some embodiments, the input/output module 601 may be configured to obtain a first frame of picture of a target video, where the first frame of picture includes video playing data and rendered service data;
the processing module 602 may be configured to associate the video playing data acquired by the input and output module 601 with a first interface control and associate the service data with a second interface control; generating a target video picture from the first frame picture through the first interface control and the second interface control, wherein the first interface control is used for controlling a playing assembly to play playing logic of the video playing data; the second interface control is used for controlling the display of the business data on the user interface and controlling business logic.
In some embodiments, after the processing module 602 generates the first frame picture into the target video picture, the processing module is further configured to:
displaying video playing data in the target video picture through the first interface control, and displaying service data in the target video picture through the second interface control;
receiving a first message of a user through the input/output module 601, where the first message is used to request to play the target video;
and starting the playing component through the first interface control so as to control the playing component to play the target video picture.
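For ease of understanding, the separation described above can be sketched as a small model. This is an illustrative sketch only; all class and function names (PlayerControl, BusinessControl, build_target_picture) are assumptions for the example, not identifiers from the embodiments:

```python
# Illustrative model of the two-control split: the first interface control
# holds only the playback logic, while the second interface control holds
# only the already-rendered service (business) UI data.

class PlayerControl:
    """First interface control: concerned only with playing logic."""
    def __init__(self):
        self.play_data = None
        self.started = False

    def bind(self, video_play_data):
        self.play_data = video_play_data

    def start_player(self):
        # Invoked when the first message (a play request) arrives.
        self.started = True


class BusinessControl:
    """Second interface control: displays the rendered service data."""
    def __init__(self):
        self.service_data = None

    def bind(self, rendered_service_data):
        self.service_data = rendered_service_data


def build_target_picture(first_frame):
    """Associate each part of the first frame picture with its own control."""
    player, business = PlayerControl(), BusinessControl()
    player.bind(first_frame["video_play_data"])
    business.bind(first_frame["rendered_service_data"])
    return player, business
```

In this model a play request only ever touches PlayerControl, mirroring the statement that the first interface control focuses on playing logic and avoids unrelated UI operations.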
In some embodiments, the apparatus 60 is implemented based on a page control frame, where the first interface control and the second interface control belong to the same parent interface control. Before the processing module 602 generates the target video picture from the first frame picture, the processing module is further configured to:
associating the first interface control with the play component;
setting a calling mode and a communication mode between the parent interface control and the first interface control;
correspondingly, the processing module 602 is specifically configured to:
calling the first interface control through the parent interface control to start the playing component to display the target video picture;
the input/output module 601 receives the first message through the parent interface control.
In some embodiments, the page control frame further includes an abstract class, and the processing module 602 is further configured to:
receiving a second message through the input/output module 601, where the second message is used to request enlarged display of a first playing interface of the target video;
moving the first interface control from the parent interface control to the abstract class;
updating the size of the first playing interface to the size setting information in the abstract class;
displaying the first playing interface by calling the abstract class;
rotating the first playing interface to generate a second playing interface;
the second playing interface is displayed through the display module 603.
In some embodiments, the processing module 602 is specifically configured to:
removing the first interface control from the parent interface control, and recording an initial index and initial layout information of the first interface control in the parent interface control;
clearing the initial layout information arranged on the first interface control;
adding the first interface control to the abstract class.
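The three steps above amount to reparenting a view, as in Android-style view hierarchies. A minimal sketch with simple list-backed containers follows; the names Control, Container, and move_to_abstract_class, and the layout field, are illustrative assumptions, not framework API:

```python
class Control:
    def __init__(self, name, layout=None):
        self.name = name
        self.layout = layout  # layout info within the current parent


class Container:
    """Stand-in for both the parent interface control and the abstract class."""
    def __init__(self):
        self.children = []

    def remove(self, control):
        index = self.children.index(control)
        del self.children[index]
        return index  # initial index, recorded for a later restore

    def add(self, control):
        self.children.append(control)


def move_to_abstract_class(control, parent, abstract_class):
    # 1. Remove from the parent, recording initial index and layout info.
    initial_index = parent.remove(control)
    initial_layout = control.layout
    # 2. Clear the layout info that was set for the old parent.
    control.layout = None
    # 3. Add the control to the abstract class (full-screen container).
    abstract_class.add(control)
    return initial_index, initial_layout
```

Recording the initial index and layout before clearing them is what later allows the reduced display to put the control back exactly where it was.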
In one possible design, the processing module 602 is specifically configured to:
setting new layout information for the first interface control;
setting the target size of the first playing interface according to the size setting information in the parent interface control;
and generating the second playing interface according to the new layout information and the target size.
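A minimal sketch of this resize step, representing a playing interface as a plain dictionary; the key names and layout values are assumptions chosen for illustration:

```python
def generate_second_interface(first_interface, size_setting, new_layout):
    """Build the enlarged (second) playing interface from the first one:
    new layout info plus the target size taken from the size settings."""
    second = dict(first_interface)          # start from the first interface
    second["layout"] = new_layout           # new layout info for the control
    second["width"] = size_setting["width"]   # target size from settings
    second["height"] = size_setting["height"]
    return second
```

For example, a small 320x180 interface combined with full-screen size settings yields an interface carrying the full-screen dimensions and the new layout.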
In some embodiments, after the input/output module 601 receives the second message and before the first playing interface is rotated, the processing module 602 is further configured to:
moving the second interface control from the parent interface control to the abstract class;
setting the second interface control to be invisible.
In some embodiments, the page control frame further includes an abstract class, and the processing module 602 is further configured to:
receiving a third message through the input/output module 601, where the third message is used to request a reduced display of the first playing interface of the target video;
moving the first interface control from the abstract class into the parent interface control;
displaying the first playing interface by calling the first interface control;
rotating the first playing interface to generate a second playing interface;
the second playing interface is displayed through the display module 603.
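The reduced display reverses the earlier move: the first interface control returns to its recorded position in the parent, and the second interface control is made visible again. A hedged sketch using dictionaries and list-backed containers (all names are illustrative assumptions):

```python
def restore_small_window(first_ctrl, second_ctrl, parent, abstract_class,
                         initial_index, initial_layout):
    # Move the first interface control back into the parent at the
    # initial index recorded when it was removed, restoring its layout.
    abstract_class.remove(first_ctrl)
    parent.insert(initial_index, first_ctrl)
    first_ctrl["layout"] = initial_layout
    # Move the second interface control back and make it visible again.
    if second_ctrl in abstract_class:
        abstract_class.remove(second_ctrl)
        parent.append(second_ctrl)
    second_ctrl["visible"] = True
```

The symmetry with the enlargement step is the design point: because the initial index and layout were recorded on the way out, the restore needs no other bookkeeping.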
In some embodiments, the processing module 602, after the input/output module 601 receives the third message and before the first playing interface is rotated, is further configured to:
moving the second interface control from the abstract class to an initial level of the parent interface control;
setting the second interface control to be visible.
In some embodiments, the target video picture is stored on a blockchain node.
In the embodiment of the present application, the service data and the video playing data in the first frame picture acquired from the front-end platform are logically separated, and the service data is already rendered, so part of the rendering work that would otherwise be executed on the client is offloaded, which improves client performance. In addition, because the client uses a separate interface control for the video playing data and for the service data, redundant service-related code is removed from the client: on one hand, the first interface control focuses only on the playing logic and avoids unrelated UI operations, that is, UI-related operations are separated from the operating logic of the player; on the other hand, the terminal side does not need to execute UI-related rendering instructions. This further reduces performance loss and accelerates the generation speed and play-starting speed of the video card.
Referring to fig. 13, the embodiment of the present application further provides a video image processing apparatus 70, where the video image processing apparatus 70 may be a front-end platform, and may also be software deployed on the front-end platform, which is not limited in the embodiment of the present application. The video picture processing apparatus 70 includes:
a processing module 701, configured to obtain first frame data from a target video to be processed; acquiring video playing data and service data from the first frame data;
a rendering module 702, configured to render the service data acquired by the processing module 701;
the processing module 701 is further configured to generate a first frame picture according to the video playing data and the rendered service data;
the input/output module 703 is configured to send a target video including the first frame of picture to a terminal.
In some embodiments, the processing module 701 is specifically configured to:
splitting the service data into at least one class of data;
setting UI (user interface) labels for various types of data according to the types of the data;
generating at least one video introduction from the at least one class of data;
and rendering the at least one video introduction respectively to obtain service UI elements, where the service UI elements are used for displaying the service form and service logic of the service.
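These front-end steps can be modeled briefly: split the service data into single-class items, attach a UI label per class, and render each item into a service UI element. The label table and the render format below are assumptions for illustration; a real front end would emit framework-specific UI tags:

```python
# Hypothetical mapping from data class to UI label.
UI_LABELS = {"title": "h1", "subtitle": "h2", "tag": "span"}

def split_and_render(service_data):
    """Split service data into single-class items, label each class with a
    UI tag, and render every item into a service UI element string."""
    elements = []
    for cls, value in service_data.items():
        label = UI_LABELS.get(cls, "div")               # UI label per class
        elements.append(f"<{label}>{value}</{label}>")  # rendered element
    return elements
```

Because the rendering happens here on the front-end platform, the terminal receives finished UI elements and its first interface control never has to execute these service-related steps.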
In some embodiments, the first frame picture is stored on a blockchain node.
In this embodiment, the processing module 701 disassembles the service data from the data related to video playing. The data related to video playing is set to the first interface control, so that the first interface control concentrates only on the playing logic, which increases the generation speed of the complex video card. In addition, the service data is further decomposed into single-class data, and each class is set to a UI label of the front end. The purpose of this disassembly and rendering is to allow the first interface control on the terminal side to focus on the playing logic itself without processing redundant service code, thereby increasing the generation speed and play-starting speed of the card.
The video picture processing apparatus (including the apparatuses shown in fig. 12 and fig. 13) in the embodiment of the present application is described above from the perspective of modular functional entities, and the terminal and the server that execute the method for processing a video picture in the embodiment of the present application are described below from the perspective of hardware processing. It should be noted that, in the embodiment shown in fig. 12 of this application, the entity device corresponding to the input/output module 601 may be an input/output unit, a transceiver, a radio frequency circuit, a communication module, an output interface, or the like, and the entity device corresponding to the processing module 602 may be a processor. For example, the apparatus 60 shown in fig. 12 may have the structure shown in fig. 14. When it does, the processor and the input/output unit in fig. 14 can implement functions the same as or similar to those of the processing module 602 and the input/output module 601 provided in the foregoing apparatus embodiment, and the memory in fig. 14 stores a computer program that the processor needs to call when executing the above method for processing video pictures.
For another example, the apparatus 70 shown in fig. 13 may have the structure shown in fig. 14. When it does, the processor and the input/output unit in fig. 14 can implement functions the same as or similar to those of the processing module 701, the rendering module 702, and the input/output module 703 provided in the foregoing apparatus embodiment, and the memory in fig. 14 stores a computer program that the processor needs to call when executing the above method for processing a video picture.
An embodiment of the present application further provides another terminal, as shown in fig. 15. For convenience of description, only the portions related to the embodiment of the present application are shown; for specific technical details that are not disclosed, refer to the method part of the embodiment of the present application. The terminal device may be any terminal device including a mobile phone, a tablet computer, a Personal Digital Assistant (PDA), a Point of Sales (POS) terminal, a vehicle-mounted computer, and the like. The following description takes a mobile phone as an example.
Fig. 15 is a block diagram of a partial structure of the mobile phone related to the terminal device provided in the embodiment of the present application. Referring to fig. 15, the mobile phone includes: a Radio Frequency (RF) circuit 1515, a memory 1520, an input unit 1530, a display unit 1540, a sensor 1550, an audio circuit 1560, a wireless fidelity (WiFi) module 1570, a processor 1580, and a power supply 1590. Those skilled in the art will appreciate that the mobile phone structure shown in fig. 15 does not constitute a limitation on the mobile phone, which may include more or fewer components than those shown, combine some components, or use a different arrangement of components.
The following describes each component of the mobile phone in detail with reference to fig. 15:
the RF circuit 1515 may be configured to receive and transmit signals during information transmission and reception or during a call, and in particular, receive downlink information of a base station and then process the received downlink information to the processor 1580; in addition, the data for designing uplink is transmitted to the base station. In general, the RF circuitry 1515 may include, but is not limited to, an antenna, at least one Amplifier, a transceiver, a coupler, a Low Noise Amplifier (LNA), a duplexer, and the like. In addition, the RF circuitry 1515 may also communicate with networks and other devices via wireless communications. The wireless communication may use any communication standard or protocol, including but not limited to Global System for Mobile communications (GSM), General Packet Radio Service (GPRS), Code Division Multiple Access (CDMA), Wideband Code Division Multiple Access (WCDMA), Long Term Evolution (LTE), e-mail), Short Message Service (SMS), etc.
The memory 1520 may be used to store software programs and modules, and the processor 1580 performs various functional applications and data processing of the mobile phone by running the software programs and modules stored in the memory 1520. The memory 1520 may mainly include a program storage area and a data storage area, where the program storage area may store an operating system, an application program required by at least one function (such as a sound playing function or an image playing function), and the like, and the data storage area may store data (such as audio data or a phone book) created according to the use of the mobile phone. Further, the memory 1520 may include a high-speed random access memory, and may also include a non-volatile memory, such as at least one magnetic disk storage device, a flash memory device, or another non-volatile solid-state storage device.
The input unit 1530 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the cellular phone. Specifically, the input unit 1530 may include a touch panel 1531 and other input devices 1532. The touch panel 1531, also referred to as a touch screen, can collect touch operations of a user (e.g., operations of the user on or near the touch panel 1531 using any suitable object or accessory such as a finger or a stylus) and drive corresponding connection devices according to a preset program. Alternatively, the touch panel 1531 may include two parts, a touch detection device and a touch controller. The touch detection device detects the touch direction of a user, detects a signal brought by touch operation and transmits the signal to the touch controller; the touch controller receives touch information from the touch sensing device, converts the touch information into touch point coordinates, and sends the touch point coordinates to the processor 1580, and can receive and execute commands sent by the processor 1580. In addition, the touch panel 1531 may be implemented by various types such as a resistive type, a capacitive type, an infrared ray, and a surface acoustic wave. The input unit 1530 may include other input devices 1532 in addition to the touch panel 1531. In particular, other input devices 1532 may include, but are not limited to, one or more of a physical keyboard, function keys (such as volume control keys, switch keys, etc.), a trackball, a mouse, a joystick, and the like.
The display unit 1540 may be used to display information input by the user or information provided to the user and various menus of the mobile phone. The Display unit 1540 may include a Display panel 1541, and optionally, the Display panel 1541 may be configured by using a Liquid Crystal Display (LCD), an Organic Light-Emitting Diode (OLED), or the like. Further, the touch panel 1531 may cover the display panel 1541, and when the touch panel 1531 detects a touch operation on or near the touch panel 1531, the touch operation is transmitted to the processor 1580 to determine the type of the touch event, and then the processor 1580 provides a corresponding visual output on the display panel 1541 according to the type of the touch event. Although in fig. 15, the touch panel 1531 and the display panel 1541 are two separate components to implement the input and output functions of the mobile phone, in some embodiments, the touch panel 1531 and the display panel 1541 may be integrated to implement the input and output functions of the mobile phone.
The handset can also include at least one sensor 1550, such as light sensors, motion sensors, and other sensors. Specifically, the light sensor may include an ambient light sensor that adjusts the brightness of the display panel 1541 according to the brightness of ambient light and a proximity sensor that turns off the display panel 1541 and/or the backlight when the mobile phone is moved to the ear. As one of the motion sensors, the accelerometer sensor can detect the magnitude of acceleration in each direction (generally, three axes), can detect the magnitude and direction of gravity when stationary, and can be used for applications of recognizing the posture of a mobile phone (such as horizontal and vertical screen switching, related games, magnetometer posture calibration), vibration recognition related functions (such as pedometer and tapping), and the like; as for other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which can be configured on the mobile phone, further description is omitted here.
The audio circuit 1560, the speaker 1561, and the microphone 1562 can provide an audio interface between the user and the mobile phone. The audio circuit 1560 can transmit an electrical signal converted from received audio data to the speaker 1561, and the speaker 1561 converts the electrical signal into a sound signal for output. Conversely, the microphone 1562 converts a collected sound signal into an electrical signal, which the audio circuit 1560 receives and converts into audio data; the audio data is output to the processor 1580 for processing and then sent through the RF circuit 1515 to, for example, another mobile phone, or output to the memory 1520 for further processing.
WiFi is a short-distance wireless transmission technology. Through the WiFi module 1570, the mobile phone can help the user receive and send e-mails, browse web pages, access streaming media, and the like, providing the user with wireless broadband Internet access. Although fig. 15 shows the WiFi module 1570, it is to be understood that the module is not an essential component of the mobile phone and may be omitted as needed without changing the essence of the application.
The processor 1580 is a control center of the mobile phone, connects various parts of the entire mobile phone by using various interfaces and lines, and performs various functions of the mobile phone and processes data by operating or executing software programs and/or modules stored in the memory 1520 and calling data stored in the memory 1520, thereby integrally monitoring the mobile phone. Optionally, the processor 1580 may include one or more processing units; preferably, the processor 1580 may integrate an application processor, which mainly handles operating systems, user interfaces, application programs, and the like, and a modem processor, which mainly handles wireless communications. It is to be appreciated that the modem processor may not be integrated into the processor 1580.
The mobile phone further includes a power supply 1590 (e.g., a battery) for supplying power to the components. Preferably, the power supply may be logically connected to the processor 1580 via a power management system, so that functions such as charging, discharging, and power consumption management are implemented through the power management system.
Although not shown, the mobile phone may further include a camera, a bluetooth module, etc., which are not described herein.
In the embodiment of the present application, the processor 1580 included in the mobile phone further has a function of controlling the execution of the method flow executed by the apparatus 60 shown in fig. 12.
Fig. 16 is a schematic structural diagram of a server 820 provided in an embodiment of the present application. The server 820 may vary considerably depending on its configuration or performance, and may include one or more Central Processing Units (CPUs) 822 (e.g., one or more processors), a memory 832, and one or more storage media 830 (e.g., one or more mass storage devices) storing application programs 842 or data 844. The memory 832 and the storage medium 830 may be transient or persistent storage. The program stored in the storage medium 830 may include one or more modules (not shown), and each module may include a series of instruction operations on the server. Furthermore, the central processing unit 822 may be configured to communicate with the storage medium 830 and execute, on the server 820, the series of instruction operations in the storage medium 830.
The server 820 may further include one or more power supplies 826, one or more wired or wireless network interfaces 850, one or more input/output interfaces 858, and/or one or more operating systems 841, such as Windows Server, Mac OS X, Unix, Linux, FreeBSD, and the like.
The steps performed by the server in the above embodiment may be based on the structure of the server 820 shown in fig. 16. The steps performed by the apparatus 60 shown in fig. 12 in the above embodiment may also, for example, be based on the server structure shown in fig. 16. For example, the processor 822, by calling instructions in the memory 832, performs the following operations:
acquiring a first frame of picture of a target video, wherein the first frame of picture comprises video playing data and rendered service data;
associating the acquired video playing data with a first interface control and associating the service data with a second interface control; and generating a target video picture from the first frame picture through the first interface control and the second interface control, where the first interface control is used for controlling the playing logic of a playing component that plays the video playing data, and the second interface control is used for controlling the display of the service data on the user interface and controlling the service logic.
In the foregoing embodiments, the descriptions of the respective embodiments have respective emphasis, and for parts that are not described in detail in a certain embodiment, reference may be made to related descriptions of other embodiments.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working processes of the system, the apparatus and the module described above may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. For example, the above-described apparatus embodiments are merely illustrative, and for example, the division of the modules is merely a logical division, and in actual implementation, there may be other divisions, for example, multiple modules or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or modules, and may be in an electrical, mechanical or other form.
The modules described as separate parts may or may not be physically separate, and parts displayed as modules may or may not be physical modules, may be located in one place, or may be distributed on a plurality of network modules. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment.
In addition, functional modules in the embodiments of the present application may be integrated into one processing module, or each module may exist alone physically, or two or more modules are integrated into one module. The integrated module can be realized in a hardware mode, and can also be realized in a software functional module mode. The integrated module, if implemented in the form of a software functional module and sold or used as a stand-alone product, may be stored in a computer readable storage medium.
In the above embodiments, the implementation may be wholly or partially realized by software, hardware, firmware, or any combination thereof. When implemented in software, may be implemented in whole or in part in the form of a computer program product.
The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions described in the embodiments of the present application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center in a wired (e.g., coaxial cable, optical fiber, Digital Subscriber Line (DSL)) or wireless (e.g., infrared, radio, microwave) manner. The computer-readable storage medium may be any available medium accessible to a computer, or a data storage device such as a server or a data center integrating one or more available media. The available medium may be a magnetic medium (e.g., a floppy disk, a hard disk, or a magnetic tape), an optical medium (e.g., a DVD), a semiconductor medium (e.g., a Solid State Disk (SSD)), or the like.
The technical solutions provided by the embodiments of the present application are described in detail above. The principles and implementations of the embodiments of the present application are explained herein by using specific examples, and the description of the embodiments is only used to help understand the method and core ideas of the embodiments. Meanwhile, a person skilled in the art may make changes to the specific implementations and application scope according to the ideas of the embodiments of the present application. In summary, the content of this specification should not be construed as a limitation on the embodiments of the present application.

Claims (9)

1. A method for processing video pictures, the method comprising:
associating a first interface control with a play component;
setting a calling mode and a communication mode between a parent interface control and a first interface control; wherein the parent interface control comprises a first interface control and a second interface control;
acquiring a first frame of picture of a target video, wherein the first frame of picture comprises video playing data and rendered service data;
associating the video playing data to a first interface control and the service data to a second interface control; the first interface control is used for controlling the playing logic of a playing component that plays the video playing data; the second interface control is used for controlling the display of the service data on the user interface and controlling service logic;
generating a target video picture from the first frame picture through the first interface control and the second interface control;
displaying video playing data in the target video picture through the first interface control, and displaying service data in the target video picture through the second interface control;
receiving a first message of a user, wherein the first message is used for requesting to play the target video;
starting the playing component through the first interface control to control the playing component to play the target video picture;
wherein the method further comprises:
receiving a second message, wherein the second message is used for requesting enlarged display of a first playing interface of the target video;
moving the first interface control from the parent interface control to an abstract class of a page control frame;
updating the size of the first playing interface to the size setting information in the abstract class;
displaying the first playing interface by calling the abstract class;
rotating the first playing interface to generate a second playing interface;
and displaying the second playing interface.
2. The method of claim 1, wherein the starting the playing component through the first interface control to control the playing component to play the target video picture comprises:
calling the first interface control through the parent interface control to start the playing component to display the target video picture;
receiving the first message through the parent interface control.
3. The method of claim 1, wherein moving the first interface control from the parent interface control to an abstract class comprises:
removing the first interface control from the parent interface control, and recording an initial index and initial layout information of the first interface control in the parent interface control;
clearing the initial layout information arranged on the first interface control;
adding the first interface control to the abstract class.
4. The method according to claim 1 or 3, wherein the updating the size of the first playing interface to the size setting information in the abstract class comprises:
setting new layout information for the first interface control;
setting the target size of the first playing interface according to the size setting information in the parent interface control;
and generating the second playing interface according to the new layout information and the target size.
5. The method of claim 4, wherein after receiving the second message and before rotating the first playback interface, the method further comprises:
moving the second interface control from the parent interface control to the abstract class;
setting the second interface control to be invisible.
6. The method of claim 1, wherein the page control frame further comprises an abstract class, the method further comprising:
receiving a third message, wherein the third message is used for requesting the first playing interface of the target video to be displayed in a reduced mode;
moving the first interface control from the abstract class into an initial hierarchy of the parent interface control;
displaying the first playing interface by calling the first interface control;
rotating the first playing interface to generate a second playing interface;
and displaying the second playing interface.
7. The method of claim 6, wherein after receiving the third message and before rotating the first playback interface, the method further comprises:
moving the second interface control from the abstract class to an initial level of the parent interface control;
setting the second interface control to be visible.
8. The method of claim 1, wherein the target video picture is stored on a blockchain node.
9. A video picture processing apparatus, characterized in that the apparatus comprises:
the input and output module is used for associating the first interface control with the playing component; setting a calling mode and a communication mode between a parent interface control and a first interface control; wherein the parent interface control comprises a first interface control and a second interface control; acquiring a first frame of picture of a target video, wherein the first frame of picture comprises video playing data and rendered service data;
the processing module is used for associating the video playing data to a first interface control and associating the service data to a second interface control; the first interface control is used for controlling the playing logic of a playing component that plays the video playing data; the second interface control is used for controlling the display of the service data on the user interface and controlling service logic; generating a target video picture from the first frame picture through the first interface control and the second interface control; displaying video playing data in the target video picture through the first interface control, and displaying service data in the target video picture through the second interface control; receiving a first message of a user, wherein the first message is used for requesting to play the target video; and starting the playing component through the first interface control to control the playing component to play the target video picture;
wherein the processing module is further configured to: receiving a second message, wherein the second message is used for requesting enlarged display of a first playing interface of the target video; moving the first interface control from the parent interface control to an abstract class of a page control frame; updating the size of the first playing interface to the size setting information in the abstract class; displaying the first playing interface by calling the abstract class; rotating the first playing interface to generate a second playing interface; and displaying the second playing interface.
CN202010245048.4A 2020-03-31 2020-03-31 Method for processing video pictures, related device and storage medium Active CN111432265B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010245048.4A CN111432265B (en) 2020-03-31 2020-03-31 Method for processing video pictures, related device and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010245048.4A CN111432265B (en) 2020-03-31 2020-03-31 Method for processing video pictures, related device and storage medium

Publications (2)

Publication Number Publication Date
CN111432265A CN111432265A (en) 2020-07-17
CN111432265B true CN111432265B (en) 2021-08-31

Family

ID=71557326

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010245048.4A Active CN111432265B (en) 2020-03-31 2020-03-31 Method for processing video pictures, related device and storage medium

Country Status (1)

Country Link
CN (1) CN111432265B (en)

Families Citing this family (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111736744B (en) * 2020-07-22 2020-11-24 成都新希望金融信息有限公司 Monitoring early warning calculation subsystem based on DSL
CN111935532B (en) * 2020-08-14 2024-03-01 腾讯科技(深圳)有限公司 Video interaction method and device, electronic equipment and storage medium
CN112492332A (en) * 2020-11-13 2021-03-12 北京达佳互联信息技术有限公司 Data display control method and device, electronic equipment and storage medium
CN112770168B (en) * 2020-12-23 2023-10-17 广州虎牙科技有限公司 Video playing method, related device and equipment
CN113365150B (en) * 2021-06-04 2023-02-07 上海哔哩哔哩科技有限公司 Video processing method and video processing device
CN113596521A (en) * 2021-07-29 2021-11-02 武汉中科通达高新技术股份有限公司 Video playing control method and device, electronic equipment and storage medium
CN114138392A (en) * 2021-12-04 2022-03-04 杭州安恒信息技术股份有限公司 Page display method, system, electronic device and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102221953A (en) * 2010-04-14 2011-10-19 上海中标软件有限公司 Realization method for transparent user interface video player and player thereof
CN104737095A (en) * 2012-07-24 2015-06-24 美国港口集团公司 Systems and methods involving features of terminal operation including user interface and/or other features
CN105630507A (en) * 2015-12-29 2016-06-01 Tcl集团股份有限公司 Method and device for drawing WebView control interface
CN106507164A (en) * 2016-11-14 2017-03-15 四川长虹电器股份有限公司 Audio-visual refrigerator based on HTML5
CN107608594A (en) * 2017-09-08 2018-01-19 维沃移动通信有限公司 Display method for multiple applications and mobile terminal
CN108235086A (en) * 2017-12-18 2018-06-29 广州华多网络科技有限公司 Video playing control method, device and corresponding terminal
CN109634608A (en) * 2018-12-17 2019-04-16 江苏满运软件科技有限公司 Interface dynamic generation method, system, equipment and medium
WO2019191082A3 (en) * 2018-03-27 2019-11-14 Skreens Entertainment Technologies, Inc. Systems, methods, apparatus and machine learning for the combination and display of heterogeneous sources
CN110806856A (en) * 2019-10-30 2020-02-18 亚信科技(中国)有限公司 Data loading method and device and electronic equipment

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102567030B (en) * 2012-01-06 2015-09-30 深圳市酷开网络科技有限公司 Television user interface implementation method and system
CN105511725A (en) * 2015-12-09 2016-04-20 网易(杭州)网络有限公司 Method and device for displaying controls in interface
CN107038112B (en) * 2016-10-13 2020-09-01 腾讯科技(北京)有限公司 Application interface debugging method and device


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Design and Implementation of a Browser Video Plug-in Based on QtWebkit; Li Di et al.; Computer Technology and Development; 2012-02-28; full text *

Also Published As

Publication number Publication date
CN111432265A (en) 2020-07-17

Similar Documents

Publication Publication Date Title
CN111432265B (en) Method for processing video pictures, related device and storage medium
US20200278949A1 (en) Method and apparatus for viewing previously used file, and terminal
US10768881B2 (en) Multi-screen interaction method and system in augmented reality scene
US10673790B2 (en) Method and terminal for displaying instant messaging message
CN110933511B (en) Video sharing method, electronic device and medium
WO2022017107A1 (en) Information processing method and apparatus, computer device and storage medium
US20190155488A1 (en) Buddy list presentation control method and system, and computer storage medium
CN111274777B Mind map display method and electronic device
US20160133006A1 (en) Video processing method and apparatus
CN110825302A (en) Method for responding operation track and operation track responding device
WO2021175143A1 (en) Picture acquisition method and electronic device
WO2021073579A1 (en) Method for capturing scrolling screenshot and terminal device
CN112068762A (en) Interface display method, device, equipment and medium of application program
CN109871358A Management method and terminal device
CN108920069A Touch operation method, device, mobile terminal and storage medium
CN107908330A Management method for application icons and mobile terminal
CN111444540A (en) Display method, electronic device, and medium
CN112954046A (en) Information sending method, information sending device and electronic equipment
CN114422461A (en) Message reference method and device
CN109491632A Resource sharing method and terminal
CN108536388A (en) split screen processing method, device, storage medium and electronic equipment
KR20230061519A (en) Screen capture methods, devices and electronics
CN112099714B (en) Screenshot method and device, electronic equipment and readable storage medium
CN109889661A Interface display control method and mobile terminal
WO2020238477A1 (en) Editing method and mobile terminal

Legal Events

Date Code Title Description
PB01 Publication
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40026298

Country of ref document: HK

SE01 Entry into force of request for substantive examination
GR01 Patent grant