CN111954067B - Method for improving video rendering efficiency and user interaction fluency - Google Patents

Method for improving video rendering efficiency and user interaction fluency

Info

Publication number
CN111954067B
CN111954067B CN202010899491.3A
Authority
CN
China
Prior art keywords
rendering
event
video
rendering module
module
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010899491.3A
Other languages
Chinese (zh)
Other versions
CN111954067A (en)
Inventor
杨净
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hangzhou Shidong Technology Co ltd
Original Assignee
Hangzhou Shidong Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hangzhou Shidong Technology Co ltd filed Critical Hangzhou Shidong Technology Co ltd
Priority to CN202010899491.3A priority Critical patent/CN111954067B/en
Publication of CN111954067A publication Critical patent/CN111954067A/en
Application granted granted Critical
Publication of CN111954067B publication Critical patent/CN111954067B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/45Management operations performed by the client for facilitating the reception of or the interaction with the content or administrating data related to the end-user or to the client device itself, e.g. learning user preferences for recommending movies, resolving scheduling conflicts
    • H04N21/462Content or additional data management, e.g. creating a master electronic program guide from data received from the Internet and a Head-end, controlling the complexity of a video stream by scaling the resolution or bit-rate based on the client capabilities

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Databases & Information Systems (AREA)
  • User Interface Of Digital Computer (AREA)
  • Controls And Circuits For Display Device (AREA)

Abstract

The invention discloses a method for improving video rendering efficiency and user interaction fluency, which comprises the following steps: step 1, initializing a video rendering module and putting it into a dormant state; step 2, waking the rendering module with an external trigger event; step 3, the rendering module processes the event, of which there are three kinds; step 4, entering a vertical synchronization state: if the current timestamp is more than 1/60 second after the last rendering start time, proceed directly to the next step; otherwise, if the event is a data event, the frame is dropped, and if it is a UI event, it enters a timed queue and is rendered at the first time point at or after the next vertical synchronization interval; step 5, entering a rendering state, binding the current frame to the graphics API with the default parameters, and then calling the graphics API to render. The invention achieves a better balance between video playback fluency and rendering efficiency, and enables highly efficient video rendering.

Description

Method for improving video rendering efficiency and user interaction fluency
Technical Field
The invention relates to the technical field of computer software, and in particular to a method for improving video rendering efficiency and user interaction fluency.
Background
Video rendering is a technology for displaying decoded raw video data. There are two common video rendering approaches:
one is to render the image data directly, so that the refresh rate of rendering is tied to the frame rate of the video; its drawback is that with a low-frame-rate video (fewer than 60 frames per second), UI operations (zoom in, zoom out, rotation, animation, etc.) stutter;
the other is to poll for image data and render at a fixed frequency; although this keeps UI operations smooth, its drawback is that when no UI operation is triggered the polling frequency is often higher than the frame rate of the video, wasting performance unnecessarily.
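The waste of the second conventional approach can be quantified with a small simulation (a hypothetical illustration; the function and its names are not from the patent): polling at a fixed display rate repeatedly re-renders frames that have not changed.

```python
def wasted_polls(video_fps: int, poll_hz: int, duration_s: float = 1.0) -> int:
    """Count poll ticks that would re-render an unchanged video frame."""
    frame_times = [i / video_fps for i in range(int(video_fps * duration_s))]
    poll_times = [i / poll_hz for i in range(int(poll_hz * duration_s))]
    wasted = 0
    next_frame = 0      # index of the next decoded frame not yet available
    last_shown = None   # index of the frame rendered on the previous tick
    for t in poll_times:
        while next_frame < len(frame_times) and frame_times[next_frame] <= t:
            next_frame += 1
        newest = next_frame - 1          # newest decoded frame at time t
        if newest == last_shown:
            wasted += 1                  # nothing changed; this render was useless
        last_shown = newest
    return wasted
```

For a 24 fps video polled at 60 Hz, 36 of the 60 renders each second repeat an unchanged frame; this is the performance waste the method below is designed to remove.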
Disclosure of Invention
The invention aims to provide a method for improving video rendering efficiency and user interaction fluency, so as to solve the problems of slow UI response and wasted performance in the video rendering process described in the background art.
In order to achieve this purpose, the invention provides the following technical scheme: a method for improving video rendering efficiency and user interaction fluency, comprising the following steps:
step 1, initializing a video rendering module and putting it into a dormant state;
step 2, waking the rendering module with an external trigger event;
step 3, the rendering module processes the event, of which there are three kinds:
(1) data event processing: an external module requests that video data be rendered, and the rendering module receives the raw video data in a format such as I420 or NV12;
(2) UI event processing: a UI operation that changes the video display effect has been triggered, and the rendering module adjusts internal parameters such as matrices, vertices, and other values according to the operation;
(3) error or exit event processing: the rendering module has encountered an unrecoverable error, or an external module has triggered an exit operation; the rendering module reclaims its resources and exits the main loop;
step 4, entering a vertical synchronization state: if the current timestamp is more than 1/60 second after the last rendering start time, proceed directly to the next step; otherwise, if the event is a data event, drop the frame, and if it is a UI event, place it in a timed queue and render it at the first time point at or after the next vertical synchronization interval;
step 5, entering a rendering state, binding the current frame to the graphics API with the default parameters, and then calling the graphics API to render;
step 6, checking the current event queue; if other events exist, jump to step 3; otherwise return to step 1, and the video rendering module re-enters the dormant state.
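The six steps above can be sketched as an event-driven main loop. The following is a minimal illustration, not the patent's implementation; the event names, the `render` callback, and the use of a blocking queue are all assumptions.

```python
import queue
import time
from enum import Enum, auto

class Event(Enum):
    DATA = auto()   # raw decoded frame (e.g. I420/NV12) ready to render
    UI = auto()     # user operation changed the display parameters
    EXIT = auto()   # unrecoverable error or external exit request

VSYNC = 1 / 60      # vertical synchronization interval from step 4

def render_loop(events: "queue.Queue[Event]", render, vsync: float = VSYNC) -> None:
    """Sketch of steps 1-6: sleep, wake on an event, vsync-gate, render."""
    last_start = float("-inf")           # let the first frame through immediately
    while True:
        ev = events.get()                # steps 1-2: dormant until an event arrives
        if ev is Event.EXIT:             # step 3(3): reclaim resources, leave loop
            return
        elapsed = time.monotonic() - last_start
        if elapsed < vsync:              # step 4: still inside the vsync window
            if ev is Event.DATA:
                continue                 # data event: drop the frame
            time.sleep(vsync - elapsed)  # UI event: wait for the next vsync point
        last_start = time.monotonic()
        render(ev)                       # step 5: bind the frame, call the graphics API
        # step 6 is implicit: Queue.get() blocks (sleeps) while the queue is empty
```

Blocking on `Queue.get()` plays the role of the dormant state: with no events pending, the loop consumes no CPU, which is the efficiency gain the method claims over fixed-frequency polling.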
Preferably, in step 3, a data event is a rendering event triggered by raw data produced by video decoding.
Preferably, in step 3, a UI event is an event triggered by a change in image effect caused by a UI operation such as zoom in, zoom out, or rotation.
The method for improving video rendering efficiency and user interaction fluency provided by the invention has the following beneficial effects:
1. the invention strikes a good balance between video playback fluency and rendering efficiency, enabling highly efficient video rendering;
2. the invention achieves video rendering with little wasted performance and smooth response to UI operations.
Drawings
Fig. 1 is a schematic structural diagram of the video rendering module of the present invention.
Fig. 2 is a schematic structural diagram of the present invention in use.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Referring to figs. 1-2, the present invention provides a technical solution: a method for improving video rendering efficiency and user interaction fluency, comprising the following steps: step 1, initializing a video rendering module and putting it into a dormant state; step 2, waking the rendering module with an external trigger event; step 3, the rendering module processes the event, of which there are three kinds: (1) data event processing: an external module requests that video data be rendered, and the rendering module receives the raw video data in a format such as I420 or NV12; a data event is a rendering event triggered by raw data produced by video decoding; (2) UI event processing: a UI operation that changes the video display effect has been triggered, and the rendering module adjusts internal parameters such as matrices, vertices, and other values according to the operation; a UI event is an event triggered by a change in image effect caused by a UI operation such as zoom in, zoom out, or rotation; (3) error or exit event processing: the rendering module has encountered an unrecoverable error, or an external module has triggered an exit operation; the rendering module reclaims its resources and exits the main loop; step 4, entering a vertical synchronization state: if the current timestamp is more than 1/60 second after the last rendering start time, proceed directly to the next step; otherwise, if the event is a data event, drop the frame, and if it is a UI event, place it in a timed queue and render it at the first time point at or after the next vertical synchronization interval; step 5, entering a rendering state, binding the current frame to the graphics API with the default parameters, and then calling the graphics API to render; step 6, checking the current event queue; if other events exist, jump to step 3; otherwise return to step 1, and the video rendering module re-enters the dormant state.
In this embodiment, events are divided into two types: one is a rendering event triggered by raw data produced by video decoding, referred to as a data event for short; the other is an event triggered by an image-effect change such as zoom in, zoom out, or rotation of a UI operation, referred to as a UI event. With no event to trigger it, the rendering module stays dormant, so no useless rendering is performed and performance waste is avoided. When raw data decoded by the video decoder needs to be displayed on screen, a data event is triggered by calling an interface provided by the rendering module. If the time since the last data event was triggered is less than the vertical synchronization interval (1/60 second), that frame of video is discarded and not rendered; the human eye is not sensitive to picture changes above 60 frames per second, so this reduces wasted performance and also prevents picture tearing. If the time since the last data event is greater than the vertical synchronization interval, rendering is performed.
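The frame-drop rule for data events can be stated as a one-line predicate (an illustrative sketch; the function name and its string return values are invented for clarity):

```python
VSYNC_S = 1 / 60   # the vertical synchronization interval cited in the text

def handle_data_event(now: float, last_start: float) -> str:
    """Data-event rule from the embodiment: a frame arriving within one vsync
    interval of the previous render start is discarded; the eye cannot perceive
    changes above ~60 fps, and skipping the frame also avoids tearing."""
    if now - last_start < VSYNC_S:
        return "drop"
    return "render"
```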
Similar to a data event, a UI event is triggered by a user UI operation such as a click, pinch, or touch drag; it immediately wakes the rendering module, changes the rendering parameters, and then performs vertical synchronization and rendering.
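Unlike data events, UI events are never dropped: inside the vsync window they are deferred rather than discarded. A minimal sketch of that scheduling decision (the function name is an assumption, not from the patent):

```python
def ui_render_time(now: float, last_start: float, vsync: float = 1 / 60) -> float:
    """Return when a UI event should be rendered: immediately if the vsync
    window is already open, otherwise at the next vsync point (step 4's
    'first time point at or after the vertical synchronization interval')."""
    if now - last_start >= vsync:
        return now                 # window open: render immediately
    return last_start + vsync      # defer to the next vsync point
```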
Although embodiments of the present invention have been shown and described, it will be appreciated by those skilled in the art that changes, modifications, substitutions and alterations can be made in these embodiments without departing from the principles and spirit of the invention, the scope of which is defined in the appended claims and their equivalents.

Claims (1)

1. A method for improving video rendering efficiency and user interaction fluency, characterized in that it comprises the following steps: step 1, initializing a video rendering module and putting it into a dormant state; step 2, waking the rendering module with an external trigger event; step 3, the rendering module processes the event, of which there are three kinds: (1) data event processing: an external module requests that video data be rendered, and the rendering module receives the raw video data in a format such as I420 or NV12; a data event is a rendering event triggered by raw data produced by video decoding; (2) UI event processing: a UI operation that changes the video display effect has been triggered, and the rendering module adjusts internal parameters such as matrices, vertices, and other values according to the operation; a UI event is an event triggered by a change in image effect caused by a UI operation such as zoom in, zoom out, or rotation; (3) error or exit event processing: the rendering module has encountered an unrecoverable error, or an external module has triggered an exit operation; the rendering module reclaims its resources and exits the main loop; step 4, entering a vertical synchronization state: if the current timestamp is more than 1/60 second after the last rendering start time, proceed directly to the next step; otherwise, if the event is a data event, drop the frame, and if it is a UI event, place it in a timed queue and render it at the first time point at or after the next vertical synchronization interval; step 5, entering a rendering state, binding the current frame to the graphics API with the default parameters, and then calling the graphics API to render; step 6, checking the current event queue; if other events exist, jump to step 3; otherwise return to step 1, and the video rendering module re-enters the dormant state;
the events are divided into two types: one is a rendering event triggered by raw data produced by video decoding, referred to as a data event for short; the other is an event triggered by an image-effect change such as zoom in, zoom out, or rotation of a UI operation; the rendering module enters a dormant state when no event is triggered, so no useless rendering is performed and performance waste is avoided;
similar to a data event, a UI event is triggered by a user UI operation such as a click, pinch, or touch drag; it immediately wakes the rendering module, changes the rendering parameters, and then performs vertical synchronization and rendering.
CN202010899491.3A 2020-09-01 2020-09-01 Method for improving video rendering efficiency and user interaction fluency Active CN111954067B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010899491.3A CN111954067B (en) 2020-09-01 2020-09-01 Method for improving video rendering efficiency and user interaction fluency

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010899491.3A CN111954067B (en) 2020-09-01 2020-09-01 Method for improving video rendering efficiency and user interaction fluency

Publications (2)

Publication Number Publication Date
CN111954067A CN111954067A (en) 2020-11-17
CN111954067B true CN111954067B (en) 2022-10-04

Family

ID=73367663

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010899491.3A Active CN111954067B (en) 2020-09-01 2020-09-01 Method for improving video rendering efficiency and user interaction fluency

Country Status (1)

Country Link
CN (1) CN111954067B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115550709B (en) * 2022-01-07 2023-09-26 荣耀终端有限公司 Data processing method and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040181611A1 (en) * 2003-03-14 2004-09-16 Viresh Ratnakar Multimedia streaming system for wireless handheld devices
CN103177744A (en) * 2011-12-21 2013-06-26 深圳市快播科技有限公司 Low-power-consumption playing method and device for mobile equipment
CN106296566A (en) * 2016-08-12 2017-01-04 南京睿悦信息技术有限公司 A kind of virtual reality mobile terminal dynamic time frame compensates rendering system and method
CN110035328A (en) * 2017-11-28 2019-07-19 辉达公司 Dynamic dithering and delay-tolerant rendering
CN110945849A (en) * 2017-04-21 2020-03-31 泽尼马克斯媒体公司 System and method for encoder hint based rendering and precoding load estimation
CN111163345A (en) * 2018-11-07 2020-05-15 杭州海康威视系统技术有限公司 Image rendering method and device


Also Published As

Publication number Publication date
CN111954067A (en) 2020-11-17

Similar Documents

Publication Publication Date Title
TWI431465B (en) Method, article of manufacture, apparatus and system for regulating power consumption
CN111479016B (en) Terminal use duration reminding method and device, terminal and storage medium
CN111954067B (en) Method for improving video rendering efficiency and user interaction fluency
WO2014101418A1 (en) Video preview display method and terminal device
CN110858827A (en) Broadcast starting acceleration method and device and computer readable storage medium
US20140092109A1 (en) Computer system and method for gpu driver-generated interpolated frames
WO2022001452A1 (en) Information display method and apparatus, wearable device, and storage medium
CN111108470A (en) Whole wall redisplay method and device for distributed splicing system and computer equipment
CN109656639B (en) Interface rolling method, device, equipment and medium
CN106484348B (en) Animation drawing method and system based on synchronous signals
CN108769815B (en) Video processing method and device
CN104717509A (en) Method and device for decoding video
CN106569573B (en) Display method and device, display control method and device, and equipment
CN114125498A (en) Video data processing method, device, equipment and storage medium
US11513937B2 (en) Method and device of displaying video comments, computing device, and readable storage medium
US8773442B2 (en) Aligning animation state update and frame composition
JP5212110B2 (en) Moving image photographing apparatus with zoom function, image processing and display method and program
CN114937118A (en) Model conversion method, apparatus, device and medium
CN110557627B (en) Performance monitoring method and device and storage medium
CN112118473B (en) Video bullet screen display method and device, computer equipment and readable storage medium
CN114327714A (en) Application program control method, device, equipment and medium
WO2019184142A1 (en) Information prompting method, electronic apparatus, terminal device, and storage medium
CN113778425A (en) Method for playing ppt animation in browser based on canvas
CN115841711B (en) Control method and device of automobile data recorder, automobile data recorder and medium
CN113490044B (en) Video playing method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant