CN114786051B - Video rendering method and device, electronic equipment and storage medium
- Publication number: CN114786051B (application CN202210344075.6A)
- Authority: CN (China)
- Prior art keywords: rendering, region, special effect, current, target
- Legal status: Active
Classifications
- H04N21/44012—Processing of video elementary streams involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
- H04N21/4312—Generation of visual interfaces for content selection or interaction involving specific graphical features, e.g. screen layout, special fonts or colors, blinking icons, highlights or animations
- H04N21/439—Processing of audio elementary streams
- H04N21/44016—Processing of video elementary streams involving splicing one content stream with another content stream, e.g. for substituting a video clip
- (all under H04N21/00 Selective content distribution, e.g. interactive television or video on demand [VOD]; H04N21/40 Client devices; H04N21/43 Processing of content or additional data)
Abstract
The application discloses a video rendering method and apparatus, an electronic device, and a storage medium, relating to the field of computer vision, in particular to image processing technology, and applicable to cloud computing scenarios. The specific implementation scheme is as follows: receive a current rendering period selected by a user on the track axis corresponding to a target video, where the current rendering period is the span from a start rendering time to an end rendering time; determine the region of interest corresponding to the current rendering period in the target video; and render that region of interest. The method and apparatus satisfy the requirement of finer-grained regional rendering and achieve a more accurate rendering effect.
Description
Technical Field
The disclosure relates to the field of computer vision, in particular to image processing technology applicable to cloud computing scenarios, and more specifically to a video rendering method, a video rendering apparatus, an electronic device, and a storage medium.
Background
With the development of electronic and computer technology, terminals of all kinds for capturing and processing videos and pictures have become widespread. People can shoot high-quality pictures and videos not only with professional video cameras and still cameras, but also with mobile phones, anytime and anywhere, and play them back just as easily. Image rendering is an important step in generating a program's visual output and is widely used in applications with picture-rendering requirements, such as games. It is implemented mainly by executing image-rendering logic on a rendering object, i.e., the picture object to be rendered, such as a button control, an art-text control, or a background picture.
In current video special-effect rendering methods, once a filter or special effect is applied, the entire canvas at the current moment, or the entire picture of a single video, participates fully in the filter and effect rendering. With this approach the scope of filters and effects is too coarse-grained, and regional effect rendering at a finer granularity cannot be satisfied accurately.
Disclosure of Invention
The disclosure provides a video rendering method, a video rendering device, an electronic device and a storage medium.
In a first aspect, the present application provides a video rendering method, including:
receiving a current rendering period selected by a user on the track axis corresponding to a target video; wherein the current rendering period is the span from a start rendering time to an end rendering time;
determining a region of interest corresponding to the current rendering period in the target video;
and rendering the region of interest corresponding to the current rendering period.
In a second aspect, the present application provides a video rendering apparatus, the apparatus comprising: the device comprises a receiving module, a determining module and a rendering module; wherein,
the receiving module is used for receiving a current rendering period selected by a user on the track axis corresponding to the target video; wherein the current rendering period is the span from a start rendering time to an end rendering time;
the determining module is configured to determine, in the target video, the region of interest corresponding to the current rendering period;
and the rendering module is used for rendering the region of interest corresponding to the current rendering period.
In a third aspect, an embodiment of the present application provides an electronic device, including:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement a video rendering method as described in any embodiment of the present application.
In a fourth aspect, the present application provides a storage medium, on which a computer program is stored, where the computer program, when executed by a processor, implements a video rendering method according to any of the embodiments of the present application.
In a fifth aspect, a computer program product is provided, which when executed by a computer device implements the video rendering method of any of the embodiments of the present application.
The technical scheme provided by the application satisfies the requirement of finer-grained regional rendering and achieves a more accurate rendering effect.
It should be understood that the statements in this section are not intended to identify key or critical features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The drawings are included to provide a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
fig. 1 is a first flowchart of a video rendering method according to an embodiment of the present disclosure;
fig. 2 is a second flowchart of a video rendering method according to an embodiment of the present disclosure;
fig. 3 is a third flowchart of a video rendering method according to an embodiment of the present application;
FIG. 4 is a diagram illustrating a structure of a target video scope provided by an embodiment of the present application;
FIG. 5 is a diagram illustrating a structure of a canvas scope provided by an embodiment of the present application;
fig. 6 is a schematic structural diagram of a video rendering apparatus according to an embodiment of the present application;
fig. 7 is a block diagram of an electronic device for implementing a video rendering method according to an embodiment of the present application.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, in which various details of the embodiments of the disclosure are included to assist understanding, and which are to be considered as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Example one
Fig. 1 is a first flowchart of a video rendering method according to an embodiment of the present disclosure, where the method may be executed by a video rendering apparatus or an electronic device, where the apparatus or the electronic device may be implemented by software and/or hardware, and the apparatus or the electronic device may be integrated in any intelligent device with a network communication function. As shown in fig. 1, the video rendering method may include the steps of:
S101, receiving a current rendering period selected by a user on the track axis corresponding to a target video; wherein the current rendering period is the span from a start rendering time to an end rendering time.
In this step, the electronic device receives the current rendering period selected by the user on the track axis corresponding to the target video. Specifically, the user first imports material such as videos and pictures, then drags the material to be edited onto a track axis; different kinds of material may correspond to separate track axes, e.g., subtitles, stickers, media, and audio. The user then selects a time span as the current rendering period, i.e., the span from the start rendering time to the end rendering time. For example, the user may select a current rendering period that starts at 00:00:10:00 and ends at 00:00:30:00.
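To make the idea concrete, a minimal sketch follows (not part of the patent text; all names are hypothetical) that models the current rendering period as a start/end pair on the track axis:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class RenderingPeriod:
    """A span on a track axis, in seconds from the start of the project."""
    start: float  # start rendering time
    end: float    # end rendering time

    def __post_init__(self):
        if self.end <= self.start:
            raise ValueError("end rendering time must follow start rendering time")

    def contains(self, t: float) -> bool:
        """True if time point t falls inside the period."""
        return self.start <= t <= self.end

# e.g. the user drags a selection from 00:00:10:00 to 00:00:30:00
period = RenderingPeriod(start=10.0, end=30.0)
```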
S102, determining the region of interest corresponding to the current rendering period in the target video.
In this step, the electronic device determines, in the target video, the region of interest corresponding to the current rendering period. Specifically, the electronic device may first receive an instruction sent by the user to manually select a region of interest; then, in response to that instruction, receive the user's drag operation in the video frames corresponding to the current rendering period, and select one or more regions in the target video, based on the drag operation, as the regions of interest corresponding to the current rendering period. Alternatively, the electronic device may receive an instruction sent by the user to automatically select regions of interest; in response, it extracts one or more regions of interest from the video frames corresponding to the current rendering period as the regions of interest corresponding to that period.
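The sketch below illustrates both selection paths under stated assumptions: the rectangle math follows the drag-based description, while the automatic path uses a stock face detector purely as a stand-in, since the patent does not specify an extraction algorithm.

```python
import cv2

def roi_from_drag(p0, p1):
    """Manual selection: build an (x, y, w, h) rectangle from a drag gesture."""
    (x0, y0), (x1, y1) = p0, p1
    return (min(x0, x1), min(y0, y1), abs(x1 - x0), abs(y1 - y0))

def rois_automatic(frame):
    """Automatic selection stand-in: the patent leaves the extraction method
    open, so a stock face detector is used here only for illustration."""
    detector = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    return [tuple(int(v) for v in r) for r in detector.detectMultiScale(gray)]
```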
S103, rendering the region of interest corresponding to the current rendering period.
In this step, the electronic device renders the region of interest corresponding to the current rendering period. Specifically, the electronic device may extract a time point within the current rendering period as the current rendering time, render the region of interest corresponding to that time, and repeat the operation until the region of interest corresponding to every time point in the current rendering period has been rendered. More specifically, if the target special effect is added to the target video, the electronic device renders the region of interest within the display area of the target video based on the target special effect; if the target special effect is added on the preview-window canvas, the electronic device renders the region of interest on the preview-window canvas based on the target special effect.
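A minimal sketch of that per-time-point loop, reusing the hypothetical RenderingPeriod from the S101 sketch; frame_at and effect are assumed callbacks, and the fixed fps step is an assumption, not something the patent specifies:

```python
def render_period(frame_at, period, roi, effect, fps=25.0):
    """Render the region of interest at every time point in the period.

    frame_at(t) returns the H x W x 3 frame at time t (assumed callback);
    effect(patch) returns the rendered patch (assumed callback).
    """
    x, y, w, h = roi
    t, step = period.start, 1.0 / fps
    while t <= period.end:          # each time point in the current period
        frame = frame_at(t)         # current rendering time
        frame[y:y + h, x:x + w] = effect(frame[y:y + h, x:x + w])
        t += step
```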
The video rendering method provided by this embodiment first receives a current rendering period selected by the user on the track axis corresponding to a target video, then determines the region of interest corresponding to the current rendering period in the target video, and finally renders that region of interest. That is, what is rendered is the region of interest corresponding to the current rendering period within the target video, not the entire target video and not the entire preview-window canvas. In existing video rendering methods, once a filter or special effect is applied, the whole canvas at the current moment or the whole picture of a single video participates in the filter and effect rendering. By selecting a current rendering period, determining a region of interest, and rendering that region, this embodiment solves the prior-art problem that the scope of filters and effects can only be the current canvas or a whole video within it and cannot satisfy finer-grained regional rendering. Moreover, the technical scheme of this embodiment is simple and convenient to implement, easy to popularize, and broadly applicable.
Example two
Fig. 2 is a second flowchart of a video rendering method according to an embodiment of the present application. This embodiment further optimizes and extends the technical scheme above and can be combined with each of the optional implementations described. As shown in fig. 2, the video rendering method may include the steps of:
S201, receiving a current rendering period selected by a user on the track axis corresponding to a target video; wherein the current rendering period is the span from a start rendering time to an end rendering time.
S202, receiving an instruction sent by a user to manually select the region of interest.
In this step, the electronic device may receive an instruction sent by the user to manually select the region of interest. In this embodiment, the region of interest may be selected either manually or automatically. In the manual manner, the electronic device first receives the user's instruction to manually select a region of interest; then, in response, it receives the user's drag operation in the video frames corresponding to the current rendering period, and selects one or more regions in the target video, based on the drag operation, as the regions of interest corresponding to the current rendering period. Optionally, in the automatic manner, the electronic device first receives the user's instruction to automatically select regions of interest, and in response extracts one or more regions of interest from the video frames corresponding to the current rendering period as the regions of interest corresponding to that period.
S203, receiving a dragging operation of a user in a video frame corresponding to the current rendering time period in response to the instruction of manually selecting the region of interest.
In this step, the electronic device may receive, in response to the instruction for manually selecting the region of interest, the user's drag operation in the video frames corresponding to the current rendering period. Specifically, the region of interest in this embodiment may be a rectangular region: each vertex of the rectangle can be represented as a coordinate (X, Y), and the size of the rectangle can be represented as (Width, Height). The user can therefore perform a drag operation on the target video to select one or more rectangular regions as the regions of interest corresponding to the current rendering period. Optionally, the user may instead select two coordinates on the target video, used respectively as the top-left vertex and the bottom-right vertex of the region of interest, from which the region of interest is determined.
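A small sketch of the two-corner variant, assuming pixel coordinates and a known frame size (both assumptions, not stated in the patent):

```python
def roi_from_corners(top_left, bottom_right, frame_w, frame_h):
    """Derive an (X, Y, Width, Height) rectangle from two selected vertices,
    clamped to the frame so the region stays inside the video picture."""
    x0, y0 = top_left
    x1, y1 = bottom_right
    x0, y0 = max(0, x0), max(0, y0)
    x1, y1 = min(frame_w, x1), min(frame_h, y1)
    if x1 <= x0 or y1 <= y0:
        raise ValueError("bottom-right vertex must lie below and right of top-left")
    return (x0, y0, x1 - x0, y1 - y0)

# e.g. a 1920x1080 frame with corners picked at (100, 50) and (420, 290)
print(roi_from_corners((100, 50), (420, 290), 1920, 1080))  # (100, 50, 320, 240)
```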
S204, selecting one or more regions of interest in the target video based on the drag operation as the regions of interest corresponding to the current rendering period.
S205, receiving a special effect selection instruction sent by a user.
In this step, the electronic device may receive a special effect selection instruction sent by the user. Specifically, the user may issue a selection instruction for one of the effects in the filter-effect list. The filter-effect list may include N effects: effect 1, effect 2, ..., effect N, where N is a natural number greater than 1. The special effects in this embodiment may include, but are not limited to: oil painting, warm, delicious, colorless, nostalgic, retro, yellowed, and so on. In addition, the list may include a default option whose specific rendering manner can be preconfigured.
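As an illustration of resolving a selection instruction to a target effect, the sketch below uses a plain dictionary as the effect list; the sepia transform standing in for "nostalgic" is an assumption, since the patent does not define how each listed effect is implemented:

```python
import numpy as np

def nostalgic(patch):
    """Sepia-style color transform standing in for the 'nostalgic' effect.
    Assumes an H x W x 3 patch in RGB channel order."""
    k = np.array([[0.393, 0.769, 0.189],   # output R from input R, G, B
                  [0.349, 0.686, 0.168],   # output G
                  [0.272, 0.534, 0.131]])  # output B
    return np.clip(patch.astype(np.float64) @ k.T, 0, 255).astype(np.uint8)

# Hypothetical filter-effect list: effect 1 .. effect N, plus a default option
EFFECTS = {"nostalgic": nostalgic, "default": lambda p: p}

def target_effect(name):
    """Resolve a special-effect selection instruction to the target effect."""
    return EFFECTS.get(name, EFFECTS["default"])
```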
S206, responding to the special effect selection instruction, and determining a special effect as a target special effect from the N special effect options; wherein N is a natural number greater than 1.
S207, rendering the region of interest corresponding to the current rendering period based on the target special effect.
In this step, the electronic device may render the region of interest corresponding to the current rendering period based on the target special effect. Specifically, during rendering the electronic device extracts a time point within the current rendering period as the current rendering time, renders the region of interest corresponding to that time, and repeats the operation until the region of interest corresponding to each time point in the current rendering period has been rendered.
The video rendering method provided by this embodiment first receives a current rendering period selected by the user on the track axis corresponding to a target video, then determines the region of interest corresponding to the current rendering period in the target video, and finally renders that region of interest. That is, what is rendered is the region of interest corresponding to the current rendering period within the target video, not the entire target video and not the entire preview-window canvas. In existing video rendering methods, once a filter or special effect is applied, the whole canvas at the current moment or the whole picture of a single video participates in the filter and effect rendering. By selecting a current rendering period, determining a region of interest, and rendering that region, this embodiment solves the prior-art problem that the scope of filters and effects can only be the current canvas or a whole video within it and cannot satisfy finer-grained regional rendering. Moreover, the technical scheme of this embodiment is simple and convenient to implement, easy to popularize, and broadly applicable.
EXAMPLE III
Fig. 3 is a third flowchart of a video rendering method according to an embodiment of the present application. This embodiment further optimizes and extends the technical scheme above and can be combined with each of the optional implementations described. As shown in fig. 3, the video rendering method may include the steps of:
S301, receiving a current rendering period selected by a user on the track axis corresponding to a target video; wherein the current rendering period is the span from a start rendering time to an end rendering time.
S302, receiving an instruction sent by a user to manually select the region of interest.
S303, receiving a dragging operation of a user in a video frame corresponding to the current rendering time period in response to the instruction of manually selecting the region of interest.
S304, selecting one or more regions of interest in the target video based on the drag operation as the regions of interest corresponding to the current rendering period.
S305, receiving a special effect selection instruction sent by a user.
S306, responding to a special effect selection instruction, and determining a special effect as a target special effect in the N special effect options; wherein N is a natural number greater than 1.
S307, receiving a mode selection instruction sent by a user.
In this step, the electronic device may receive a mode selection instruction sent by the user. The mode selection instruction may be of two types: a first mode selection instruction and a second mode selection instruction. When the user sends the first mode selection instruction, the adding mode selected by the user is the target-video scope; when the user sends the second mode selection instruction, the adding mode selected by the user is the canvas scope.
Fig. 4 is a schematic structural diagram of the target-video scope according to an embodiment of the present application. As shown in fig. 4, the target-video scope in this embodiment means that the special effect selected by the user applies only to the target video. For example, assuming the target video is video 1 and the user selects the oil-painting effect, the region of interest is rendered in oil-painting style on video 1. In this mode, because the filter or special effect is attached to the video itself, the region of interest cannot exceed the display area of the target video.
Fig. 5 is a schematic structural diagram of the canvas scope provided in an embodiment of the present application. As shown in fig. 5, the canvas scope means that the special effect selected by the user acts on the entire preview-window canvas. For example, assuming the preview-window canvas contains video 1 and video 2 and the user selects the oil-painting effect, the region of interest is rendered in oil-painting style across both video 1 and video 2. In this mode, the region of interest can act on the entire preview-window canvas.
S308, responding to the mode selection instruction, and determining one adding mode as a target adding mode in two predetermined adding modes; wherein the two predetermined adding modes comprise: a target video scope and a canvas scope.
S309, rendering the region of interest corresponding to the current rendering period based on the target adding mode and the target special effect.
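The difference between the two adding modes can be sketched as a clipping decision; the mode strings and area tuples below are hypothetical, chosen only to illustrate the scopes described above:

```python
def clip(roi, bounds):
    """Clamp an (x, y, w, h) region to a bounding (x, y, w, h) area."""
    x, y, w, h = roi
    bx, by, bw, bh = bounds
    x0, y0 = max(x, bx), max(y, by)
    x1, y1 = min(x + w, bx + bw), min(y + h, by + bh)
    return (x0, y0, max(0, x1 - x0), max(0, y1 - y0))

def effective_roi(roi, add_mode, video_area, canvas_area):
    """Target-video scope confines the region of interest to the video's
    display area; canvas scope lets it act on the whole preview-window canvas."""
    return clip(roi, video_area if add_mode == "video" else canvas_area)

# e.g. a video displayed at (200, 100) sized 640x360 inside a 1280x720 canvas
print(effective_roi((500, 300, 600, 400), "video",
                    (200, 100, 640, 360), (0, 0, 1280, 720)))
# the region is clipped to the video's display area: (500, 300, 340, 160)
```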
The video rendering method provided by this embodiment first receives a current rendering period selected by the user on the track axis corresponding to a target video, then determines the region of interest corresponding to the current rendering period in the target video, and finally renders that region of interest. That is, what is rendered is the region of interest corresponding to the current rendering period within the target video, not the entire target video and not the entire preview-window canvas. In existing video rendering methods, once a filter or special effect is applied, the whole canvas at the current moment or the whole picture of a single video participates in the filter and effect rendering. By selecting a current rendering period, determining a region of interest, and rendering that region, this embodiment solves the prior-art problem that the scope of filters and effects can only be the current canvas or a whole video within it and cannot satisfy finer-grained regional rendering. Moreover, the technical scheme of this embodiment is simple and convenient to implement, easy to popularize, and broadly applicable.
Example four
Fig. 6 is a schematic structural diagram of a video rendering apparatus according to an embodiment of the present application. As shown in fig. 6, the apparatus 600 includes: a receiving module 601, a determining module 602 and a rendering module 603; wherein,
the receiving module 601 is configured to receive a current rendering time period selected by a user in a track axis corresponding to a target video; wherein the current rendering period is a period of time from a starting rendering time to an ending rendering time;
the determining module 602 is configured to determine a region of interest corresponding to the current rendering period in the target video;
the rendering module 603 is configured to render the region of interest corresponding to the current rendering time period.
Further, the receiving module 601 is further configured to receive a special effect selection instruction sent by the user; responding to the special effect selection instruction to determine a special effect as a target special effect in the N special effect options; wherein N is a natural number greater than 1;
the rendering module 603 is specifically configured to perform an operation of rendering the region of interest corresponding to the current rendering time period based on the target special effect.
Further, the receiving module 601 is further configured to receive a mode selection instruction sent by the user; responding to the mode selection instruction to determine one adding mode as a target adding mode in two predetermined adding modes; wherein the predetermined two addition modes include: a target video scope and a canvas scope;
the rendering module 603 is specifically configured to perform an operation of rendering the region of interest corresponding to the current rendering period based on the target adding mode and the target special effect.
Further, the rendering module 603 is specifically configured to, if the target special effect is added to the target video, render, in a display area of the target video, an area of interest corresponding to the current rendering time period based on the target special effect; and if the target special effect is added on a preview window canvas, rendering an area of interest corresponding to the current rendering time period on the preview window canvas based on the target special effect.
Further, the determining module 602 is specifically configured to receive an instruction sent by the user to manually select a region of interest; receive, in response to the instruction, the user's drag operation in the video frames corresponding to the current rendering period; and select one or more regions of interest in the target video, based on the drag operation, as the regions of interest corresponding to the current rendering period; or receive an instruction sent by the user to automatically select regions of interest, and, in response, extract one or more regions of interest from the video frames corresponding to the current rendering period as the regions of interest corresponding to the current rendering period.
Further, the rendering module 603 is specifically configured to extract a time point within the current rendering period as the current rendering time; render the region of interest corresponding to that time; and repeat the operation until the region of interest corresponding to each time point in the current rendering period is rendered.
The video rendering apparatus described above can execute the method provided by any embodiment of the present application and has the functional modules and beneficial effects corresponding to that method. For technical details not described in this embodiment, refer to the video rendering method provided in any embodiment of the present application.
EXAMPLE five
The present disclosure also provides an electronic device, a readable storage medium, and a computer program product according to embodiments of the present disclosure.
FIG. 7 illustrates a schematic block diagram of an example electronic device 700 that can be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions, are meant to be examples only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 7, the device 700 comprises a computing unit 701, which may perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 702 or a computer program loaded from a storage unit 708 into a Random Access Memory (RAM) 703. In the RAM 703, various programs and data required for the operation of the device 700 can also be stored. The computing unit 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
A number of components in the device 700 are connected to the I/O interface 705, including: an input unit 706 such as a keyboard, a mouse, or the like; an output unit 707 such as various types of displays, speakers, and the like; a storage unit 708 such as a magnetic disk, optical disk, or the like; and a communication unit 709 such as a network card, modem, wireless communication transceiver, etc. The communication unit 709 allows the device 700 to exchange information/data with other devices via a computer network, such as the internet, and/or various telecommunication networks.
Various implementations of the systems and techniques described above may be implemented in digital electronic circuitry, integrated circuitry, field-programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on a chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include: implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be special-purpose or general-purpose, receiving data and instructions from, and transmitting data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to a user; and a keyboard and a pointing device (e.g., a mouse or a trackball) by which a user may provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user can be received in any form, including acoustic, speech, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., as a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), blockchain networks, and the Internet.
The computer system may include clients and servers. A client and server are generally remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, also called a cloud computing server or cloud host; it is a host product in a cloud computing service system and overcomes the drawbacks of difficult management and weak service scalability found in traditional physical hosts and VPS services.
It should be understood that steps may be reordered, added, or deleted using the various flows shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in different orders, without limitation, as long as the desired results of the technical solutions disclosed herein can be achieved. In the technical scheme of the present disclosure, the collection, storage, and use of users' personal information comply with the provisions of relevant laws and regulations and do not violate public order and good morals.
The above detailed description should not be construed as limiting the scope of the disclosure. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations and substitutions may be made in accordance with design requirements and other factors. Any modification, equivalent replacement, and improvement made within the spirit and principle of the present disclosure should be included in the scope of protection of the present disclosure.
Claims (10)
1. A method of video rendering, the method comprising:
receiving a current rendering period selected by a user on the track axis corresponding to a target video; wherein the current rendering period is the span from a start rendering time to an end rendering time;
determining a region of interest corresponding to the current rendering period in the target video;
rendering the region of interest corresponding to the current rendering period, including:
extracting a time point within the current rendering period as the current rendering time;
rendering the region of interest corresponding to the current rendering time; repeatedly executing these operations until the region of interest corresponding to each time point in the current rendering period is rendered;
wherein the determining of the region of interest corresponding to the current rendering period in the target video comprises:
receiving an instruction sent by the user to manually select a region of interest; receiving, in response to the instruction, the user's drag operation in the video frames corresponding to the current rendering period; and selecting one or more regions of interest in the target video, based on the drag operation, as the regions of interest corresponding to the current rendering period;
or receiving an instruction sent by the user to automatically select regions of interest; and, in response to the instruction, extracting one or more regions of interest from the video frames corresponding to the current rendering period as the regions of interest corresponding to the current rendering period.
2. The method of claim 1, prior to rendering the region of interest corresponding to the current rendering period, further comprising:
receiving a special effect selection instruction sent by the user;
determining a special effect as a target special effect in the N special effect options in response to the special effect selection instruction; wherein N is a natural number greater than 1; and executing the operation of rendering the region of interest corresponding to the current rendering time interval based on the target special effect.
3. The method of claim 2, prior to rendering the region of interest corresponding to the current rendering period based on the target special effect, further comprising:
receiving a mode selection instruction sent by the user;
responding to the mode selection instruction to determine one adding mode as a target adding mode in two predetermined adding modes; wherein the predetermined two addition modes include: a target video scope and a canvas scope; and performing an operation of rendering the region of interest corresponding to the current rendering time period based on the target adding mode and the target special effect.
4. The method of claim 3, wherein rendering the region of interest corresponding to the current rendering period based on the target adding mode and the target special effect comprises:
if the target special effect is added to the target video, rendering an interesting region corresponding to the current rendering time period based on the target special effect in a display region of the target video;
and if the target special effect is added on a preview window canvas, rendering an interested area corresponding to the current rendering time period on the preview window canvas based on the target special effect.
5. A video rendering device, the device comprising: the device comprises a receiving module, a determining module and a rendering module; wherein,
the receiving module is used for receiving a current rendering period selected by a user on the track axis corresponding to the target video; wherein the current rendering period is the span from a start rendering time to an end rendering time;
the determining module is configured to determine, in the target video, the region of interest corresponding to the current rendering period;
the rendering module is used for rendering the region of interest corresponding to the current rendering period;
the determining module is specifically configured to receive an instruction sent by the user to manually select a region of interest; receive, in response to the instruction, the user's drag operation in the video frames corresponding to the current rendering period; and select one or more regions of interest in the target video, based on the drag operation, as the regions of interest corresponding to the current rendering period; or receive an instruction sent by the user to automatically select regions of interest, and, in response, extract one or more regions of interest from the video frames corresponding to the current rendering period as the regions of interest corresponding to the current rendering period;
the rendering module is specifically configured to extract a time point within the current rendering period as the current rendering time; render the region of interest corresponding to that time; and repeat these operations until the region of interest corresponding to each time point in the current rendering period is rendered.
6. The apparatus according to claim 5, wherein the receiving module is further configured to receive a special effect selection instruction sent by the user; determining a special effect as a target special effect in the N special effect options in response to the special effect selection instruction; wherein N is a natural number greater than 1;
the rendering module is specifically configured to perform an operation of rendering the region of interest corresponding to the current rendering period based on the target special effect.
7. The apparatus according to claim 6, wherein the receiving module is further configured to receive a mode selection instruction sent by the user; responding to the mode selection instruction to determine one adding mode as a target adding mode in two predetermined adding modes; wherein the predetermined two addition modes include: a target video scope and a canvas scope;
the rendering module is specifically configured to perform an operation of rendering the region of interest corresponding to the current rendering period based on the target adding mode and the target special effect.
8. The apparatus according to claim 7, wherein the rendering module is specifically configured to, if the target special effect is added to the target video, render, in a display area of the target video, a region of interest corresponding to the current rendering period based on the target special effect; and if the target special effect is added on a preview window canvas, rendering an area of interest corresponding to the current rendering time period on the preview window canvas based on the target special effect.
9. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-4.
10. A non-transitory computer readable storage medium having stored thereon computer instructions for causing the computer to perform the method of any one of claims 1-4.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210344075.6A CN114786051B (en) | 2022-03-31 | 2022-03-31 | Video rendering method and device, electronic equipment and storage medium |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210344075.6A CN114786051B (en) | 2022-03-31 | 2022-03-31 | Video rendering method and device, electronic equipment and storage medium |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114786051A CN114786051A (en) | 2022-07-22 |
CN114786051B true CN114786051B (en) | 2023-04-14 |
Family
ID=82427410
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210344075.6A Active CN114786051B (en) | 2022-03-31 | 2022-03-31 | Video rendering method and device, electronic equipment and storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114786051B (en) |
Family Cites Families (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20130024113A1 (en) * | 2011-07-22 | 2013-01-24 | Robert Bosch Gmbh | Selecting and Controlling the Density of Objects Rendered in Two-Dimensional and Three-Dimensional Navigation Maps |
EP3345184A1 (en) * | 2015-09-02 | 2018-07-11 | THOMSON Licensing | Method, apparatus and system for facilitating navigation in an extended scene |
US10360721B2 (en) * | 2016-05-26 | 2019-07-23 | Mediatek Inc. | Method and apparatus for signaling region of interests |
US10169843B1 (en) * | 2017-11-20 | 2019-01-01 | Advanced Micro Devices, Inc. | Temporal foveated rendering using motion estimation |
CN111756996A (en) * | 2020-06-18 | 2020-10-09 | 影石创新科技股份有限公司 | Video processing method, video processing apparatus, electronic device, and computer-readable storage medium |
- 2022-03-31: CN application CN202210344075.6A filed; granted as CN114786051B (en), status Active
Also Published As
Publication number | Publication date |
---|---|
CN114786051A (en) | 2022-07-22 |
Legal Events
- PB01: Publication
- SE01: Entry into force of request for substantive examination
- GR01: Patent grant