CN113542808B - Video processing method, apparatus, device and computer readable medium - Google Patents


Info

Publication number
CN113542808B
Authority
CN
China
Prior art keywords
image frame
pixel
weighting
pixel point
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010285036.4A
Other languages
Chinese (zh)
Other versions
CN113542808A (en)
Inventor
邱晔
么敬国
Current Assignee
New Oriental Education Technology Group Co ltd
Original Assignee
New Oriental Education Technology Group Co ltd
Priority date
Filing date
Publication date
Application filed by New Oriental Education Technology Group Co ltd
Priority to CN202010285036.4A
Publication of CN113542808A
Application granted
Publication of CN113542808B


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

A video processing method, apparatus, device, and computer readable medium are disclosed. The video processing method comprises the following steps: determining a first weighting parameter for each second pixel point of a second image frame in a second video based on a first size for a first video and a second size for the second video, wherein the first size and the second size are different; performing a rounding operation on the first weighting parameter based on a predefined rounding parameter to determine a second weighting parameter for each second pixel point in the second image frame; for each second image frame in the second video, determining a pixel value for each second pixel point in the second image frame based on the second weighting parameter, the rounding parameter, and a pixel value for at least one first pixel point in a first image frame of the first video that corresponds to the second image frame; and outputting the second image frame according to the pixel value of each second pixel point in the second image frame.

Description

Video processing method, apparatus, device and computer readable medium
Technical Field
The present disclosure relates to the field of video processing, and in particular, to a video processing method, apparatus, device, and computer readable medium.
Background
With the development of technology, video is used in more and more scenarios, and the resolution of video images continues to improve. In practical application scenarios such as video playback, video communication, and live streaming, multiple video streams may exist. In this case, a large amount of computing resources is consumed when scaling the videos.
Disclosure of Invention
To this end, the present disclosure provides a video processing method, apparatus, device, and computer readable medium.
According to an aspect of the present disclosure, there is provided a video processing method including: determining a first weighting parameter for each second pixel point of a second image frame in a second video based on a first size for a first video and a second size for the second video, wherein the first size and the second size are different; performing a rounding operation on the first weighting parameter based on a predefined rounding parameter to determine a second weighting parameter for each second pixel point in the second image frame; for each second image frame in the second video, determining a pixel value for each second pixel point in the second image frame based on the second weighting parameter, the rounding parameter, and a pixel value for at least one first pixel point in a first image frame of the first video that corresponds to the second image frame; and outputting the second image frame according to the pixel value of each second pixel point in the second image frame.
In some embodiments, determining the first weighting parameter for each second pixel point of the second image frame in the second video based on the first size for the first video and the second size for the second video comprises: for each second pixel point in the second image frame, determining a location of a mapping point corresponding to the second pixel point in a first image frame of the first video corresponding to the second image frame based on the first size and the second size; determining at least one first pixel point used for determining the second pixel point in the first image frame according to the position of the mapping point, and determining the first weighting parameter according to the mapping point and the position of each first pixel point in the at least one first pixel point used for the second pixel point based on a predefined mapping relation, wherein the first weighting parameter comprises at least one first weighting element respectively used for each first pixel point in the at least one first pixel point.
In some embodiments, the second weighting parameters include at least one second weighting element for each of the at least one first pixel point, respectively, and determining the second weighting parameters for each of the second pixel points in the second image frame based on the predefined rounding parameters and the first weighting parameters includes: for each first weighting element of the at least one first weighting element, multiplying the first weighting element by the rounding parameter to obtain a second weighting element, wherein the second weighting element is an integer.
In some embodiments, the rounding parameter is 2 to the power of n (that is, 2^n), where n is an integer greater than 1, and multiplying the first weighting element by the rounding parameter to obtain the second weighting element comprises: shifting the first weighting element left by n bits to obtain the second weighting element.
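The fixed-point scheme above can be sketched as follows. This is a minimal illustration, not the patent's implementation: the value n = 7, the function names, and the sample weights are assumptions, and for fractional weights the "left shift by n bits" amounts to multiplying by 2^n before rounding to an integer.

```python
N_BITS = 7                 # n: the rounding parameter is 2**n (assumed value)
ROUND_PARAM = 1 << N_BITS  # 2**n = 128

def to_second_weights(first_weights):
    """Turn fractional first weighting elements into integer second elements."""
    # Multiplying by 2**n is the arithmetic equivalent of a left shift by n bits
    return [int(round(w * ROUND_PARAM)) for w in first_weights]

def weighted_pixel(pixels, second_weights):
    """Integer weighted average followed by the inverse rounding operation."""
    acc = sum(p * w for p, w in zip(pixels, second_weights))
    return acc >> N_BITS   # right shift by n undoes the earlier scaling

second = to_second_weights([0.25, 0.5, 0.25])
print(second)                                   # [32, 64, 32]
print(weighted_pixel([100, 120, 140], second))  # 120
```

Because every per-pixel multiplication now uses integers, the floating-point work is confined to the one-time weight conversion.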
In some embodiments, the second weighting parameter includes at least one second weighting element for each of the at least one first pixel point, and determining the pixel value for each second pixel point in the second image frame based on the second weighting parameter, the rounding parameter, and the pixel value for at least one first pixel point in a first image frame of the first video corresponding to the second image frame includes: for each second pixel point in the second image frame, performing a weighted average on the pixel values of the at least one first pixel point for the second pixel point based on the at least one second weighting element for the second pixel point to determine a second weighted average for the at least one first pixel point for the second pixel point; performing an inverse rounding operation on the second weighted average based on the rounding parameter to determine a first weighted average corresponding to the second weighted average; and determining the first weighted average as the pixel value of the second pixel point.
In some embodiments, performing the weighted average on the pixel values of the at least one first pixel point for the second pixel point based on the at least one second weighting element for the second pixel point to determine the second weighted average comprises: performing a weighted average on the pixel values of the first pixel points in the horizontal direction based on the second weighting elements in the horizontal direction to determine second weighted averages in the horizontal direction; and performing a weighted average on the second weighted averages in the horizontal direction based on the second weighting elements in the vertical direction to obtain the second weighted average for the at least one first pixel point for the second pixel point.
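The two-pass (separable) computation can be sketched as below. The 4×4 patch size, the n = 7 fixed-point scale, and the sample weights are illustrative assumptions; each pass scales the result by 2^n, so the inverse rounding after both passes is a right shift by 2n.

```python
N = 7  # assumed rounding exponent; integer weights are scaled by 2**N

def separable_weighted_average(patch, wx, wy):
    """Horizontal pass then vertical pass over a 4x4 patch of pixel values.

    wx and wy are integer second weighting elements; each list sums to 2**N.
    """
    # Horizontal pass: one weighted sum per row of the patch
    rows = [sum(p * w for p, w in zip(row, wx)) for row in patch]
    # Vertical pass over the horizontal results
    acc = sum(r * w for r, w in zip(rows, wy))
    # Each pass multiplied by 2**N, so shift right by 2*N to undo both
    return acc >> (2 * N)

patch = [[100] * 4 for _ in range(4)]  # a flat 4x4 region
wx = wy = [8, 56, 56, 8]               # assumed integer weights summing to 128
print(separable_weighted_average(patch, wx, wy))  # 100
```

A flat patch reproduces its own value, which is a quick sanity check that the weights and the final shift are consistent.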
In some embodiments, the method further comprises: in response to the second video being scaled from the second size to a third size, determining a third weighting parameter for each second pixel point of the scaled second image frame in the second video based on the first size and the third size; performing a rounding operation on the third weighting parameter based on the predefined rounding parameter to determine a fourth weighting parameter for each second pixel point in the scaled second image frame; for each scaled second image frame in the second video, determining a pixel value for each second pixel point in the scaled second image frame based on the fourth weighting parameter and a pixel value for at least one first pixel point in a first image frame of the first video corresponding to the scaled second image frame; and outputting the scaled second image frame according to the pixel value of each second pixel point in the scaled second image frame.
According to another aspect of the present disclosure, there is also provided a video processing apparatus including: a first weighting parameter determination unit configured to determine a first weighting parameter for each second pixel point of a second image frame in a second video based on a first size for a first video and a second size for the second video, wherein the first size and the second size are different; a second weighting parameter determination unit configured to perform a rounding operation on the first weighting parameter based on a predefined rounding parameter to determine a second weighting parameter for each second pixel point in the second image frame; and a second image frame determination unit configured to: for each second image frame in the second video, determine a pixel value for each second pixel point in the second image frame based on the second weighting parameter, the rounding parameter, and a pixel value for at least one first pixel point in a first image frame of the first video that corresponds to the second image frame; and output the second image frame according to the pixel value of each second pixel point in the second image frame.
In some embodiments, the first weighting parameter determination unit is configured to: for each second pixel point in the second image frame, determining a location of a mapping point corresponding to the second pixel point in a first image frame of the first video corresponding to the second image frame based on the first size and the second size; determining at least one first pixel point used for determining the second pixel point in the first image frame according to the position of the mapping point, and determining the first weighting parameter according to the mapping point and the position of each first pixel point in the at least one first pixel point used for the second pixel point based on a predefined mapping relation, wherein the first weighting parameter comprises at least one first weighting element respectively used for each first pixel point in the at least one first pixel point.
In some embodiments, the second weighting parameter comprises at least one second weighting element for each of the at least one first pixel point, respectively, the second weighting parameter determination unit being configured to: for each first weighting element of the at least one first weighting element, multiplying the first weighting element by the rounding parameter to obtain a second weighting element, wherein the second weighting element is an integer.
In some embodiments, the rounding parameter is 2 to the power of n (that is, 2^n), where n is an integer greater than 1, and multiplying the first weighting element by the rounding parameter to obtain the second weighting element comprises: shifting the first weighting element left by n bits to obtain the second weighting element.
In some embodiments, the second weighting parameter comprises at least one second weighting element for each of the at least one first pixel point, respectively, the second image frame determination unit being configured to: for each second pixel point in the second image frame, perform a weighted average on the pixel values of the at least one first pixel point for the second pixel point based on the at least one second weighting element for the second pixel point to determine a second weighted average for the at least one first pixel point for the second pixel point; perform an inverse rounding operation on the second weighted average based on the rounding parameter to determine a first weighted average corresponding to the second weighted average; and determine the first weighted average as the pixel value of the second pixel point.
In some embodiments, performing the weighted average on the pixel values of the at least one first pixel point for the second pixel point based on the at least one second weighting element for the second pixel point to determine the second weighted average comprises: performing a weighted average on the pixel values of the first pixel points in the horizontal direction based on the second weighting elements in the horizontal direction to determine second weighted averages in the horizontal direction; and performing a weighted average on the second weighted averages in the horizontal direction based on the second weighting elements in the vertical direction to obtain the second weighted average for the at least one first pixel point for the second pixel point.
According to still another aspect of the present disclosure, there is also provided a video processing apparatus including: a processor; and a memory in which computer readable program instructions are stored, wherein the computer readable program instructions, when executed by the processor, perform a video processing method as described above.
According to yet another aspect of the present disclosure, there is also provided a computer-readable storage medium having stored thereon computer-readable instructions which, when executed by a computer, perform the video processing method as described above.
By utilizing the video processing method, apparatus, device, and computer readable medium described above, the computing resources required for video scaling can be saved and video processing performance can be improved by optimizing the steps in the video scaling process.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required for the description of the embodiments will be briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present disclosure, and that other drawings may be obtained from these drawings by one of ordinary skill in the art without creative effort. The following drawings are not intended to be drawn to scale in actual dimensions, emphasis instead being placed upon illustrating the principles of the disclosure.
FIG. 1 illustrates an exemplary application scenario of a video processing system in accordance with the present application;
FIG. 2 shows a schematic flow chart of a video processing method according to an embodiment of the disclosure;
FIG. 3 illustrates a schematic diagram of a process of determining mapping points according to an embodiment of the present disclosure;
FIG. 4 shows the curve shape of the function W(t) in the case of a = -0.5;
FIG. 5 illustrates an example of 2 sets of BGRA data parallel operations in accordance with embodiments of the present disclosure;
FIG. 6 shows a schematic flow of another video processing process according to an embodiment of the present disclosure;
fig. 7 shows a schematic block diagram of a video processing apparatus according to an embodiment of the present application; and
fig. 8 shows a schematic block diagram of a computing device according to an embodiment of the disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the present disclosure more apparent, the present disclosure is further described in detail by the following examples. It will be apparent that the described embodiments are merely some, but not all embodiments of the present disclosure. Based on the embodiments in this disclosure, all other embodiments that a person of ordinary skill in the art would obtain without making any inventive effort are within the scope of protection of this disclosure.
In general, the size of the source video received by a computing device is fixed, while the size of the video output (e.g., displayed) by the computing device may be determined in response to user input. Thus, the size of the output target video and the size of the source video may be different. In the case where the target video and the source video differ in size, scaling processing needs to be performed on the source video to meet the output requirement.
Video scaling may be achieved using a variety of interpolation algorithms; for example, nearest neighbor interpolation, bilinear interpolation, or bicubic interpolation may be used. With such interpolation algorithms, a good video scaling effect can be obtained. However, in order to output the target video, each frame in the source video needs to be processed using the interpolation algorithm, so the scaling process is computationally intensive, slow, and consumes considerable computing resources.
In the case that the actual application scenario includes multiple videos, the process of scaling the multiple videos in real time consumes more resources.
In order to improve the process of video scaling processing, the present disclosure provides a new video processing method.
Fig. 1 shows an exemplary application scenario of a video processing system according to the present application. As shown in fig. 1, the video processing system 100 may include a client 110, a network 120, a server 130, and a database 140.
The client 110 may be, for example, a computer 110-1, a cell phone 110-2 as shown in FIG. 1. It will be appreciated that in fact, the client may be any other type of electronic device capable of performing data processing, which may include, but is not limited to, a desktop computer, a notebook computer, a tablet computer, a smart phone, a smart home device, a wearable device, and the like.
The client provided according to the present application may be used to receive a first video as the source video. The first video may comprise at least one first image frame, and the size of the first video is the first size. The size of a video referred to herein may be the size, in pixels, of each image frame in the video; thus, a larger video size means a higher resolution for each image frame in the video, that is, a higher resolution for the video.
The client may capture the first video to be processed by an image capture device (e.g., camera, video camera, etc.) provided on the client. For another example, the client may also acquire the first video from a separately provided image capture device (e.g., camera, video camera, scanner, etc.). As another example, the client may also receive the first video from the server via the network.
The video processing method provided by the present disclosure may be performed using the video processing system 100 shown in fig. 1 to scale a first video of a first size to a second video of a second size. Wherein the first size and the second size may be different. In some embodiments, the video processing method provided by the present disclosure may be performed by a processing unit of a client. In some implementations, a client may perform the video processing methods provided by the present disclosure using an application built into the client. In other implementations, the client may perform the video processing methods provided by the present disclosure by invoking an application program stored external to the client.
In other embodiments, the client transmits the first image frames in the received first video to the server 130 via the network 120, and the server 130 performs the video processing method described above to scale the first video of the first size to the second video of the second size. In some implementations, the server 130 may perform the video processing method described above using an application built into the server. In other implementations, the server 130 may perform the video processing method described above by invoking an application program stored external to the server.
The second video of the second size obtained by the method can be used for outputting and displaying on a display or outputting to other electronic equipment for subsequent further processing.
Network 120 may be a single network or a combination of at least two different networks. For example, network 120 may include, but is not limited to, one or a combination of several of a local area network, a wide area network, a public network, a private network, and the like.
The server 130 may be a single server or a group of servers, each server within the group being connected via a wired or wireless network. A server farm may be centralized, such as a data center, or distributed. The server 130 may be local or remote.
Database 140 may refer broadly to a device having a storage function. The database 140 is mainly used to store various data utilized, generated, and output in the operation of the client 110 and the server 130. Database 140 may be local or remote. The database 140 may include various memories, such as random access memory (RAM), read-only memory (ROM), and the like. The above-mentioned storage devices are merely examples, and the storage devices that may be used by the system are not limited thereto.
Database 140 may be interconnected or in communication with server 130 or a portion thereof via network 120, or directly with server 130, or a combination thereof.
In some embodiments, database 140 may be a stand-alone device. In other embodiments, database 140 may also be integrated in at least one of client 110 and server 130. For example, database 140 may be located on client 110 or server 130. For another example, database 140 may be distributed, with one portion being located on client 110 and another portion being located on server 130.
The flow of the video processing method provided by the application will be described in detail hereinafter.
Fig. 2 shows a schematic flow chart of a video processing method according to an embodiment of the present disclosure. With the video processing method shown in fig. 2, a first video of a first size can be scaled to a second video of a second size. The first video includes at least one first image frame of a first size and the second video includes at least one second image frame of a second size. Each first image frame comprises a plurality of first pixel points which are sequentially arranged. Each second image frame comprises a plurality of second pixel points which are sequentially arranged. Wherein each second image frame is obtained by scaling one of the first image frames in the first video.
As shown in fig. 2, in step S202, a first weighting parameter for each second pixel point of a second image frame in a second video may be determined based on a first size for the first video and a second size for the second video, wherein the first size and the second size may be different.
It will be appreciated that in general, the first size of a first video that is a source video is fixed, while the second size of a second video that is a target video is determined according to the actual application scenario. For example, if the second video is displayed on a display device, such as a display screen, the second size of the second video may be compatible with the size of the display device. For example, the second size of the second video may be the same size as a display window on the display device for displaying the second video. In some examples, in the case of a full screen display, the size of the display window is the same as the size of the display screen of the display device, and the second size may also be the same as the size of the display screen. In other examples, a size of a display window for displaying the second video may be adjusted in response to the user input, wherein the size of the display window is smaller than a size of a display screen of the display device. In this case, the second size is the same as the size of the display window.
In the case where the first size of the first video and the second size of the second video are determined, the first weighting parameter may be determined based on the first size and the second size. Wherein the first weighting parameter may be used to scale a first image frame in the first video.
In some embodiments, the first weighting parameter may be used to weight average a pixel value of at least one first pixel point in a first image frame in the first video to obtain a pixel value of each second pixel point in a corresponding second image frame in the second video.
For example, in the case where the first size and the second size are the same, the number of pixels in the second image frame and the corresponding first image frame is the same. Therefore, the pixel value information of each pixel point in the first image frame may be directly output as the second image frame. However, in the case where the first size and the second size are different, the number of second pixel points included in the second image frame is different from the number of first pixel points included in the corresponding first image frame. Therefore, in order to scale the first image frame to obtain the corresponding second image frame, pixel value information of a greater or lesser number of second pixel points needs to be generated according to the pixel value information of each first pixel point in the first image frame. Thus, the first image frame may be processed using an interpolation algorithm, such as a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, a bicubic interpolation algorithm, or the like.
The principles of the present disclosure will be described below by taking bicubic interpolation as an example. It is to be understood that the present disclosure is not limited to the particular form of interpolation algorithm, and that other interpolation algorithms may be employed by those skilled in the art to practice the principles of the present disclosure without departing from the principles of the present disclosure.
In some implementations, step S202 may include: for each second pixel point in the second image frame, a location of a mapping point corresponding to the second pixel point in a first image frame in the first video corresponding to the second image frame may be determined based on the first size and the second size.
Fig. 3 shows a schematic diagram of a process of determining a mapping point according to an embodiment of the present disclosure.
As shown in fig. 3, taking the size of the first image frame as srcW×srcH and the size of the second image frame as dstW×dstH as an example, a point p(px, py) in the second image frame and its mapping point P(Px, Py) in the first image frame satisfy the following correspondence:
Px/srcW=px/dstW(1)
Py/srcH=py/dstH(2)
where srcW is the width of the first image frame, srcH is the height of the first image frame, dstW is the width of the second image frame, dstH is the height of the second image frame, px is the abscissa of the point p in the second image frame, py is the ordinate of the point p in the second image frame, Px is the abscissa of the mapping point P in the first image frame, and Py is the ordinate of the mapping point P in the first image frame.
Based on the correspondence described above, the coordinates of the mapping point P(Px, Py) in the first image frame may be determined from the coordinates of the point p(px, py) in the second image frame, that is:
Px=px*(srcW/dstW)(3)
Py=py*(srcH/dstH)(4)
As shown in fig. 3, in the case where the size of the first image frame is 5×5 and the size of the second image frame is 3×3, the mapping point of the pixel point p(1, 1) in the second image frame can be determined to have coordinates of approximately P(1.6, 1.6) in the first image frame.
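Equations (3) and (4) translate directly into code. The sketch below is an illustration under assumed names (the patent does not specify an implementation); it reproduces the 5×5 → 3×3 example above.

```python
def map_point(px, py, src_w, src_h, dst_w, dst_h):
    """Map a point (px, py) of the second image frame back into the first.

    Implements Px = px * (srcW / dstW) and Py = py * (srcH / dstH).
    """
    return px * src_w / dst_w, py * src_h / dst_h

# The example from the text: first frame 5x5, second frame 3x3, point p(1, 1)
Px, Py = map_point(1, 1, 5, 5, 3, 3)
print(Px, Py)  # both equal 5/3, the mapping point of the example
```

The mapping point generally has fractional coordinates, which is why the surrounding first pixel points and an interpolation weight for each are needed.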
Referring back to fig. 2, after determining the position of the mapping point, step S202 may further include determining at least one pixel point in the first image frame for determining the second pixel point according to the position of the mapping point.
As described above, in the case where the first size and the second size are different, it is necessary to generate pixel value information of a greater or lesser number of second pixel points using pixel value information of each first pixel point in the first image frame.
The pixel value of the corresponding second pixel point may then be determined from the pixel values of at least two first pixel points around the mapping point determined in the above manner.
In some implementations, the first weighting parameter may be determined based on a predefined mapping relationship from the mapping point and a location of each of at least one first pixel point for the second pixel point. Wherein the first weighting parameter comprises at least one first weighting element for each of the at least one first pixel point, respectively.
The principles of the present disclosure are described below by taking the example that the predefined mapping relationship is a bicubic interpolation algorithm.
In the case of using the bicubic interpolation algorithm, the pixel value of the corresponding second pixel point may be determined using the pixel values of the 16 pixel points around the mapping point. First, the pixel point in the first image frame to which the mapping point P(x, y) belongs may be determined. For example, in the example shown in fig. 3, the point P(1.67, 1.67) falls within the range of the pixel point P(1, 1). Then, the 16 pixel points used to determine the pixel value of the corresponding second pixel point may be determined based on the pixel point P(1, 1). In general, for any pixel point P(X, Y), the 16 pixel points P(X-1, Y-1), P(X-1, Y), P(X-1, Y+1), P(X-1, Y+2), P(X, Y-1), P(X, Y), P(X, Y+1), P(X, Y+2), P(X+1, Y-1), P(X+1, Y), P(X+1, Y+1), P(X+1, Y+2), P(X+2, Y-1), P(X+2, Y), P(X+2, Y+1), P(X+2, Y+2) may be determined as the 16 pixel points used to determine the pixel value of the corresponding second pixel point, where X and Y are positive integers greater than 0.
In some examples, for a mapping point P(x, y), the pixel value f(x, y) corresponding to the mapping point, that is, the pixel value of the second pixel point in the second image frame corresponding to the mapping point, may be determined by:

f(x, y) = Σ_{i=0}^{3} Σ_{j=0}^{3} f(x_i, y_j) * W(x - x_i) * W(y - y_j)  (5)

where f(x, y) is the pixel value of the mapping point P(x, y), i and j are index numbers, and f(x_i, y_j) is the pixel value of the point P(x_i, y_j). Here x_i and y_j denote the abscissas and ordinates of the 16 first pixel points determined based on the mapping point P(x, y). Taking the mapping point P(1.67, 1.67) as an example, x_0 = 0, x_1 = 1, x_2 = 2, x_3 = 3, y_0 = 0, y_1 = 1, y_2 = 2, y_3 = 3.
The first weighting parameter may be determined by equation (6). As shown in equation (5), the product W(x - x_i) * W(y - y_j) is the weight applied to each pixel value f(x_i, y_j), i.e., the first weighting element for the pixel point P(x_i, y_j). Equation (6) shows an exemplary form of W, the standard bicubic convolution kernel:

W(t) = (a+2)|t|^3 - (a+3)|t|^2 + 1,    for |t| <= 1
W(t) = a|t|^3 - 5a|t|^2 + 8a|t| - 4a,  for 1 < |t| < 2
W(t) = 0,                              otherwise      (6)

Here, W(x - x_i) means substituting (x - x_i) for the argument t in equation (6); similarly, W(y - y_j) means substituting (y - y_j) for t.
The parameter a in equation (6) is a predefined parameter. In some examples, a = -0.5. It can be appreciated that a person skilled in the art can also adjust the value of a according to the actual situation, so as to improve the scaling effect of the video. The curve of the function W(t) for a = -0.5 is shown in fig. 4. As shown in fig. 4, W(t) attains its maximum at t = 0, and the value of W(t) decreases as t moves away from t = 0. Based on this curve shape, when (x - x_i) and (y - y_j) are substituted for the argument t, a point P(x_i, y_j) closer to the mapping point P(x, y) yields larger values of W(x - x_i) and W(y - y_j), while a point P(x_i, y_j) farther from P(x, y) yields smaller values, which may even be negative.
The W(t) curve shown in fig. 4, or any other function with similar properties, can be used to determine the weight of each pixel value f(x_i, y_j): a point P(x_i, y_j) closer to the point P(x, y) has a higher weight, and a point P(x_i, y_j) farther from the point P(x, y) has a lower, possibly even negative, weight.
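As a sketch, the kernel of equation (6) might be implemented as follows (Python; the piecewise form is the standard bicubic convolution kernel, with a = -0.5 as in the text):

```python
def W(t, a=-0.5):
    """Bicubic convolution kernel of equation (6)."""
    t = abs(t)
    if t <= 1:
        return (a + 2) * t ** 3 - (a + 3) * t ** 2 + 1
    if t < 2:
        return a * t ** 3 - 5 * a * t ** 2 + 8 * a * t - 4 * a
    return 0.0
```

Note that W(0) = 1 is the maximum, and for 1 < |t| < 2 the kernel becomes negative (e.g. W(1.5) = -0.0625 with a = -0.5), matching the curve shape described for fig. 4.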
Equation (5) may be used to weight average the pixel values of the first pixel points around the mapped point based on the first weighting parameter.
Expanded directly, equation (5) gives the following pseudocode:
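The pseudocode itself appears only as a figure in the original; an equivalent sketch of the double loop in Python might be (frame, W, and the coordinate names are illustrative):

```python
def interpolate_2d(frame, x, y, W):
    """Direct expansion of equation (5): a 4x4 double loop around the
    mapping point (x, y). frame[row][col] holds source pixel values."""
    x0, y0 = int(x) - 1, int(y) - 1      # top-left of the 4x4 neighborhood
    total = 0.0
    for j in range(4):                   # outer loop over rows y_j
        for i in range(4):               # inner loop over columns x_i
            xi, yj = x0 + i, y0 + j
            total += frame[yj][xi] * W(x - xi) * W(y - yj)
    return total
```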
the above procedure involves two loops and is therefore disadvantageous for fast calculations. To save computational efficiency in this process, the above-described two-cycle can be converted into two one-dimensional computational processes.
In some embodiments, for point P (x, y), 16 points around the point may be grouped in rows or columns. For example, four first pixel points located in the first row among the 16 points may be processed using equation (7):
h_j(x) = Σ_{i=0}^{3} f(x_i, y_j) * W(x - x_i)  (7)

where j = 0, 1, 2, or 3, and h_j(x) is the result of weighting the four first pixel points of the j-th row by W(x - x_i) and summing.

Then, h_j(x) may be processed using equation (8):

f(x, y) = Σ_{j=0}^{3} h_j(x) * W(y - y_j)  (8)

Based on equations (7) and (8), the pixel value of the point P(x, y) given by equation (5) can be computed as two one-dimensional passes.
It will be appreciated that each of the first weighting elements in the first weighting parameter determined using equation (6) above may be a floating point number. Floating point computation consumes a significant amount of computing resources. In order to optimize the floating point computations involved in the above process, the present disclosure proposes the following method.
In step S204, a rounding operation may be performed on the first weighting parameter based on a predefined rounding parameter, and a second weighting parameter for each second pixel point in the second image frame is determined. Wherein the second weighting parameter comprises at least one second weighting element for each of the at least one first pixel point, respectively.
For example, for each of the first weighting elements described above, the rounding operation may include multiplying the first weighting element by the rounding parameter to obtain a second weighting element, wherein the second weighting element is an integer.
In some implementations, the rounding parameter may be the n-th power of 2, where n is an integer greater than 1. In the case where the calculated numbers are processed in binary, multiplying the first weighting element by the rounding parameter to obtain the second weighting element may comprise shifting the first weighting element left by n bits.
For a binary number, multiplying it by a power of 2 can be accomplished by shifting it left. Using this approach, complex multiplication operations in the calculation process can be avoided.
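A one-line illustration of this equivalence in Python (the values are arbitrary):

```python
w = 5
assert w * 64 == w << 6        # multiplying by 2**6 is a left shift by 6 bits
assert (w * 64) >> 6 == w      # dividing by 2**6 is a right shift by 6 bits
```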
In some implementations, the video scaling process provided by the present application may be implemented using the SSE instruction set. The SSE instruction set supports a bit width of 128 bits, where 128 bits (bits) are equal to 16 bytes (bytes) and also equal to 8 words (words).
In the process of implementing video scaling with the SSE instruction set, since the image data of the video consists of R (red), G (green), B (blue) and A (alpha, transparency) channels, each RGBA value ranges from 0 to 255, while the range of values of a word is [-32768, 32767]. Thus, when the computation in the video scaling process is implemented with SSE instructions, 8 words of data can be computed in parallel each time, which is equivalent to operating on 2 sets of BGRA data in parallel.
Fig. 5 illustrates an example of 2 sets of BGRA data parallel operations according to embodiments of the present disclosure.
As shown in fig. 5, the second weighting element determined by the foregoing method may be multiplied by the pixel value RGBA of the corresponding first pixel point, respectively, to obtain a calculation result of the RGBA value for the first pixel point.
As can be seen from equation (5), in order to obtain the pixel value of the corresponding second pixel point, the calculation results of the first pixel points need to be added together. So that the intermediate results do not overflow, the calculation result of the RGBA value of each pixel point may occupy at most half of the value range of one word, namely [-16384, 16383]. Therefore, considering this maximum range together with the value range of the RGBA data, the maximum value of a weighting element is at most 16383/255 ≈ 64.25. Thus, the rounding parameter may be determined to be 64, i.e., 2 to the 6th power.
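The bound above can be checked with a short calculation (Python sketch; variable names are illustrative):

```python
# Half of a signed 16-bit word's positive range, and the RGBA maximum.
word_half_max = 16383
rgba_max = 255
max_weight = word_half_max / rgba_max      # about 64.25
# The largest power of two not exceeding this bound is 64 = 2**6.
rounding_parameter = 1 << (int(max_weight).bit_length() - 1)
```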
It will be appreciated that one skilled in the art may set the rounding parameter to other values without departing from the principles of the disclosure.
Referring back to fig. 2, in step S206, for each second image frame in the second video, a pixel value of each second pixel point in the second image frame may be determined based on the second weighting parameter, the rounding parameter, and a pixel value of at least one first pixel point in a first image frame in the first video that corresponds to the second image frame.
As described above, to simplify floating point number operations during computation, the first weighting parameters may be processed based on rounding parameters to obtain integer second weighting parameters. Step S206 may include, for each second pixel in the second image frame, weighted averaging pixel values for at least one first pixel of the second pixel based on at least one second weighting element for the second pixel to determine a second weighted average for at least one first pixel of the second pixel.
In some embodiments, the first weighting parameter in equation (5) may be replaced with the second weighting parameter determined in step S204 to obtain a second weighted average for at least one first pixel of the second pixel.
In some implementations, weighted averaging the pixel values for the at least one first pixel of the second pixel based on the at least one second weighted element for the second pixel to determine a second weighted average for the at least one first pixel of the second pixel may include: and carrying out weighted average on the pixel values of the first pixel points in the horizontal direction based on the second weighted element in the horizontal direction so as to determine a second weighted average in the horizontal direction. For example, a second weighted average in the horizontal direction may be determined based on equation (7).
The second weighted average in the horizontal direction is weighted averaged based on the second weighted element in the vertical direction to obtain a second weighted average for at least one first pixel of the second pixel. For example, the second weighted average may be determined based on equation (8).
Then, step S206 may include performing an inverse rounding operation on the second weighted average based on the rounding parameter to determine a first weighted average corresponding to the second weighted average. The first weighted average may be determined as the pixel value of the second pixel point.
As described above, in order to simplify the floating point operations in the calculation process, the first weighting parameter is rounded into the second weighting parameter in step S204. In this case, the second weighted average obtained by weighted-averaging at least one first pixel point in the first image frame using the second weighting parameter is not the true pixel value. Thus, the inverse rounding operation may include dividing the second weighted average by the rounding parameter to obtain the first weighted average corresponding to the second weighted average. In the binary case, where the rounding parameter is the n-th power of 2, the inverse rounding operation may comprise shifting the second weighted average right by n bits.
Taking the rounding parameter 64 as an example, the rounding operation may include shifting the first weighting element left by 6 bits to obtain the corresponding second weighting element. Correspondingly, the inverse rounding operation may include shifting the second weighted average right by 6 bits to obtain the corresponding first weighted average. With this method, the large amount of computing resources required by floating point computation is avoided, and the final result can be recovered simply once the accumulation is done. Then, the second image frame may be output according to the pixel value of each second pixel point in the second image frame.
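Putting the rounding and inverse rounding together, a minimal Python sketch of the fixed-point pipeline might look like this (n = 6, i.e. rounding parameter 64; the function name and structure are illustrative):

```python
N = 6  # rounding parameter is 2**N = 64

def fixed_point_weighted_sum(pixel_values, float_weights):
    """Round the weights to integers (multiply by 2**N), accumulate in
    integer arithmetic, then undo the scaling with a right shift."""
    int_weights = [round(w * (1 << N)) for w in float_weights]  # rounding
    acc = sum(p * iw for p, iw in zip(pixel_values, int_weights))
    return acc >> N                                             # inverse rounding

# e.g. averaging two channel values with weights 0.5 each gives 150:
value = fixed_point_weighted_sum([100, 200], [0.5, 0.5])
```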
With the video processing method provided by the present disclosure, a weighting parameter for scaling can be determined for the second video to be output, and each image frame is processed with the calculated weighting parameter. By rounding the floating point numbers during video processing, a large number of floating point operations involved in the calculation are avoided. In addition, by reducing the two-dimensional loop in the computation to one-dimensional passes, a significant amount of computing resources is also saved.
Fig. 6 shows a schematic flow of another video processing procedure according to an embodiment of the present disclosure.
It is considered that the resolution (i.e., size) of the second video does not change frequently when the second video is output. Thus, the second weighting parameter for the second video may be unchanged without receiving an indication that the second size of the second video has changed. Thus, in some embodiments, the second weighting parameters obtained in step S204 shown in fig. 2 may be stored in an array.
Based on the stability of the size of the second video, the second weighting parameter for each second pixel point does not need to be recalculated for each second image frame in the second video before the size of the second video changes, which greatly reduces the amount of computation per frame.
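For example, the integer weights for every destination column could be computed once and stored in an array, then reused for every frame until the output size changes (a sketch under the stated assumptions; the table-building function and the kernel W are illustrative, not from the patent):

```python
def build_weight_table(src_w, dst_w, W, n=6):
    """Precompute, for each destination column px, the four integer
    weights (scaled by 2**n) used by the bicubic horizontal pass."""
    table = []
    for px in range(dst_w):
        x = px * (src_w / dst_w)          # mapping point, equation (3)
        x0 = int(x) - 1                   # leftmost of the four taps
        table.append([round(W(x - (x0 + i)) * (1 << n)) for i in range(4)])
    return table                          # cached until dst_w changes
```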
In step S602, a third weighting parameter for each second pixel point of the scaled second image frame in the second video may be determined based on the first size and the third size in response to the second video scaling from the second size to the third size. Wherein, the third weighting parameter may be determined based on the first size and the third size using a similar procedure to step S202 in fig. 2, as long as the second size in the method shown in fig. 2 is replaced with the third size. Wherein the third weighting parameter may be used to scale the first image frame in the first video to a scaled second image frame having a third size.
In some implementations, step S602 may include: for each second pixel point in the scaled second image frame, determining, based on the first size and the third size, the position of the mapping point corresponding to the second pixel point in the first image frame in the first video that corresponds to the scaled second image frame. Taking the size of the first image frame as srcW×srcH and the size of the scaled second image frame as dstW'×dstH' as an example, a point p'(x, y) in the scaled second image frame and its mapping point P'(x, y) in the first image frame satisfy the following correspondence:
where srcW is the width of the first image frame, srcH is the height of the first image frame, dstW' is the width of the scaled second image frame, dstH' is the height of the scaled second image frame, px' and py' are the abscissa and ordinate of the point p'(x, y) in the scaled second image frame, and Px' and Py' are the abscissa and ordinate of the mapping point P'(x, y) in the first image frame.
Based on the above correspondence, the coordinates of the mapping point P'(x, y) in the first image frame may be determined from the coordinates of the point p'(x, y) in the scaled second image frame, that is, Px' = px' * (srcW / dstW') and Py' = py' * (srcH / dstH').
After determining the position of the mapping point, step S602 may further include determining at least one pixel point in the first image frame for determining the second pixel point in the scaled second image frame according to the position of the mapping point.
The pixel value of the second pixel point in the corresponding scaled second image frame may then be determined from the pixel values of at least two first pixel points around the mapping point determined in the above manner.
In some implementations, the third weighting parameter may be determined based on a predefined mapping relationship from the mapping point and a location of each of at least one first pixel for the second pixel in the scaled second image frame. Wherein the third weighting parameter comprises at least one third weighting element for each of the at least one first pixel point, respectively.
In the case of using the bicubic interpolation algorithm as the predefined mapping relationship, the pixel values f'(x_i, y_j) of the 16 pixel points P'(x_i, y_j) around the mapping point may be used to determine the pixel value of the corresponding second pixel point, where i = 0, 1, 2, 3 and j = 0, 1, 2, 3. Based on equations (5) and (6), the product W(x - x_i) * W(y - y_j) may be determined as the weight of each pixel value f'(x_i, y_j), i.e., the third weighting element for the pixel point P'(x_i, y_j).
In step S604, a rounding operation may be performed on the third weighting parameter based on a predefined rounding parameter, and a fourth weighting parameter for each second pixel point in the scaled second image frame may be determined. Wherein the fourth weighting parameter comprises at least one fourth weighting element for each of the at least one first pixel point, respectively.
For example, for each of the third weighting elements described above, the rounding operation may include multiplying the third weighting element by the rounding parameter to obtain a fourth weighting element, wherein the fourth weighting element is an integer. In some implementations, the rounding parameter may be a power n of 2, where n is an integer greater than 1. In the case of binary processing of the calculated number, multiplying the third weighting element by the rounding parameter to obtain a fourth weighting element may comprise shifting the third weighting element left by n bits to obtain the fourth weighting element. In some embodiments, the rounding parameter may be determined to be 64, i.e., 6 th power of 2. It will be appreciated that one skilled in the art may set the rounding parameter to other values without departing from the principles of the disclosure.
In step S606, for each of the scaled second image frames in the second video, a pixel value of each second pixel point in the scaled second image frame may be determined based on the fourth weighting parameter and a pixel value of at least one first pixel point in the first video corresponding to the first image frame of the scaled second image frame. The scaled second image frame may then be output based on the pixel value of each second pixel point in the scaled second image frame.
In some embodiments, step S606 may include, for each second pixel in each second image frame in the scaled second video, weighted averaging pixel values for at least one first pixel for the second pixel based on at least one fourth weighting element for the second pixel to determine a fourth weighted average for at least one first pixel for the second pixel.
In some embodiments, the fourth weighting parameter determined in step S604 may be substituted for W(x - x_i) * W(y - y_j) in equation (5) to obtain the fourth weighted average of the at least one first pixel point for the second pixel point.
In some implementations, weighted averaging the pixel values for the at least one first pixel of the second pixel based on the at least one fourth weighted element for the second pixel to determine a fourth weighted average for the at least one first pixel of the second pixel may include: and carrying out weighted average on the pixel values of the first pixel points in the horizontal direction based on the fourth weighted element in the horizontal direction so as to determine a fourth weighted average in the horizontal direction. For example, a fourth weighted average in the horizontal direction may be determined based on equation (7).
The fourth weighted average in the horizontal direction is then weighted-averaged based on the fourth weighting element in the vertical direction to obtain the fourth weighted average of the at least one first pixel point for the second pixel point. For example, the fourth weighted average described above may be determined based on equation (8).
Then, step S606 may include performing an inverse rounding operation on the fourth weighted average based on the rounding parameter to determine a third weighted average corresponding to the fourth weighted average. The third weighted average may be determined as the pixel value of the second pixel point.
As described above, in order to simplify the floating point operations in the calculation process, the third weighting parameter is rounded into the fourth weighting parameter in step S604. In this case, the fourth weighted average obtained by weighted-averaging at least one first pixel point in the first image frame using the fourth weighting parameter is not the true pixel value. Thus, the inverse rounding operation may include dividing the fourth weighted average by the rounding parameter to obtain the third weighted average corresponding to the fourth weighted average. In the binary case, where the rounding parameter is the n-th power of 2, the inverse rounding operation may comprise shifting the fourth weighted average right by n bits.
Taking the rounding parameter 64 as an example, the inverse rounding operation may include shifting the fourth weighted average right by 6 bits to obtain the corresponding third weighted average. With this method, the large amount of computing resources required by floating point computation is avoided, and the final result can be recovered simply once the accumulation is done. The second image frame may then be output based on the pixel value of each second pixel point in the second image frame in the scaled second video.
Fig. 7 shows a schematic block diagram of a video processing apparatus according to an embodiment of the present application. As shown in fig. 7, the video processing apparatus 700 may include a first weighting parameter determination unit 710, a second weighting parameter determination unit 720, and a second image frame determination unit 730. The units shown in fig. 7 may be implemented by electronic units or modules integrated in the same electronic device, or may be implemented by electronic units or modules dispersed in different electronic devices.
The first weighting parameter determination unit 710 may be configured to determine a first weighting parameter for each second pixel point of the second image frame in the second video based on a first size for the first video and a second size for the second video, wherein the first size and the second size may be different.
In the case where the first size of the first video and the second size of the second video are determined, the first weighting parameter may be determined based on the first size and the second size. Wherein the first weighting parameter may be used to scale a first image frame in the first video.
In some embodiments, the first weighting parameter may be used to weight average a pixel value of at least one first pixel point in a first image frame in the first video to obtain a pixel value of each second pixel point in a corresponding second image frame in the second video.
For example, in the case where the first size and the second size are the same, the number of pixels in the second image frame and the corresponding first image frame is the same. Therefore, the pixel value information of each pixel point in the first image frame may be directly output as the second image frame. However, in the case where the first size and the second size are different, the number of second pixel points included in the second image frame is different from the number of first pixel points included in the corresponding first image frame. Therefore, in order to scale the first image frame to obtain the corresponding second image frame, pixel value information of a greater or lesser number of second pixel points needs to be generated according to the pixel value information of each first pixel point in the first image frame. Thus, the first image frame may be processed using an interpolation algorithm, such as a nearest neighbor interpolation algorithm, a bilinear interpolation algorithm, a bicubic interpolation algorithm, or the like.
In some implementations, the first weighting parameter determination unit 710 may be configured to determine, for each second pixel point in the second image frame, a position of a mapping point corresponding to the second pixel point in a first image frame corresponding to the second image frame in the first video based on the first size and the second size.
After determining the position of the mapping point, the first weighting parameter determination unit 710 may be further configured to determine at least one pixel point in the first image frame for determining the second pixel point according to the position of the mapping point.
As described above, in the case where the first size and the second size are different, it is necessary to generate pixel value information of a greater or lesser number of second pixel points using pixel value information of each first pixel point in the first image frame.
The pixel value of the corresponding second pixel point may then be determined from the pixel values of at least two first pixel points around the mapping point determined in the above manner.
In some implementations, the first weighting parameter may be determined based on a predefined mapping relationship from the mapping point and a location of each of at least one first pixel point for the second pixel point. Wherein the first weighting parameter comprises at least one first weighting element for each of the at least one first pixel point, respectively.
The principles of the present disclosure are described below by taking the example that the predefined mapping relationship is a bicubic interpolation algorithm.
In the case of using the bicubic interpolation algorithm, the pixel value of the corresponding second pixel point may be determined using the pixel values of the 16 pixel points around the mapping point. First, the pixel point in the first image frame to which the mapping point P(x, y) belongs may be determined. For example, in the example shown in fig. 3, the point P(1.67, 1.67) falls within the range of the pixel point P(1, 1). Then, the 16 pixel points used to determine the pixel value of the corresponding second pixel point may be determined based on the pixel point P(1, 1). In general, for any pixel point P(X, Y), the 16 pixel points P(X-1, Y-1), P(X-1, Y), P(X-1, Y+1), P(X-1, Y+2), P(X, Y-1), P(X, Y), P(X, Y+1), P(X, Y+2), P(X+1, Y-1), P(X+1, Y), P(X+1, Y+1), P(X+1, Y+2), P(X+2, Y-1), P(X+2, Y), P(X+2, Y+1), P(X+2, Y+2) may be determined as the 16 pixel points used to determine the pixel value of the corresponding second pixel point, where X and Y are positive integers greater than 0.
In some examples, for the mapping point P (x, y), the pixel value f (x, y) corresponding to the mapping point, that is, the pixel value of the second pixel point in the second image frame corresponding to the mapping point, may be determined by formulas (5), (6).
In some embodiments, for the point P(x, y), the 16 points around the point may be grouped in rows or columns. For example, the four first pixel points located in the first row among the 16 points may be processed using equation (7). Then, the h_j(x) obtained by equation (7) may be processed using equation (8). Based on equations (7) and (8), the pixel value of the point P(x, y) given by equation (5) can be computed as two one-dimensional passes.
It will be appreciated that each of the first weighting elements in the first weighting parameter determined using equation (6) above may be a floating point number. Floating point computation consumes a significant amount of computing resources. In order to optimize the floating point computations involved in the above process, the present disclosure proposes the following method.
The second weighting parameter determination unit 720 may be configured to perform a rounding operation on the first weighting parameter based on a predefined rounding parameter, and determine a second weighting parameter for each second pixel point in the second image frame. Wherein the second weighting parameter comprises at least one second weighting element for each of the at least one first pixel point, respectively.
For example, for each of the first weighting elements described above, the rounding operation may include multiplying the first weighting element by the rounding parameter to obtain a second weighting element, wherein the second weighting element is an integer.
In some implementations, the rounding parameter may be the n-th power of 2, where n is an integer greater than 1. In the case where the calculated numbers are processed in binary, multiplying the first weighting element by the rounding parameter to obtain the second weighting element may comprise shifting the first weighting element left by n bits.
For a binary number, multiplying it by a power of 2 can be accomplished by shifting it left. Using this approach, complex multiplication operations in the calculation process can be avoided.
In some implementations, the video scaling process provided by the present application may be implemented using the SSE instruction set. The SSE instruction set supports a bit width of 128 bits, where 128 bits (bits) are equal to 16 bytes (bytes) and also equal to 8 words (words).
In the process of implementing video scaling with the SSE instruction set, since the image data of the video consists of R (red), G (green), B (blue) and A (alpha, transparency) channels, each RGBA value ranges from 0 to 255, while the range of values of a word is [-32768, 32767]. Thus, when the computation in the video scaling process is implemented with SSE instructions, 8 words of data can be computed in parallel each time, which is equivalent to operating on 2 sets of BGRA data in parallel. The rounding parameter may be determined to be 64, i.e., 2 to the 6th power.
It will be appreciated that one skilled in the art may set the rounding parameter to other values without departing from the principles of the disclosure.
For each second image frame in the second video, the second image frame determination unit 730 may be configured to determine a pixel value of each second pixel point in the second image frame based on the second weighting parameter, the rounding parameter, and a pixel value of at least one first pixel point in the first image frame corresponding to the second image frame in the first video.
As described above, to simplify floating point operations during computation, the first weighting parameter may be processed based on the rounding parameter to obtain an integer second weighting parameter. The second image frame determining unit 730 may be configured to, for each second pixel point in the second image frame, perform a weighted average on the pixel values of the at least one first pixel point for the second pixel point based on the at least one second weighting element for the second pixel point, to determine the second weighted average of the at least one first pixel point for the second pixel point.
In some embodiments, the first weighting parameter in equation (5) may be replaced with the second weighting parameter determined in step S204 to obtain the second weighted average of the at least one first pixel point for the second pixel point.
In some implementations, performing a weighted average of the pixel values of the at least one first pixel point for the second pixel point based on the at least one second weighting element for the second pixel point to determine the second weighted average may include: performing a weighted average of the pixel values of the first pixel points in the horizontal direction based on the second weighting elements in the horizontal direction, to determine second weighted averages in the horizontal direction. For example, a second weighted average in the horizontal direction may be determined based on equation (7).
The second weighted averages in the horizontal direction are then weighted-averaged based on the second weighting elements in the vertical direction, to obtain the second weighted average of the at least one first pixel point for the second pixel point. For example, the second weighted average may be determined based on equation (8).
Then, the second image frame determining unit 730 may be configured to perform an inverse rounding operation on the second weighted average based on the rounding parameter to determine a first weighted average corresponding to the second weighted average. The first weighted average may be determined as the pixel value of the second pixel point.
With the video processing apparatus provided by the present disclosure, weighting parameters for scaling can be determined for the second video to be output, and each image frame can be processed with the pre-calculated weighting parameters. By rounding the floating-point weights to integers during video processing, the large number of floating-point operations otherwise involved in the calculation is avoided. In addition, by reducing the computation from a two-dimensional loop to one-dimensional loops, a significant amount of computing resources is also saved.
FIG. 8 illustrates a schematic block diagram of a computing device. The video processing apparatus shown in fig. 7 may be implemented using the computing device shown in fig. 8. As shown in fig. 8, computing device 800 may include a bus 810, one or more CPUs 820, a read-only memory (ROM) 830, a random access memory (RAM) 840, a communication port 850 connected to a network, an input/output component 860, a hard disk 870, and the like. A storage device in computing device 800, such as ROM 830 or hard disk 870, may store various data or files used in processing and/or communication, as well as program instructions executed by the CPU. Computing device 800 may also include a user interface 880. For example, the results output by the video processing apparatus described above may be displayed to the user through the user interface 880. Of course, the architecture shown in FIG. 8 is merely exemplary, and one or more components of the computing device shown in FIG. 8 may be omitted or added as practical needs dictate when implementing different devices.
According to one aspect of the present disclosure, the video processing method provided by the present disclosure may be implemented using program instructions stored in a computer-readable medium. A computer-readable medium may take many forms, including tangible storage media, carrier wave media, and physical transmission media. Stable storage media include optical or magnetic disks and other storage systems, used in computers or similar devices, that can implement the system components depicted in the figures. Unstable storage media include dynamic memory, such as the main memory of a computer platform. Tangible transmission media include coaxial cables, copper wire, and optical fibers, such as the wires that form a bus within a computer system. Carrier wave transmission media can convey electrical, electromagnetic, acoustic, or optical signals, which may be generated by radio frequency or infrared data communication. Typical computer-readable media include hard disks, floppy disks, magnetic tape, or any other magnetic medium; CD-ROM, DVD, DVD-ROM, or any other optical medium; punch cards or any other physical storage medium bearing a hole pattern; RAM, PROM, EPROM, FLASH-EPROM, or any other memory chip or cartridge; a carrier wave conveying data or instructions, and cables or connections conveying such a carrier wave; and any other medium from which a computer can read program code and/or data. Many of these forms of computer-readable media are involved in carrying one or more sequences of instructions to a processor for execution and in delivering one or more results.
A "module" in the present application may refer to logic embedded in hardware or firmware, or to a set of software instructions. The term "module" as referred to herein can be implemented by software and/or hardware modules, and can be stored in any type of computer-readable non-transitory medium or other storage device. In some embodiments, a software module may be compiled and linked into an executable program. Software modules herein may respond to information conveyed by themselves or by other modules, and/or may respond upon detection of certain events or interrupts. A software module may be provided on a computer-readable medium and may be configured to perform operations on a computing device (e.g., processor 220). The computer-readable medium here may be an optical disc, a digital versatile disc, a flash drive, a magnetic disk, or any other kind of tangible medium. Software modules may also be obtained by digital download (where digital download also includes data stored in a compressed or installation package that requires decompression or decoding before execution). The code of a software module may be stored, in part or in whole, in a memory device of the computing device executing the operations and applied in the operations of that computing device. Software instructions may be embedded in firmware, such as erasable programmable read-only memory (EPROM). A hardware module may comprise connected logic elements, such as gates and flip-flops, and/or programmable elements, such as a programmable gate array or a processor. The functions of the modules or computing devices described herein are preferably implemented as software modules, but may also be represented in hardware or firmware. In general, the modules described herein are logical modules and are not limited by their specific physical form or memory location.
A module may be combined with other modules or divided into a series of sub-modules.
Unless otherwise defined, all terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. It will be further understood that terms, such as those defined in commonly used dictionaries, should be interpreted as having a meaning that is consistent with their meaning in the context of the relevant art and will not be interpreted in an idealized or overly formal sense unless expressly so defined herein.
The foregoing is illustrative of the present invention and is not to be construed as limiting thereof. Although a few exemplary embodiments of this invention have been described, those skilled in the art will readily appreciate that many modifications are possible in the exemplary embodiments without materially departing from the novel teachings and advantages of this invention. Accordingly, all such modifications to the disclosed embodiments, as well as other embodiments, are intended to be included within the scope of the appended claims. The invention is defined by the claims and their equivalents.

Claims (11)

1. A video processing method, comprising:
determining a first weighting parameter for each second pixel point of a second image frame in a second video based on a first size for the first video and a second size for the second video, wherein the first size and the second size are different;
rounding the first weighting parameters based on predefined rounding parameters, determining second weighting parameters for each second pixel point in the second image frame;
for each second image frame in the second video,
determining a pixel value of each second pixel point in the second image frame based on the second weighting parameter, the rounding parameter, and a pixel value of at least one first pixel point in a first image frame of the first video corresponding to the second image frame; and
outputting the second image frame according to the pixel value of each second pixel point in the second image frame, wherein the second weighting parameter comprises at least one second weighting element for each first pixel point in the at least one first pixel point, and determining the pixel value of each second pixel point in the second image frame comprises:
for each second pixel point in the second image frame,
performing a weighted average of the pixel values of the first pixel points in the horizontal direction based on the second weighting elements in the horizontal direction to determine a second weighted average in the horizontal direction;
performing weighted average on the second weighted average value in the horizontal direction based on the second weighted element in the vertical direction to obtain a second weighted average value of at least one first pixel point for the second pixel point;
performing an inverse rounding operation on the second weighted average based on the rounding parameter to determine a first weighted average corresponding to the second weighted average; and
determining the first weighted average value as the pixel value of the second pixel point.
2. The video processing method of claim 1, wherein determining the first weighting parameter for each second pixel point of the second image frame in the second video based on the first size for the first video and the second size for the second video comprises:
for each second pixel point in the second image frame,
determining a location of a mapping point corresponding to the second pixel point in a first image frame of the first video corresponding to the second image frame based on the first size and the second size;
determining at least one first pixel point in the first image frame for determining the second pixel point according to the position of the mapping point,
determining the first weighting parameter according to the mapping point and the position of each first pixel point in at least one first pixel point for the second pixel point based on a predefined mapping relation, wherein the first weighting parameter comprises at least one first weighting element for each first pixel point in the at least one first pixel point.
3. The video processing method of claim 1, wherein the second weighting parameters include at least one second weighting element for each of the at least one first pixel points, respectively, and determining the second weighting parameters for each of the second pixel points in the second image frame based on predefined rounding parameters and the first weighting parameters includes:
for each first weighting element of the at least one first weighting element, multiplying the first weighting element by the rounding parameter to obtain a second weighting element, wherein the second weighting element is an integer.
4. A video processing method as defined in claim 3, wherein the rounding parameter is the n-th power of 2, n is an integer greater than 1, and multiplying the first weighting element by the rounding parameter to obtain the second weighting element comprises:
shifting the first weighting element left by n bits to obtain the second weighting element.
5. The video processing method of claim 1, further comprising:
determining a third weighting parameter for each second pixel of the scaled second image frame in the second video based on the first size and the third size in response to the second video scaling from the second size to the third size;
rounding the third weighting parameters based on predefined rounding parameters, and determining fourth weighting parameters for each second pixel point in the scaled second image frame;
for each of said scaled second image frames in said second video,
determining a pixel value of each second pixel in the scaled second image frame based on the fourth weighting parameter and a pixel value of at least one first pixel in the first video corresponding to the scaled second image frame;
outputting the scaled second image frame according to the pixel value of each second pixel point in the scaled second image frame.
6. A video processing apparatus comprising:
a first weighting parameter determination unit configured to determine a first weighting parameter for each second pixel point of a second image frame in a second video based on a first size for the first video and a second size for the second video, wherein the first size and the second size are different;
a second weighting parameter determination unit configured to perform a rounding operation on the first weighting parameter based on a predefined rounding parameter to determine a second weighting parameter for each second pixel point in the second image frame;
a second image frame determination unit configured to:
for each second image frame in the second video,
determining a pixel value of each second pixel point in the second image frame based on the second weighting parameter, the rounding parameter, and a pixel value of at least one first pixel point in a first image frame of the first video corresponding to the second image frame; and
outputting the second image frame according to the pixel value of each second pixel point in the second image frame,
wherein the second weighting parameter comprises at least one second weighting element for each of the at least one first pixel point, respectively, the second image frame determination unit being configured to:
for each second pixel point in the second image frame,
performing weighted average on the pixel values of the first pixel points in the horizontal direction based on the second weighted element in the horizontal direction to determine a second weighted average in the horizontal direction;
performing a weighted average of the second weighted averages in the horizontal direction based on the second weighting elements in the vertical direction to obtain a second weighted average of at least one first pixel point for the second pixel point;
performing an inverse rounding operation on the second weighted average based on the rounding parameter to determine a first weighted average corresponding to the second weighted average; and
determining the first weighted average value as the pixel value of the second pixel point.
7. The video processing apparatus according to claim 6, wherein the first weighting parameter determination unit is configured to:
for each second pixel point in the second image frame,
determining a location of a mapping point corresponding to the second pixel point in a first image frame of the first video corresponding to the second image frame based on the first size and the second size;
determining at least one first pixel point in the first image frame for determining the second pixel point according to the position of the mapping point,
determining the first weighting parameter according to the mapping point and the position of each first pixel point in at least one first pixel point for the second pixel point based on a predefined mapping relation, wherein the first weighting parameter comprises at least one first weighting element for each first pixel point in the at least one first pixel point.
8. The video processing apparatus of claim 6, wherein the second weighting parameter includes at least one second weighting element for each of the at least one first pixel points, respectively, the second weighting parameter determination unit configured to:
for each first weighting element of the at least one first weighting element, multiplying the first weighting element by the rounding parameter to obtain a second weighting element, wherein the second weighting element is an integer.
9. The video processing apparatus of claim 8, wherein the rounding parameter is the n-th power of 2, n is an integer greater than 1, and multiplying the first weighting element by the rounding parameter to obtain the second weighting element comprises:
the first weighting element is shifted left by n bits to obtain the second weighting element.
10. A video processing apparatus comprising:
a processor; and
a memory in which computer-readable program instructions are stored,
wherein the video processing method according to any of claims 1-5 is performed when the computer readable program instructions are executed by the processor.
11. A computer readable storage medium having stored thereon computer readable instructions which, when executed by a computer, perform the video processing method of any of claims 1-5.
CN202010285036.4A 2020-04-13 2020-04-13 Video processing method, apparatus, device and computer readable medium Active CN113542808B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010285036.4A CN113542808B (en) 2020-04-13 2020-04-13 Video processing method, apparatus, device and computer readable medium


Publications (2)

Publication Number Publication Date
CN113542808A CN113542808A (en) 2021-10-22
CN113542808B true CN113542808B (en) 2023-09-29

Family

ID=78119926

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010285036.4A Active CN113542808B (en) 2020-04-13 2020-04-13 Video processing method, apparatus, device and computer readable medium

Country Status (1)

Country Link
CN (1) CN113542808B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10555035B2 (en) * 2017-06-09 2020-02-04 Disney Enterprises, Inc. High-speed parallel engine for processing file-based high-resolution images
CN110223232A (en) * 2019-06-06 2019-09-10 电子科技大学 A kind of video image amplifying method based on bilinear interpolation algorithm



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant