CN115334235A - Video processing method, device, terminal equipment and storage medium - Google Patents


Info

Publication number
CN115334235A
Authority
CN
China
Prior art keywords: image, parameter, images, input, combined
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210767579.9A
Other languages
Chinese (zh)
Inventor
张强强
韦桂锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Xian Novastar Electronic Technology Co Ltd
Original Assignee
Xian Novastar Electronic Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Xian Novastar Electronic Technology Co Ltd filed Critical Xian Novastar Electronic Technology Co Ltd
Priority to CN202210767579.9A priority Critical patent/CN115334235A/en
Publication of CN115334235A publication Critical patent/CN115334235A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 5/00 Details of television systems
    • H04N 5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N 5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N 5/265 Mixing

Abstract

The application relates to the technical field of video processing, and provides a video processing method, a video processing device, a terminal device, and a storage medium. The video processing method comprises the following steps: acquiring an input image from an input video stream; acquiring preset image processing parameters, where the image processing parameters include at least one of a first parameter and a second parameter; performing image processing on one or more frames of images in the input image according to the first parameter to obtain images to be combined, and/or generating the images to be combined according to the second parameter; and combining the images to be combined to obtain an output video stream. Embodiments of the application can increase or decrease the frame rate of a video in different ways, so that the output video stream is adapted to different scenes.

Description

Video processing method and device, terminal equipment and storage medium
Technical Field
The present application belongs to the field of video processing technologies, and in particular, to a video processing method, an apparatus, a terminal device, and a storage medium.
Background
Current video shooting includes outdoor location shooting, green-screen shooting, and indoor virtual shooting. Outdoor location shooting is difficult: to capture different scenes, the crew must travel repeatedly between locations, which increases logistics, labor, and time costs. Green-screen shooting, in turn, carries high post-production costs. Indoor virtual shooting is therefore becoming increasingly popular.
Indoor virtual shooting renders the scene picture on a display screen in real time, so that actors and scenery are captured together in a single pass. This reduces the background processing needed in post-production and thereby lowers post-production costs. Indoor virtual shooting places high demands on the display material of the scene; after comprehensive comparison, the LED display screen stands out with high brightness and a wide color gamut, and the scene content it displays is close to the natural scene seen by the human eye.
An LED display screen is spliced together from one or more cabinets, which can be flexibly combined into a large-scale LED display screen. In indoor virtual shooting, this enables a complete background-wall display and 360-degree shooting without dead angles. It can support multiple cameras shooting the same picture from different angles at the same time, and can also support multiple different cameras shooting different pictures at the same time.
Videos shot by different cameras, and the scene pictures displayed on the LED display screen, are typically obtained through video processing. In related video processing technologies, both the frame rate and the content of the output video stream are tied to the input video stream, which makes these technologies difficult to adapt to different application scenes.
Disclosure of Invention
The embodiments of the present application provide a video processing method, a video processing device, a terminal device, and a storage medium, which can solve the problem of poor scene adaptability in existing video processing approaches.
A first aspect of an embodiment of the present application provides a video processing method, including:
acquiring an input image from an input video stream;
acquiring preset image processing parameters, wherein the image processing parameters comprise at least one of a first parameter and a second parameter;
performing image processing on one or more frames of images in the input image according to the first parameter to obtain an image to be combined; and/or generating the image to be combined according to the second parameter;
and combining the images to be combined to obtain an output video stream.
A video processing apparatus provided in a second aspect of an embodiment of the present application includes:
an image acquisition unit for acquiring an input image from an input video stream;
a parameter acquisition unit, configured to acquire preset image processing parameters, where the image processing parameters include at least one of a first parameter and a second parameter;
the image processing unit is used for carrying out image processing on one or more frames of images in the input image according to the first parameter to obtain an image to be combined; and/or generating the image to be combined according to the second parameter;
and the video processing unit is used for combining the images to be combined to obtain an output video stream.
A third aspect of the embodiments of the present application provides a terminal device, including a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor implements the steps of the video processing method when executing the computer program.
A fourth aspect of the embodiments of the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the steps of the video processing method.
A fifth aspect of embodiments of the present application provides a computer program product, which, when run on a terminal device, causes the terminal device to execute the video processing method described in the first aspect.
In the embodiments of the application, an input image is acquired from an input video stream, and preset image processing parameters are acquired, the image processing parameters including at least one of a first parameter and a second parameter. One or more frames of images in the input image are then processed according to the first parameter to obtain images to be combined, and/or images to be combined are generated according to the second parameter, and the images to be combined are combined to obtain an output video stream. As a result, the terminal device can generate an image even without an input video stream, and can also obtain a frame of the output video stream by processing one frame of the input video stream, realizing generation of both the video stream and the video content. Meanwhile, the frame rate and content of the output video stream need not be determined by the input video stream, so the output video stream obtained by this processing can be adapted to different application scenes.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments or of the prior art are briefly introduced below. Obviously, the drawings in the following description are only some embodiments of the present application; other drawings can be obtained from them by those skilled in the art without creative effort.
Fig. 1 is a schematic flow chart illustrating an implementation of a video processing method according to an embodiment of the present application;
fig. 2 is a schematic diagram of a video processing flow provided by an embodiment of the present application;
fig. 3 is a schematic diagram of a control interface of a controller of a terminal device according to an embodiment of the present application;
fig. 4 is a schematic diagram of a first video processing manner provided in the embodiment of the present application;
FIG. 5 is a schematic view of a control interface of the controller corresponding to the first video processing mode shown in FIG. 4;
fig. 6 is a schematic diagram of a second video processing manner provided in the embodiment of the present application;
FIG. 7 is a schematic view of a control interface of the controller corresponding to the second video processing mode shown in FIG. 6;
fig. 8 is a schematic diagram of a third video processing manner according to an embodiment of the present application;
FIG. 9 is a schematic diagram of a control interface of a controller corresponding to the third video processing method shown in FIG. 8;
fig. 10 is a schematic diagram of a fourth video processing manner provided in the embodiment of the present application;
FIG. 11 is a schematic view of a control interface of the controller corresponding to the fourth video processing mode shown in FIG. 10;
FIG. 12 is a schematic diagram of a software control interface provided by an embodiment of the present application;
fig. 13 is a schematic structural diagram of a video processing apparatus according to an embodiment of the present application;
fig. 14 is a schematic structural diagram of a terminal device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the present application more clearly understood, the present application is further described in detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the present application and are not intended to limit the present application. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any inventive step, are intended to be protected by the present application.
In order to explain the technical means of the present application, the following description will be given by way of specific examples.
Fig. 1 shows a schematic implementation flow diagram of a video processing method provided in an embodiment of the present application, where the method can be applied to a terminal device and is applicable to a situation where the adaptability of an output video stream needs to be improved.
The terminal device may be an intelligent device such as a computer or a mobile phone, or a dedicated device for processing video.
Specifically, the video processing method may include the following steps S101 to S104.
In step S101, an input image is acquired from an input video stream.
The input video stream refers to video data received by the terminal device. The terminal device may perform data analysis after acquiring the input video stream, and capture one or more frames of images in the input video stream as input images.
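For illustration only (the patent text contains no code), the following Python sketch shows one way the frame-capture step could look. The use of OpenCV for decoding and all function names here are assumptions, not part of the disclosed method.

```python
# Hedged sketch: capture one or more frames from an input video stream.
# OpenCV (cv2) is an assumed decoding backend; the patent does not name one.
import cv2

def acquire_input_images(stream_source, max_frames=4):
    """Read up to max_frames frames from a video stream as input images."""
    capture = cv2.VideoCapture(stream_source)
    frames = []
    while len(frames) < max_frames:
        ok, frame = capture.read()
        if not ok:  # stream ended or a read failed
            break
        frames.append(frame)
    capture.release()
    return frames

# Usage (hypothetical source): frames = acquire_input_images("input.mp4")
```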
In some embodiments of the present application, the terminal device may extract at least one frame image from one or more input video streams, respectively, as an input image. Each frame of image in the input video stream may be an image in an initial video stream transmitted by one source device, or may be a composite image obtained by splicing images in initial video streams transmitted by a plurality of source devices.
Specifically, the terminal device may include multiple types of interfaces, and each interface may be connected to a different input device to receive the input video stream transmitted by that device through the interface. The interface types connecting different input devices to the terminal device may be the same or different, and may be, for example, a High Definition Multimedia Interface (HDMI), a DisplayPort (DP) interface, a Serial Digital Interface (SDI), and the like, which is not limited in this application. The input device may include, but is not limited to, a computer, a mobile phone, a set-top box, and the like, which is also not limited in this application.
Fig. 2 shows a schematic diagram of a video processing procedure according to the present application.
Please refer to fig. 2. In some implementations, the input device can be a source device. The source device transmits its initial video stream, which contains image A0, to the terminal device; after the terminal device receives the initial video stream transmitted by the source device, it can use the received initial video stream as the input video stream. In other embodiments, the input device may be a pre-stage device. The pre-stage device is an intermediate device that processes the initial video streams transmitted by source devices: it can splice the images B0, C0, D0, and E0 contained in the initial video streams transmitted by multiple source devices into a composite image, form a processed video stream from the composite images, and transmit that processed video stream to the terminal device. After receiving the processed video stream, the terminal device can use it as the input video stream.
Accordingly, the input image acquired by the terminal device may be an image in the initial video stream transmitted by the source device (e.g., image A0) and/or a composite image in the processed video stream transmitted by the pre-stage device (e.g., the composite image obtained by splicing images B0, C0, D0, and E0). Note that image A0 may be one of the images B0, C0, D0, E0, or may be an image different from B0, C0, D0, E0; this application does not limit this.
In practical applications, a user can connect one or more input devices to the terminal device according to actual needs, so that the terminal device can simultaneously support video processing of single-source or multi-source input video streams.
Step S102, acquiring preset image processing parameters.
Wherein the image processing parameter may include at least one of a first parameter and a second parameter. The first parameter and the second parameter are two different parameters, wherein the first parameter is used for processing the input image to obtain an image to be combined, and the second parameter is used for directly generating the image to be combined.
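As a hedged illustration of this distinction (the data layout below is an assumption, not the patent's own data model), the two parameter kinds could be modeled as follows:

```python
# Minimal sketch: the first parameter transforms an existing input image,
# the second parameter describes an image to generate from scratch.
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class FirstParameter:
    """Used to process an input image into an image to be combined."""
    pixel_range: Tuple[int, int, int, int]   # (x0, y0, x1, y1) crop region
    source_stream: Optional[int] = None      # associated input video stream

@dataclass
class SecondParameter:
    """Used to directly generate an image to be combined."""
    target_color: Tuple[int, int, int]       # e.g. (0, 255, 0) for green
```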
Step S103, performing image processing on one or more frames of images in the input image according to the first parameter to obtain an image to be combined; and/or generating an image to be combined according to the second parameter.
In the embodiment of the application, the terminal device may generate the image to be combined in the output video stream through different processing modes. In some embodiments, the image to be combined may be an image obtained by performing image processing on one or more frames of images in the input image according to the first parameter, and in other embodiments, the image to be combined may be an image directly generated according to the second parameter.
Please refer to fig. 2 and fig. 3. The controller of the terminal device may generate a control flow to process the input image stored in the memory. The memory may be Synchronous Dynamic Random Access Memory (SDRAM), Double Data Rate SDRAM (DDR SDRAM), or another type of memory.
Fig. 3 shows a schematic view of a control interface of the controller of the terminal device, in which a user can input the first parameter and/or the second parameter to generate a corresponding control flow for image processing. If the user inputs a first parameter, for example a pixel coordinate range or an image gain parameter, the terminal device performs image processing on one or more frames of the input image stored in the memory according to that first parameter to obtain an image to be combined. If the user inputs a second parameter, for example a target color or a replacement image, the terminal device generates the image to be combined according to that second parameter.
In some embodiments of the present application, the terminal device may obtain a target frequency doubling parameter, where the target frequency doubling parameter refers to a ratio between a frame frequency of the output video stream and a frame frequency of the input video stream. Specifically, the terminal device may obtain a target frequency doubling parameter input by the user on the control interface shown in fig. 3, and may also calculate the target frequency doubling parameter according to the frame frequency of the input video stream and the frame frequency of the output video stream, which is not limited in this application.
The terminal device may divide the different frame images of an input video stream into different image groups. Likewise, images of the same frame index from different input video streams may be placed in the same image group, where each image group contains one frame of input image acquired from each input video stream. That is, the terminal device may group the input images of different input video streams in the order of the video frames. For example, the first frame images of input video streams V1 and V2 are divided into the first image group, the second frame images of V1 and V2 into the second image group, the third frame images of V1 and V2 into the third image group, and so on.
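A minimal sketch of this grouping, assuming each input video stream has already been decoded into an ordered list of frames (the function name is illustrative):

```python
# Group the i-th frame of every input video stream into the i-th image group.
def group_frames(*streams):
    # zip stops at the shortest stream, so every image group is complete
    return [list(group) for group in zip(*streams)]

v1 = ["V1-frame1", "V1-frame2", "V1-frame3"]
v2 = ["V2-frame1", "V2-frame2", "V2-frame3"]
print(group_frames(v1, v2))
# [['V1-frame1', 'V2-frame1'], ['V1-frame2', 'V2-frame2'], ...]
```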
Then, according to the target frequency doubling parameter, the terminal device may obtain a second number of image processing parameters used by the first number of image groups, where a ratio of the second number to the first number is the target frequency doubling parameter.
For example, if the frame rate of the input video stream is 60 Hz and the user inputs a 4x multiplier, the frame rate of the output video stream should be 240 Hz; that is, each image group will be processed to obtain four frames of images to be combined. In this case, the terminal device may acquire the image processing parameter that the user inputs in the control interface shown in fig. 3 for each of the four images to be combined, obtaining four image processing parameters. The image processing parameters used for the different frames to be combined may be the same or different.
After the image processing parameters are obtained, the terminal device may perform image processing on one or more frames of images in the input image according to the first parameters in the second number of image processing parameters to obtain an image to be combined, and/or pre-generate an image to be combined according to the second parameters in the second number of image processing parameters. And processing each image group in sequence until all the image groups are traversed.
For example, when the target frequency doubling parameter is 4, for the first image group the terminal device may perform image processing with four image processing parameters to obtain the first to fourth frames of images to be combined; for the second image group, it may reuse the same four image processing parameters to obtain the fifth to eighth frames of images to be combined, and so on, until all image groups are processed and every frame of image to be combined in the output video stream has been obtained. The same parameter group can thus be reused across different image groups, so the user does not need to configure the image processing parameters of each image group one by one, which improves video processing efficiency.
It should be understood that in other embodiments the target frequency doubling parameter may also be 0.5, or another value less than 1. In that case, multiple image groups are processed to obtain one frame of image to be combined, and the resulting frames are combined into the output video stream to achieve frequency reduction. For example, if the user inputs a 0.5x multiplier, every two image groups are processed to obtain one frame of image to be combined.
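The frame-rate arithmetic described above can be summarized in a short sketch (the function names are assumptions; the patent only fixes the ratios):

```python
# 4x: a 60 Hz input becomes a 240 Hz output, four parameters per image group.
# 0.5x: two image groups are merged into one output frame (frequency reduction).
def output_frame_rate(input_rate_hz, multiplier):
    """Frame rate of the output video stream."""
    return input_rate_hz * multiplier

def parameters_per_group(multiplier):
    """Image processing parameters applied per image group (multiplier >= 1)."""
    return int(multiplier)

def groups_per_output_frame(multiplier):
    """Image groups consumed per output frame (multiplier < 1)."""
    return int(round(1 / multiplier))

print(output_frame_rate(60, 4))      # 240
print(parameters_per_group(4))       # 4
print(groups_per_output_frame(0.5))  # 2
```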
It should be noted that, the above is only a description of a control method of the present application, and in other embodiments, the target frequency doubling parameter and the image processing parameter may also be generated by the terminal device according to the need of the output video stream, or may also be obtained by performing data interaction with other intelligent terminals, which is not limited in this application.
In addition, since the number of the input video streams may be multiple, when the terminal device acquires the first parameter, the terminal device may further acquire an association relationship between the first parameter and the input video streams. For example, when the user inputs the first parameter, the user may input an input video stream from which the image processed by the first parameter comes, and at this time, an association relationship is formed between the first parameter and the input video stream.
Correspondingly, the terminal device may perform image processing on one or more frames of images in the input images acquired from the target input video stream according to the first parameter to obtain an image to be combined, where the target input video stream is the input video stream associated with the first parameter. That is, the first parameter is used to process images in the associated input video stream.
For example, if the first parameter is associated with input video stream V1, the terminal device can perform image processing, according to that first parameter, on the input image acquired from input video stream V1 to obtain the image to be combined.
And step S104, combining the images to be combined to obtain an output video stream.
In the embodiment of the application, after the images to be combined are obtained, the terminal device may combine the images to be combined to obtain the output video stream. The resulting output video stream may be used for display on a display screen of the terminal device, or may be transmitted by the terminal device to other devices for further video processing by the other devices or for display on a display screen of the other devices.
Specifically, in some embodiments, the terminal device may obtain an identifier (e.g., a timestamp) corresponding to each frame of images to be combined, and combine the images to be combined according to an order of the identifiers, to obtain the output video stream. In other embodiments, the terminal device may further perform combination according to the similarity of the image content of each frame of image to be combined, or according to other classification and sorting manners, which is not limited in this application.
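A hedged sketch of the identifier-based combination (the (timestamp, image) layout is an assumption for illustration):

```python
# Combine images to be combined into an output sequence by timestamp order.
def combine(images_to_combine):
    """images_to_combine: list of (timestamp, image) pairs."""
    ordered = sorted(images_to_combine, key=lambda pair: pair[0])
    return [image for _, image in ordered]  # frames of the output video stream

print(combine([(0.016, "B1"), (0.000, "A1"), (0.033, "C1")]))
# ['A1', 'B1', 'C1']
```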
As shown in fig. 2 and fig. 3, the terminal device processes the input video stream (frame rate 60 Hz) to obtain the output video stream (frame rate 240 Hz). The output video stream displays in sequence: image A1 to be combined with serial number 1 (time T1), image B1 to be combined with serial number 2 (time T2), image C1 to be combined with serial number 3 (time T3), and image D1 to be combined with serial number 4 (time T4).
To summarize this embodiment: an input image is acquired from the input video stream, preset image processing parameters including at least one of a first parameter and a second parameter are acquired, images to be combined are obtained by processing one or more frames of the input image according to the first parameter and/or generated according to the second parameter, and the images to be combined are combined into the output video stream. The terminal device can therefore generate images without an input video stream or derive output frames from input frames, and since the frame rate and content of the output video stream need not be determined by the input video stream, the output video stream can be adapted to different application scenes.
In order to meet the video processing requirements of different use scenes, the terminal equipment provided by the application can execute video processing in different modes.
Referring to fig. 4 and fig. 5, fig. 4 shows a schematic diagram of a first video processing manner provided by the present application, and fig. 5 shows a schematic diagram of a control interface corresponding to a controller.
In this embodiment, the first parameter may include a pixel coordinate range. The terminal device can intercept an image area in the pixel point coordinate range in the input image as an image to be combined according to the pixel point coordinate range.
Specifically, as shown by the first frame image A1 to be combined and the third frame image C1 to be combined in fig. 4 and fig. 5, the terminal device can acquire a coordinate start position input by the user, determine the coordinate end position according to the resolution or size requirement of the image to be combined, and thus obtain the pixel coordinate range; it then intercepts the image area within that pixel coordinate range in the input image as the image to be combined. In other embodiments, the terminal device may instead directly acquire both the coordinate start position and the coordinate end position input by the user to obtain the pixel coordinate range.
In this manner, the terminal device can intercept, from an input image, the image area the user is interested in as the image to be combined. For example, as shown in fig. 4, if the input image is a composite image spliced from images A0, B0, C0, and D0, the terminal device can also intercept any of images A0, B0, C0, D0 within the composite image as the image to be combined.
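A minimal sketch of this crop, assuming images are NumPy arrays in (height, width, channels) layout, with the coordinate end position derived from the required output size as described above:

```python
import numpy as np

def crop_to_combine(input_image, x0, y0, out_width, out_height):
    """Intercept the image area inside the pixel coordinate range."""
    x1, y1 = x0 + out_width, y0 + out_height  # coordinate end position
    return input_image[y0:y1, x0:x1]

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)  # stand-in input image
region = crop_to_combine(frame, x0=100, y0=50, out_width=640, out_height=360)
print(region.shape)  # (360, 640, 3)
```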
In other embodiments, the second parameter may include a target color. In this case, as shown by the second frame (image B1 to be combined in fig. 4) and the fourth frame (image D1 to be combined in fig. 4) in fig. 5, the terminal device may generate a solid-color image corresponding to the target color and use that solid image as the image to be combined.
In practical applications, the target color may be set to green or white. When the output video stream is applied to virtual shooting, if the user finds that the imaging obtained when image A1 to be combined is displayed on the LED display screen as the shooting background is poor, that imaging can be abandoned; the imaging obtained when the LED display screen displays image B1 to be combined as the background can be used instead, with later background processing replacing the poorly imaged content. The user therefore does not need to re-shoot with image A1 to be combined as the shooting background.
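Generating the solid-color frame from the second parameter is straightforward; a sketch under the same NumPy assumption (the 1920x1080 resolution is illustrative):

```python
import numpy as np

def solid_image(target_color, width=1920, height=1080):
    """Return an image filled entirely with target_color (B, G, R)."""
    return np.full((height, width, 3), target_color, dtype=np.uint8)

green_frame = solid_image((0, 255, 0))       # green field for later keying
print(green_frame.shape, green_frame[0, 0])  # (1080, 1920, 3) [  0 255   0]
```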
Referring to fig. 6 and fig. 7, fig. 6 shows a schematic diagram of a second video processing manner provided in the present application, and fig. 7 shows a schematic diagram of a control interface corresponding to a controller.
In this embodiment, the terminal device may acquire a target input video source input by a user and a related first parameter, and further perform image processing on one or more frames of images in an input image acquired from a target input video stream according to the first parameter to obtain an image to be combined.
Therefore, each frame of image to be combined can be sourced from different input video streams, and different frames of images to be combined can also be sourced from the same input video source, namely, the video source of each frame of image to be combined is selectable.
It should be noted that in this manner the first parameter may likewise be a pixel coordinate range (for example, for image A1 to be combined in the first frame of fig. 6 and fig. 7); for the specific implementation, reference may be made to the description of the first video processing manner, which is not repeated here.
Referring to fig. 8 and 9, fig. 8 is a schematic diagram illustrating a third video processing method provided in the present application, and fig. 9 is a schematic diagram illustrating a control interface corresponding to a controller.
In this embodiment, the first parameter may include a pixel point coordinate range. The terminal equipment can acquire a preset first target window comprising a plurality of processing windows, then respectively set one or more frames of images in the input image into one processing window, and further take an image area positioned in a pixel point coordinate range in the first target window as an image to be combined.
For example, as shown in fig. 8, the terminal device may generate 4 processing windows, and the first target window is composed of these 4 processing windows. One frame of input image acquired from each of the 4 input video streams, namely A0, B0, C0, and D0, is then set into one processing window each. The size of each processing window can be adjusted according to the user's requirements; when an image is set into a window, the input image may be cropped or zoomed to fit the size of the processing window.
At this time, as shown by the first frame image A1 to be combined and the third frame image C1 to be combined in fig. 8 and fig. 9, the terminal device can take the image area within the corresponding pixel coordinate range in the first target window as the image to be combined.
In this way, the terminal device can stitch input images extracted from different input video streams within the first target window. A visual window interface can also be provided for the user's reference, with the size and content of each window determined by the user according to actual requirements; this manner is well suited to a video processor.
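As a hedged sketch of this mode, four input frames could be scaled into the quadrants of one first target window and the pixel coordinate range cut out of the composite; the 2x2 layout, window sizes, and use of OpenCV are assumptions:

```python
import numpy as np
import cv2

def build_first_target_window(frames, win_w=960, win_h=540):
    """Tile four input frames into a 2x2 first target window."""
    scaled = [cv2.resize(f, (win_w, win_h)) for f in frames]  # fit each window
    top = np.hstack(scaled[:2])
    bottom = np.hstack(scaled[2:])
    return np.vstack([top, bottom])  # shape (2*win_h, 2*win_w, 3)

frames = [np.zeros((1080, 1920, 3), dtype=np.uint8) for _ in range(4)]
window = build_first_target_window(frames)
to_combine = window[0:540, 0:960]  # image area inside the pixel range
print(window.shape, to_combine.shape)  # (1080, 1920, 3) (540, 960, 3)
```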
It should be noted that, this method also supports setting and selecting the input video stream, and the specific implementation manner may refer to the description of the foregoing second video processing method, which is not described herein again.
Referring to fig. 10 and fig. 11, fig. 10 shows a schematic diagram of a fourth video process provided in the present application, and fig. 11 shows a schematic diagram of a control interface corresponding to a controller.
In this embodiment, the first parameter may include a pixel point coordinate range. The terminal device may obtain a plurality of preset second target windows, set one or more frames of images in the input image in one second target window, then set the priority of each second target window to the highest priority in sequence, and take the image area located within the pixel point coordinate range in the second target window with the current priority as the highest priority as the image to be combined.
It should be noted that the priority may be updated once every preset interval duration, and during updating the highest priority may be assigned to each second target window cyclically. For example, suppose there are three second target windows K1, K2, and K3. Initially K1 is set to the highest priority; after one interval duration K2 is set to the highest priority; after another interval duration K3 is set to the highest priority; after a further interval duration K1 is set to the highest priority again, and so on.
In this way, cyclic updating can be implemented in the second target window, and the image content and the order of the images to be combined in the output video stream are related to the priority.
In some embodiments of the present application, the terminal device may further acquire a target frame rate, i.e. the frame rate of the output video stream, and determine from it the interval duration for updating the priority. The terminal device then updates the priority of the second target windows each time the interval duration elapses. In this way, the interval between priority updates matches the interval between two adjacent frames of images to be combined. This avoids the priority rotating so fast that several second target windows pass between the windows used by two adjacent frames, in which case the image content set in those skipped windows would be lost without ever being used.
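A sketch of this rotation, with the interval duration derived from the target frame rate as described (the window names K1..K3 follow the example above; the generator form is an assumption):

```python
from itertools import cycle

def rotate_priority(windows, target_frame_rate_hz, num_frames):
    """Yield (time, highest-priority window) once per output frame."""
    interval = 1.0 / target_frame_rate_hz  # seconds between priority updates
    for i, window in zip(range(num_frames), cycle(windows)):
        yield i * interval, window

for t, win in rotate_priority(["K1", "K2", "K3"], 240, 6):
    print(f"t={t:.4f}s -> {win} has highest priority")
# cycles K1, K2, K3, K1, K2, K3 at 1/240 s spacing
```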
In order to provide a more convenient operation mode for a user, please refer to fig. 12, where fig. 12 shows a schematic diagram of a software control interface provided in the present application.
In this embodiment, the image processing parameter may be an identification of an image processing policy. The terminal equipment can acquire the identifier of the image processing strategy, if the identifier belongs to the first parameter, one or more frames of images in the input images are processed according to the image processing strategy corresponding to the identifier, and the processed images are used as images to be combined. And if the identifier belongs to the second parameter, generating an image to be combined according to the image processing strategy corresponding to the identifier.
The image processing strategy may include at least one of video source selection, color filling, image inversion, image gain, phase adjustment, and color replacement.
Video source selection refers to selecting the input video stream associated with the first parameter, as described above. Color filling refers to generating the image to be combined from a target color or a selected replacement image. Image flipping refers to inverting the image content of an input image. Image gain refers to a driving-current gain that adjusts the brightness of the input image. Phase adjustment refers to adjusting the phase timing of the black-field signal in the output video source.
It should be understood that the user operating in the interface shown in fig. 12 may enable the terminal device to process the same input image based on one or more of the aforementioned manners, and implement different video processing functions, such as image flipping and image gain on one input image at the same time.
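For illustration, such identifier-driven processing could be dispatched through a strategy table; the identifier names and the simple gain model below are assumptions, not the patent's definitions:

```python
import numpy as np

def flip_image(image):
    return image[::-1, ::-1]  # invert the image content (180-degree flip)

def apply_gain(image, gain=1.2):
    return np.clip(image.astype(np.float32) * gain, 0, 255).astype(np.uint8)

def fill_color(_image, color=(0, 255, 0)):
    return np.full((1080, 1920, 3), color, dtype=np.uint8)  # color filling

STRATEGIES = {"flip": flip_image, "gain": apply_gain, "fill": fill_color}

def process(image, identifiers):
    for name in identifiers:             # strategies can be chained, e.g.
        image = STRATEGIES[name](image)  # flipping and gain on one image
    return image

frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
print(process(frame, ["flip", "gain"]).shape)  # (1080, 1920, 3)
```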
Based on the above description, the video processing method provided by the application is not only suitable for application scenes with different input video sources, such as single source and multi-source, but also suitable for application scenes requiring multiple video processing, and has high scene adaptability.
It should be noted that, for simplicity of description, the foregoing method embodiments are described as a series of acts or combination of acts, but those skilled in the art will recognize that the present application is not limited by the order of acts, as some steps may, in accordance with the present application, occur in other orders.
Fig. 13 is a schematic structural diagram of a video processing apparatus 1300 according to an embodiment of the present disclosure, where the video processing apparatus 1300 is configured on a terminal device.
Specifically, the video processing apparatus 1300 may include:
an image acquisition unit 1301 for acquiring an input image from an input video stream;
a parameter obtaining unit 1302, configured to obtain preset image processing parameters, where the image processing parameters include at least one of a first parameter and a second parameter;
the image processing unit 1303 is configured to perform image processing on one or more frames of images in the input image according to the first parameter to obtain an image to be combined; and/or generate the image to be combined according to the second parameter;
and the video processing unit 1304 is configured to combine the images to be combined to obtain an output video stream.
In some embodiments of the present application, the image obtaining unit 1301 may be specifically configured to: respectively extracting at least one frame of image from one or more input video streams as the input images, wherein each frame of image in the input video streams is an image in an initial video stream transmitted by one source device, or is a composite image obtained by splicing images in the initial video streams transmitted by a plurality of source devices; the parameter obtaining unit 1302 may be specifically configured to: obtaining first parameters, each of the first parameters being associated with one of the input video streams; the image processing unit 1303 may be specifically configured to: and according to the first parameter, performing image processing on one or more frames of images in the input images acquired from a target input video stream to obtain the image to be combined, wherein the target input video stream is an input video stream associated with the first parameter.
Specifically, the terminal device may include a circuit board supporting multiple interfaces to perform data transmission through different types of interfaces. Each interface may interface with a different input device to receive an input video stream input by the different input device through the interface. The types of the interfaces for connecting different input devices and terminal devices may be the same or different, for example, the interfaces may be HDMI, DP, SDI, and the like, which is not limited in this application. The number of each type of interface may be one or more. The user may access one or more input devices according to actual needs. When a user inputs a first parameter associated with a certain input video stream, the terminal device can analyze the signal received by the corresponding interface to obtain the input video stream, and process one or more frames of images in the analyzed input video stream by using the first parameter to obtain an image to be combined, thereby realizing the video processing of single-source input or multi-source input.
In some embodiments of the present application, the parameter obtaining unit 1302 may be specifically configured to: acquiring a target frequency doubling parameter, wherein the target frequency doubling parameter is a ratio between the frame frequency of the output video stream and the frame frequency of the input video stream; acquiring a second number of image processing parameters used by a first number of image groups according to the target frequency doubling parameter, wherein the ratio of the second number to the first number is the target frequency doubling parameter, and each image group contains a frame of input image acquired from each input video stream; the image processing unit 1303 may be specifically configured to: and according to the first parameters in the second number of image processing parameters, carrying out image processing on the input images in the first number of image groups to obtain the images to be combined, and/or generating the images to be combined according to the second parameters to obtain the second number of images to be combined.
In practical application, a user can access an input device capable of transmitting an input video stream with a specific frame frequency in an interface of the circuit board according to requirements. Based on the frame frequency of the output video stream required by the user and the frame frequency of the input video stream accessed, the user can input the target frequency multiplication parameter on the terminal device, so that the frame frequency of the output video stream obtained by completing the video processing meets the actual requirement of the user.
In some embodiments of the present application, the video processing apparatus 1300 further includes a window processing unit configured to: acquiring a preset first target window, wherein the first target window comprises a plurality of processing windows; respectively arranging one or more frames of images in the input images in one processing window; the first parameter may include a pixel coordinate range in the first target window; the image processing unit 1303 may be specifically configured to: and taking the image area positioned in the pixel point coordinate range in the first target window as the image to be combined.
In some embodiments of the present application, the window processing unit may be specifically configured to: acquiring a plurality of preset second target windows; respectively arranging one or more frames of images in the input images in one second target window; setting the priority of each second target window as the highest priority in sequence; the first parameter may include a pixel coordinate range in the second target window; the image processing unit 1303 may be specifically configured to: and taking the image area positioned in the pixel point coordinate range in the second target window with the current highest priority as the image to be combined.
In some embodiments, the terminal device may be configured with an output device such as a display screen. The display screen may display an interface for a user to input image processing parameters, such as the interface shown in fig. 3. Meanwhile, windows such as a first target window, a second target window and the like can be displayed, at the moment, a user can visually view image content in each window, the user can conveniently select an image area in a specific pixel point coordinate range as an image to be combined, and visual input and selection operation is achieved.
In some embodiments of the present application, the image processing parameter is an identifier of an image processing policy; the image processing unit 1303 may be specifically configured to: processing one or more frames of images in the input image according to the image processing strategy corresponding to the identifier, and taking the processed images as the images to be combined; and generating the image to be combined according to the image processing strategy corresponding to the identifier.
In some embodiments of the present application, the second parameter includes a target color; the image processing unit 1303 may be specifically configured to: and generating a pure color image corresponding to the target color, and taking the pure color image as the image to be combined.
It should be noted that, for convenience and simplicity of description, the specific working process of the video processing apparatus 1300 may refer to the corresponding process of the video processing method described in fig. 1 to fig. 12, and is not repeated herein.
Fig. 14 is a schematic diagram of a terminal device according to an embodiment of the present application. The terminal device 14 may include: a processor 140, a memory 141, and a computer program 142, such as a video processing program, stored in the memory 141 and executable on the processor 140. The processor 140 implements the steps in the above-described embodiments of the video processing method, such as the steps S101 to S104 shown in fig. 1, when executing the computer program 142. Alternatively, the processor 140 implements the functions of the modules/units in the above-described device embodiments when executing the computer program 142, such as the image acquisition unit 1301, the parameter acquisition unit 1302, the image processing unit 1303, and the video processing unit 1304 shown in fig. 13.
The computer program may be divided into one or more modules/units, which are stored in the memory 141 and executed by the processor 140 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing certain functions, which are used to describe the execution of the computer program in the terminal device.
For example, the computer program may be divided into: the device comprises an image acquisition unit, a parameter acquisition unit, an image processing unit and a video processing unit.
The specific functions of each unit are as follows: an image acquisition unit for acquiring an input image from an input video stream; the image processing system comprises a parameter acquisition unit, a parameter processing unit and a parameter processing unit, wherein the parameter acquisition unit is used for acquiring preset image processing parameters which comprise at least one of a first parameter and a second parameter; the image processing unit is used for carrying out image processing on one or more frames of images in the input image according to the first parameter to obtain an image to be combined; and/or generating the image to be combined according to the second parameter; and the video processing unit is used for combining the images to be combined to obtain an output video stream.
The terminal device may include, but is not limited to, a processor 140, a memory 141. Those skilled in the art will appreciate that fig. 14 is merely an example of a terminal device and is not limiting and may include more or fewer components than shown, or some components may be combined, or different components, for example, the terminal device may also include input output devices, network access devices, buses, etc.
The Processor 140 may be a Central Processing Unit (CPU), another general-purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, discrete hardware components, etc. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The storage 141 may be an internal storage unit of the terminal device, such as a hard disk or a memory of the terminal device. The memory 141 may also be an external storage device of the terminal device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device. Further, the memory 141 may also include both an internal storage unit and an external storage device of the terminal device. The memory 141 is used for storing the computer programs and other programs and data required by the terminal device. The memory 141 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for convenience and simplicity of description, the structure of the terminal device may also refer to the specific description of the structure in the method embodiment, and details are not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only used for distinguishing one functional unit from another, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/terminal device and method may be implemented in other ways. For example, the above-described embodiments of the apparatus/terminal device are merely illustrative, and for example, the division of the modules or units is only one type of logical function division, and other division manners may be available in actual implementation, for example, multiple units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one position, or may be distributed on multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present application may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit. The integrated unit can be realized in a form of hardware, and can also be realized in a form of a software functional unit.
The integrated modules/units, if implemented in the form of software functional units and sold or used as separate products, may be stored in a computer-readable storage medium. Based on such understanding, all or part of the flow in the methods of the embodiments described above can be realized by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, realizes the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file, some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a Read-Only Memory (ROM), a Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content contained in the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in a jurisdiction; for example, in some jurisdictions, computer-readable media do not include electrical carrier signals and telecommunications signals in accordance with legislation and patent practice.
The above-mentioned embodiments are only used to illustrate the technical solutions of the present application, and not to limit the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A video processing method, comprising:
acquiring an input image from an input video stream;
acquiring preset image processing parameters, wherein the image processing parameters comprise at least one of a first parameter and a second parameter, and the first parameter and the second parameter are different;
performing image processing on one or more frames of images in the input images according to the first parameters to obtain images to be combined; and/or generating the image to be combined according to the second parameter;
and combining the images to be combined to obtain an output video stream.
2. The video processing method of claim 1, wherein said obtaining an input image from an input video stream comprises:
respectively extracting at least one frame of image from one or more input video streams as the input images, wherein each frame of image in the input video streams is an image in an initial video stream transmitted by one source device, or is a composite image obtained by splicing images in the initial video streams transmitted by a plurality of source devices;
the acquiring of the preset image processing parameters includes:
obtaining first parameters, each of the first parameters being associated with one of the input video streams;
the image processing one or more frames of images in the input image according to the first parameter to obtain an image to be combined includes:
and according to the first parameter, performing image processing on one or more frames of images in the input images acquired from a target input video stream to obtain the image to be combined, wherein the target input video stream is an input video stream associated with the first parameter.
3. The video processing method of claim 1, wherein the acquiring of the preset image processing parameters comprises:
acquiring a target frequency doubling parameter, wherein the target frequency doubling parameter is the ratio of the frame rate of the output video stream to the frame rate of the input video stream;
acquiring, according to the target frequency doubling parameter, a second number of image processing parameters used by a first number of image groups, wherein the ratio of the second number to the first number is the target frequency doubling parameter, and each image group contains one frame of input image acquired from each input video stream;
the performing image processing on one or more frames of images in the input image according to the first parameter to obtain an image to be combined, and/or generating the image to be combined according to the second parameter, comprises:
performing image processing on the input images in the first number of image groups according to the first parameters among the second number of image processing parameters to obtain the images to be combined, and/or generating the images to be combined according to the second parameters, so as to obtain the second number of images to be combined.
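A worked example of the ratio in claim 3 (the concrete numbers are assumptions for illustration, not taken from the patent): with a 50 Hz input and a target frequency doubling parameter of 2, each image group is covered by two image processing parameters, so the output runs at 100 Hz; a parameter of 0.5 maps two groups onto a single image processing parameter and halves the rate. A hypothetical helper:

    # Derive the "second number" of image processing parameters from the
    # "first number" of image groups and the target frequency doubling parameter.
    def second_number(doubling: float, first_number: int) -> int:
        count = doubling * first_number
        if count != int(count):
            raise ValueError("choose first_number so the parameter count is whole")
        return int(count)

    assert second_number(2.0, 1) == 2  # 2x up-conversion: 1 group -> 2 parameters
    assert second_number(0.5, 2) == 1  # 0.5x down-conversion: 2 groups -> 1 parameter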
4. The video processing method according to any of claims 1 to 3, wherein the video processing method further comprises:
acquiring a preset first target window, wherein the first target window comprises a plurality of processing windows;
arranging one or more frames of the input images in respective processing windows;
the first parameter comprises a pixel point coordinate range in the first target window, and the performing image processing on one or more frames of images in the input image according to the first parameter to obtain the image to be combined comprises:
taking the image region located within the pixel point coordinate range in the first target window as the image to be combined.
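A sketch of the cropping in claim 4, assuming the processing windows have already been composited onto one canvas (the first target window) and the pixel point coordinate range is an axis-aligned rectangle; the sizes and coordinate convention are assumptions:

    # The image to be combined is the canvas region inside the coordinate range.
    import numpy as np

    def crop_region(canvas: np.ndarray, x0: int, y0: int,
                    x1: int, y1: int) -> np.ndarray:
        return canvas[y0:y1, x0:x1]  # rows index y, columns index x

    window = np.zeros((1080, 1920, 3), np.uint8)  # stand-in first target window
    to_combine = crop_region(window, 0, 0, 960, 540)  # top-left quarter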
5. The video processing method according to any of claims 1 to 3, wherein the video processing method further comprises:
acquiring a plurality of preset second target windows;
arranging one or more frames of the input images in respective second target windows;
sequentially setting each second target window to have the highest priority;
the first parameter comprises a pixel point coordinate range in the second target window, and the performing image processing on one or more frames of images in the input image according to the first parameter to obtain the image to be combined comprises:
taking the image region located within the pixel point coordinate range in the second target window that currently has the highest priority as the image to be combined.
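In claim 5 the highest priority rotates across the second target windows, so successive images to be combined are cropped from different windows. A hypothetical sketch, with windows as numpy-style arrays:

    # Each window becomes the highest-priority window in turn; the crop is
    # always taken from whichever window is currently on top.
    from itertools import cycle

    def rotate_priority(windows, x0, y0, x1, y1):
        for top in cycle(windows):   # priority advances in sequence
            yield top[y0:y1, x0:x1]  # crop from the current top window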
6. The video processing method according to any of claims 1 to 3, wherein the image processing parameter is an identifier of an image processing strategy;
the performing image processing on one or more frames of images in the input image according to the first parameter to obtain the image to be combined comprises:
processing one or more frames of images in the input image according to the image processing strategy corresponding to the identifier, and taking the processed image as the image to be combined;
the generating the image to be combined according to the second parameter comprises:
generating the image to be combined according to the image processing strategy corresponding to the identifier.
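One way to realize the identifier indirection of claim 6 is a registry that maps each identifier to a concrete strategy; the identifiers and strategies below are invented for illustration:

    # The image processing parameter carries only an identifier; the registry
    # resolves it to the strategy that produces the image to be combined.
    import numpy as np

    STRATEGIES = {
        "crop_top_left": lambda img: img[:540, :960],                       # first-parameter style
        "black_frame": lambda _=None: np.zeros((1080, 1920, 3), np.uint8),  # second-parameter style
    }

    def run_strategy(identifier, frame=None):
        strategy = STRATEGIES[identifier]
        return strategy(frame) if frame is not None else strategy()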
7. The video processing method according to any of claims 1 to 3, wherein the second parameter comprises a target color;
the generating the image to be combined according to the second parameter comprises:
generating a pure-color image corresponding to the target color, and taking the pure-color image as the image to be combined.
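Claim 7's generated image is simply a frame filled with the target color; a minimal sketch with an assumed frame size and channel order:

    # A pure-color image used directly as the image to be combined.
    import numpy as np

    def solid_color(height: int, width: int, color=(0, 255, 0)) -> np.ndarray:
        frame = np.empty((height, width, 3), np.uint8)
        frame[:] = color  # broadcast the target color to every pixel
        return frame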
8. A video processing apparatus, comprising:
an image acquisition unit, configured to acquire an input image from an input video stream;
a parameter acquisition unit, configured to acquire preset image processing parameters, wherein the image processing parameters comprise at least one of a first parameter and a second parameter;
an image processing unit, configured to perform image processing on one or more frames of images in the input image according to the first parameter to obtain an image to be combined, and/or generate the image to be combined according to the second parameter; and
a video processing unit, configured to combine the images to be combined to obtain an output video stream.
9. A terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the video processing method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored which, when being executed by a processor, carries out the steps of the video processing method according to any one of claims 1 to 7.
CN202210767579.9A 2022-07-01 2022-07-01 Video processing method, device, terminal equipment and storage medium Pending CN115334235A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210767579.9A CN115334235A (en) 2022-07-01 2022-07-01 Video processing method, device, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN115334235A (en) 2022-11-11

Family

ID=83918098

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210767579.9A Pending CN115334235A (en) 2022-07-01 2022-07-01 Video processing method, device, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN115334235A (en)

Patent Citations (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103745687A (en) * 2013-12-04 2014-04-23 西安诺瓦电子科技有限公司 Image display method and image display device
CN104883514A (en) * 2015-05-11 2015-09-02 北京金山安全软件有限公司 Video processing method and device
WO2018143632A1 (en) * 2017-02-03 2018-08-09 Samsung Electronics Co., Ltd. Sensor for capturing image and method for controlling the same
CN110521212A (en) * 2017-04-27 2019-11-29 西安诺瓦星云科技股份有限公司 Video record and processing system and method
CN108124109A (en) * 2017-11-22 2018-06-05 上海掌门科技有限公司 A kind of method for processing video frequency, equipment and computer readable storage medium
CN108881927A (en) * 2017-11-30 2018-11-23 北京视联动力国际信息技术有限公司 A kind of video data synthetic method and device
CN108111749A (en) * 2017-12-06 2018-06-01 广东欧珀移动通信有限公司 Image processing method and device
WO2020103786A1 (en) * 2018-11-23 2020-05-28 华为技术有限公司 Method for generating multiple video streams and device
CN112399095A (en) * 2019-08-15 2021-02-23 西安诺瓦星云科技股份有限公司 Video processing method, device and system
CN110581957A (en) * 2019-08-26 2019-12-17 Oppo广东移动通信有限公司 image processing method, image processing device, storage medium and electronic equipment
CN110913271A (en) * 2019-11-29 2020-03-24 Oppo广东移动通信有限公司 Video processing method, mobile terminal and non-volatile computer-readable storage medium
CN111147768A (en) * 2019-12-25 2020-05-12 北京恒峰致远科技有限公司 Intelligent monitoring video review method for improving review efficiency
CN111614911A (en) * 2020-05-08 2020-09-01 Oppo广东移动通信有限公司 Image generation method and device, electronic equipment and storage medium
KR20220090158A (en) * 2020-12-22 2022-06-29 삼성전자주식회사 Electronic device for editing video using objects of interest and operating method thereof
CN114302174A (en) * 2021-12-31 2022-04-08 上海爱奇艺新媒体科技有限公司 Video editing method and device, computing equipment and storage medium

Similar Documents

Publication Publication Date Title
US9578224B2 (en) System and method for enhanced monoimaging
US10021302B2 (en) Video recording method and device
US11450044B2 (en) Creating and displaying multi-layered augemented reality
JP7359521B2 (en) Image processing method and device
CN105959620A (en) Panorama video synchronization display method and panorama video synchronization display device
CN104978929A (en) Point-by-point correcting method of LED display device, system architecture for correcting, and controller
CN110012336B (en) Picture configuration method, terminal and device of live interface
CN108243318B (en) Method and device for realizing live broadcast of multiple image acquisition devices through single interface
WO2023071603A1 (en) Video fusion method and apparatus, electronic device, and storage medium
CN105959562A (en) Method and device for obtaining panoramic photographing data and portable panoramic photographing equipment
CN113297937A (en) Image processing method, device, equipment and medium
WO2023010750A1 (en) Image color mapping method and apparatus, electronic device, and storage medium
WO2023279793A1 (en) Video playing method and apparatus
WO2018025474A1 (en) Information processing device, information processing method, and program
CN108174084A (en) panoramic video processing method and terminal device
CN114339302A (en) Broadcasting guide method, device, equipment and computer storage medium
WO2023010755A1 (en) Hdr video conversion method and apparatus, and device and computer storage medium
WO2023226628A1 (en) Image display method and apparatus, and electronic device and storage medium
US11871137B2 (en) Method and apparatus for converting picture into video, and device and storage medium
CN202841396U (en) Digital film optimization device and digital film projection system
CN112887653B (en) Information processing method and information processing device
CN115334235A (en) Video processing method, device, terminal equipment and storage medium
WO2022237435A1 (en) Method and device for changing background in picture, and storage medium and program product
WO2023010753A1 (en) Color gamut mapping method and apparatus, terminal device, and storage medium
CN113938617A (en) Multi-channel video display method and equipment, network camera and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination