CN111654721A - Video processing method, system, electronic device and storage medium - Google Patents

Video processing method, system, electronic device and storage medium

Info

Publication number
CN111654721A
CN111654721A
Authority
CN
China
Prior art keywords
video file
image data
color image
receiving end
gray
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202010306464.0A
Other languages
Chinese (zh)
Inventor
杨天
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing QIYI Century Science and Technology Co Ltd
Original Assignee
Beijing QIYI Century Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing QIYI Century Science and Technology Co Ltd filed Critical Beijing QIYI Century Science and Technology Co Ltd
Priority to CN202010306464.0A priority Critical patent/CN111654721A/en
Publication of CN111654721A publication Critical patent/CN111654721A/en
Pending legal-status Critical Current

Classifications

    • H04N 21/2343 — Processing of video elementary streams involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N 19/184 — Adaptive coding characterised by the coding unit, the unit being bits, e.g. of the compressed video stream
    • H04N 19/186 — Adaptive coding characterised by the coding unit, the unit being a colour or a chrominance component
    • H04N 21/4402 — Processing of video elementary streams involving reformatting operations of video signals for household redistribution, storage or real-time display

Abstract

The invention relates to a video processing method, system, electronic device, and storage medium. In the video processing system, a sending end acquires an original video file to be transmitted and converts the color image data in each of a plurality of color image frames in the original video file into grayscale image data, obtaining a plurality of grayscale image frames; it then sends an encoded video file containing the grayscale image frames to a receiving end. The receiving end receives the encoded video file, inputs it into a preset video coloring model, and uses the model to color the grayscale image data in the grayscale image frames, obtaining a decoded video file containing a plurality of color image frames. Embodiments of the application can reduce the transmitted code stream and improve coding efficiency without affecting the video viewing experience.

Description

Video processing method, system, electronic device and storage medium
Technical Field
The present application relates to the field of computer technologies, and in particular, to a video processing method, a video processing system, an electronic device, and a storage medium.
Background
With the popularization of smartphones and video streaming services, the total consumption and share of digital video have grown rapidly, and users' expectations of video quality keep rising; as the quality of video resources improves, those resources occupy ever more bandwidth.
However, video resources that occupy more bandwidth cause the following problems: because existing bandwidth resources are limited, other services sharing the bandwidth may no longer run normally, which inconveniences users;
in addition, in an unstable network environment, transmitting bandwidth-heavy video resources may suffer from low transmission speed and a high packet-loss rate, which also inconveniences users.
Disclosure of Invention
To solve the above technical problem or at least partially solve the above technical problem, the present application provides a video processing method, a system, an electronic device, and a storage medium.
In a first aspect, the present application provides a video processing system comprising: a sending end and a receiving end;
the sending end is used for acquiring an original video file to be transmitted, wherein the original video file comprises a plurality of color image frames, and the image data in each color image frame is color image data; respectively converting the color image data in the plurality of color image frames in the original video file into grayscale image data to obtain a plurality of grayscale image frames; and sending an encoded video file containing the plurality of grayscale image frames to a receiving end;
the receiving end is used for receiving the encoded video file sent by the sending end, wherein the encoded video file comprises a plurality of grayscale image frames, and the image data in each grayscale image frame is grayscale image data; and inputting the encoded video file into a preset video coloring model, and coloring the grayscale image data in the plurality of grayscale image frames in the encoded video file by using the video coloring model to obtain a decoded video file containing a plurality of color image frames.
Optionally, the receiving end is further configured to: acquire its current device state parameters; and if the device state parameters satisfy a preset parameter condition, send the sending end a decoloring instruction informing the sending end to convert the color image data in the original video file into grayscale image data;
the sending end is further configured to acquire its current device state parameters, and, if those device state parameters satisfy a preset parameter condition and a decoloring instruction sent by the receiving end has been received, convert the color image data of the plurality of color image frames in the original video file into grayscale image data to obtain an encoded video file.
Optionally, the receiving end is further configured to: acquire its current device state parameters; input the device state parameters into a trigger-condition judgment model and use that model to determine whether the receiving end satisfies a decoding trigger condition; and if the receiving end satisfies the decoding trigger condition, send the sending end a decoloring instruction informing the sending end to convert the color image data in the original video file into grayscale image data;
the sending end is further configured to acquire its current device state parameters; input the device state parameters into a preset trigger-condition judgment model and use that model to determine whether the sending end satisfies an encoding trigger condition; and, if the sending end satisfies the encoding trigger condition and a decoloring instruction sent by the receiving end has been received, convert the color image data of the plurality of color image frames in the original video file into grayscale image data to obtain an encoded video file.
Optionally, the receiving end is further configured to: receive a video coloring model sent by a server; at preset intervals, retrain the video coloring model using the encoded video files received from the sending end and the corresponding decoded video files; input a test grayscale image into the video coloring model; and, if the similarity between the model's output image and the test color image corresponding to the test grayscale image exceeds a preset similarity threshold, conclude the training.
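Illustratively, the stopping condition of this optional retraining step can be sketched as follows. The similarity metric chosen here (1 minus the normalized mean absolute error) is an assumption made for illustration, since the patent specifies only that some similarity score is compared against a preset threshold:

```python
import numpy as np

def similarity(a, b):
    """Illustrative similarity score in [0, 1]: 1 minus the normalized
    mean absolute error. The patent does not fix a particular metric."""
    return 1.0 - float(np.mean(np.abs(a.astype(float) - b.astype(float)))) / 255.0

def training_complete(output_img, test_color_img, threshold=0.95):
    """Stop retraining once the colorized test image is close enough
    to the reference test color image."""
    return similarity(output_img, test_color_img) > threshold

# Example: model output differs from the reference by 5 gray levels per channel.
a = np.full((2, 2, 3), 200, dtype=np.uint8)
b = np.full((2, 2, 3), 205, dtype=np.uint8)
done = training_complete(a, b)
```

In practice a perceptual metric (e.g. PSNR or SSIM) would likely replace this toy score, but the threshold comparison is the same.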
In a second aspect, the present application provides a video processing method, including:
the method comprises the steps that a sending end obtains an original video file to be transmitted, wherein the original video file comprises a plurality of color image frames, and image data in each color image frame is color image data;
the sending end respectively converts color image data in a plurality of color image frames in the original video file into gray image data to obtain a plurality of gray image frames;
the sending end sends a coded video file containing a plurality of gray image frames to a receiving end;
the receiving end receives a coded video file sent by a sending end, the coded video file comprises a plurality of gray level image frames, and image data in each gray level image frame is gray level image data;
and the receiving end inputs the coded video file into a preset video coloring model, and the video coloring model is utilized to color gray image data in a plurality of gray image frames in the coded video file to obtain a decoded video file containing a plurality of color image frames.
Optionally, the method further comprises:
the receiving end acquires its current device state parameters; if the device state parameters satisfy a preset parameter condition, it sends the sending end a decoloring instruction informing the sending end to convert the color image data in the original video file into grayscale image data;
the sending end acquires its current device state parameters; and if those device state parameters satisfy a preset parameter condition and a decoloring instruction sent by the receiving end has been received, it executes the step of converting the color image data of the plurality of color image frames in the original video file into grayscale image data to obtain an encoded video file.
Optionally, the method further comprises:
the receiving end acquires its current device state parameters; inputs the device state parameters into a trigger-condition judgment model and uses that model to determine whether the receiving end satisfies a decoding trigger condition; and if the receiving end satisfies the decoding trigger condition, sends the sending end a decoloring instruction informing the sending end to convert the color image data in the original video file into grayscale image data;
the sending end acquires its current device state parameters; inputs the device state parameters into a preset trigger-condition judgment model and uses that model to determine whether the sending end satisfies an encoding trigger condition; and if the sending end satisfies the encoding trigger condition and a decoloring instruction sent by the receiving end has been received, executes the step of converting the color image data of the plurality of color image frames in the original video file into grayscale image data to obtain an encoded video file.
Optionally, the step of sending the encoded video file to a receiving end by the sending end includes:
adding, to the encoded video file, a coloring instruction informing the receiving end to color the encoded video file, and sending the encoded video file carrying the coloring instruction to the receiving end; or sending the encoded video file to the receiving end and separately sending the receiving end a coloring instruction controlling it to color the encoded video file;
when the receiving end receives the coloring instruction, or receives an encoded video file carrying the coloring instruction, it inputs the encoded video file into a preset video coloring model and colors the encoded video file using the video coloring model to obtain a decoded video file.
In a third aspect, the present application provides an electronic device, including a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete mutual communication through the communication bus;
a memory for storing a computer program;
the processor is used for implementing, when executing the program stored in the memory, the steps of: acquiring an original video file to be transmitted, wherein the original video file comprises a plurality of color image frames, and the image data in each color image frame is color image data; respectively converting the color image data in the plurality of color image frames in the original video file into grayscale image data to obtain a plurality of grayscale image frames; and sending an encoded video file containing the plurality of grayscale image frames to a receiving end;
or receiving an encoded video file sent by a sending end, wherein the encoded video file comprises a plurality of gray level image frames, and image data in each gray level image frame is gray level image data; and inputting the coded video file into a preset video coloring model, and coloring the gray image data in a plurality of gray image frames in the coded video file by using the video coloring model to obtain a decoded video file containing a plurality of color image frames.
In a fourth aspect, the present application provides a computer-readable storage medium on which a computer program is stored; when executed by a processor, the computer program implements the steps of: obtaining an original video file to be transmitted, wherein the original video file includes a plurality of color image frames, and the image data in each color image frame is color image data; respectively converting the color image data in the plurality of color image frames in the original video file into grayscale image data to obtain a plurality of grayscale image frames; and transmitting an encoded video file containing the plurality of grayscale image frames to a receiving end;
or receiving an encoded video file sent by a sending end, wherein the encoded video file comprises a plurality of gray level image frames, and image data in each gray level image frame is gray level image data; inputting the coding video file into a preset video coloring model, and coloring the gray image data in a plurality of gray image frames in the coding video file by using the video coloring model to obtain a decoding video file containing a plurality of color image frames.
Compared with the prior art, the technical scheme provided by the embodiment of the application has the following advantages:
the method comprises the steps of obtaining an original video file to be transmitted, respectively converting color image data in a plurality of color image frames in the original video file into gray image data to obtain a plurality of gray image frames, and then sending a coded video file containing the gray image frames to a receiving end; a receiving end receives a coded video file, inputs the coded video file into a preset video coloring model, and finally can utilize the video coloring model to color gray image data in a plurality of gray image frames in the coded video file to obtain a decoded video file containing a plurality of color image frames.
According to the embodiment of the application, the color image data in the original video file can be converted into the gray image data, so that the volume of the video file is greatly reduced, the requirement on network bandwidth in the transmission process of the video file is reduced, and the size of a transmission code stream is reduced; the gray image data in the encoded video file received from the sending end can be restored into color image data, the size of the video file is greatly reduced by converting a color original video file into a black and white encoded video file, the requirement on network bandwidth in the transmission process of the video file is reduced, the size of a transmission code stream is reduced, meanwhile, the black and white encoded video file can be restored into a color decoded video file at the receiving end, a user can be ensured to still watch the color decoded video file, the effect of watching the video is not influenced while the transmission code stream is reduced, and the encoding efficiency is improved.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious for those skilled in the art that other drawings can be obtained according to the drawings without inventive exercise.
Fig. 1 is a flowchart of a video processing method according to an embodiment of the present application;
fig. 2 is another flowchart of a video processing method according to an embodiment of the present application;
fig. 3 is another flowchart of a video processing method according to an embodiment of the present application;
fig. 4 is a block diagram of a video processing system according to an embodiment of the present application;
fig. 5 is a schematic structural diagram of an electronic device according to an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Because the existing bandwidth resources are limited, if the video resources occupy more bandwidth resources, other services using the bandwidth resources cannot be normally performed, which brings inconvenience to users; in addition, in an unstable network environment, the problem of low transmission speed and high packet loss rate may occur when video resources occupying more bandwidth resources are transmitted, which brings inconvenience to users. To this end, an embodiment of the present application provides a video processing method, as shown in fig. 1, the method includes:
step S101, a sending end obtains an original video file to be transmitted.
In the embodiment of the invention, the original video file comprises a plurality of color image frames, and the image data in each color image frame is color image data;
the sending end may refer to a terminal device, for example: a video acquisition terminal, such as a mobile phone, a camera or a computer with a video acquisition function; or a video providing terminal, such as a server providing video resources to a client.
In the embodiment of the present invention, the original video file may refer to a video file to be transmitted to the receiving end by the transmitting end.
Step S102, the sending end respectively converts color image data in a plurality of color image frames in the original video file into gray image data to obtain a plurality of gray image frames;
in this step, a plurality of color image frames in the original video file may be first extracted, and the color image data in each color image frame is converted into grayscale image data, so that a plurality of grayscale image frames may be obtained.
Specifically, the color coding format of the original video file may be determined first, for example by reading the attribute information of the original video file. The description below mainly covers the YUV and RGB formats; in practical applications, if the original video file is in some other format, it may first be converted into YUV or RGB and then processed as described below, or, according to the characteristics of that format, only the luminance-channel data may be retained and the color-channel data discarded.
And if the color coding format of the original video file is a YUV format, extracting Y-channel data of color image data in a plurality of color image frames in the original video file to obtain a plurality of gray image frames.
The YUV format is a standard color representation in which Y carries the luminance while U and V carry the chrominance (the blue-difference and red-difference signals, respectively); without the U and V signals the video becomes black and white. Therefore, in the embodiment of the present invention, the Y-channel data of the color image data in the plurality of color image frames in the original video file may be extracted to obtain a plurality of grayscale image frames.
Illustratively, if the Y-channel data of a certain pixel in an image frame is 123, its U-channel data is 158, and its V-channel data is 232, only the Y-channel value 123 is retained for that pixel. Applying the same rule to every pixel of every image frame in the original video file converts the file from color to black and white very simply and efficiently, without heavy computation, saving system resources.
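Illustratively, keeping only the luminance channel of a planar YUV420 frame, as described above, can be sketched as follows (the frame layout and helper name are assumptions for illustration):

```python
import numpy as np

def extract_y_plane(yuv420, width, height):
    """Keep only the luminance (Y) plane of a planar YUV420 frame.

    In planar YUV420 the Y plane holds width*height bytes, followed by
    the subsampled U and V planes (width*height//4 bytes each); dropping
    the U/V planes leaves a grayscale frame, as described above.
    """
    y_size = width * height
    return yuv420[:y_size].reshape(height, width)

# Example: a tiny 4x2 frame; the Y plane is 8 bytes, U and V 2 bytes each.
frame = np.arange(12, dtype=np.uint8)
gray = extract_y_plane(frame, width=4, height=2)
```

Because the U/V planes are simply not copied, the operation is a cheap slice rather than a per-pixel computation.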
If the color coding format of the original video file is the RGB format, the pixel values of the color space in each image frame of the original video file are respectively converted into pixel values of the grayscale space according to a preset RGB-to-grayscale conversion relationship, so as to obtain a plurality of grayscale image frames.
In the RGB format, R represents red, G represents green, and B represents blue; it is the basic three-primary-color representation, in which any color can be mixed from the three primaries in different proportions. To convert an RGB image in the original video file into a grayscale image, the pixel values of the color space in each image frame must therefore be converted into pixel values of the grayscale space according to an RGB-to-grayscale conversion relationship, yielding a plurality of grayscale image frames.
For example, if the R-channel data of a certain pixel in a certain image frame is 112, the G-channel data is 183, and the B-channel data is 222, the RGB-to-grayscale conversion relationship may be Gray = R × 0.299 + G × 0.587 + B × 0.114, giving Gray ≈ 166 for this pixel. In this way an RGB-format original video file can be converted directly into black and white, which is simple and efficient. In practical applications, the conversion relationship may also be an integer approximation such as Gray = (R × 38 + G × 75 + B × 15) >> 7, and so on.
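Illustratively, the shift-based integer conversion above can be implemented as follows; the weights 38/128, 75/128, and 15/128 approximate the standard luma coefficients 0.299, 0.587, and 0.114:

```python
def rgb_to_gray(r, g, b):
    # Integer approximation of Gray = 0.299*R + 0.587*G + 0.114*B,
    # using weights 38/128, 75/128, 15/128 with the divide done as a shift.
    return (r * 38 + g * 75 + b * 15) >> 7

gray = rgb_to_gray(112, 183, 222)  # the example pixel above -> 166
```

Since 38 + 75 + 15 = 128, white maps to white and the result always stays within 0–255, so no clamping is needed.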
By retaining only the Y-channel data (for YUV-format files) or applying the grayscale conversion (for RGB-format files), the embodiment of the invention obtains a plurality of grayscale image frames from the color image frames of the original video file, greatly reducing the size of the video file, lowering the network-bandwidth requirement during transmission, and shrinking the transmitted code stream.
Step S103, the sending end sends an encoded video file containing a plurality of gray image frames to a receiving end;
step S104, the receiving end receives the coded video file sent by the sending end;
in the embodiment of the invention, the coded video file comprises a plurality of gray image frames, and the image data in each gray image frame is gray image data;
step S105, the receiving end inputs the coded video file into a preset video coloring model, and the video coloring model is utilized to color gray image data in a plurality of gray image frames in the coded video file, so as to obtain a decoded video file containing a plurality of color image frames.
In the embodiment of the present invention, the video coloring model is used to color the encoded video file. For example, the video coloring model may be a generative adversarial network (GAN) with a self-attention mechanism, trained with a two time-scale update rule, such as the DeOldify algorithm for colorizing digital images or video. In practical applications, other models capable of coloring a video file may also be used; the above is illustrative only and does not limit the scope of the present invention.
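Illustratively, the coloring step can be sketched as follows. Here `predict_chroma` is a hypothetical placeholder for the trained video coloring model (a real system would run a network such as a DeOldify-style GAN); the sketch only shows how predicted chrominance planes are recombined with the received luminance plane:

```python
import numpy as np

def predict_chroma(y_plane):
    """Hypothetical stand-in for the video coloring model. A real system
    would run a trained network here; this placeholder returns neutral
    chroma (128), i.e. the frame stays gray."""
    neutral = np.full(y_plane.shape, 128, dtype=np.uint8)
    return neutral, neutral

def colorize_frame(y_plane):
    """Recombine the received Y plane with predicted U/V planes into a
    (height, width, 3) YUV color frame."""
    u, v = predict_chroma(y_plane)
    return np.stack([y_plane, u, v], axis=-1)

gray_frame = np.zeros((4, 4), dtype=np.uint8)
color_frame = colorize_frame(gray_frame)
```

In a real decoder the predicted U/V planes would come from the model, and the resulting YUV frame would then be converted to RGB for display.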
The sending end obtains an original video file to be transmitted, converts the color image data in a plurality of color image frames in the original video file into grayscale image data to obtain a plurality of grayscale image frames, and then sends an encoded video file containing the grayscale image frames to a receiving end; the receiving end receives the encoded video file, inputs it into a preset video coloring model, and uses the model to color the grayscale image data in the grayscale image frames, obtaining a decoded video file containing a plurality of color image frames.
In the embodiments of the application, converting the color image data in the original video file into grayscale image data greatly reduces the size of the video file, lowers the network-bandwidth requirement during transmission, and shrinks the transmitted code stream. At the receiving end, the grayscale image data in the encoded video file received from the sending end is restored to color image data. Converting a color original video file into a black-and-white encoded video file thus reduces file size, bandwidth demand, and code-stream size, while the receiving end restores the black-and-white encoded video file to a color decoded video file, so the user still watches a color video: the transmitted code stream is reduced without affecting the viewing experience, and coding efficiency is improved.
In practical applications, the sending end's own device state may sometimes make it unsuitable to decolor the original video file. For example: if the network bandwidth is too low, the decolored encoded video file may not reach the receiving end; if the CPU occupancy is too high, the decoloring process may be too slow; if the remaining battery power is too low, the decoloring process may not complete; and if the device is moving too fast, the decolored encoded video file may likewise fail to reach the receiving end. For the receiving end, once the original video file has been decolored at the sending end, if the receiving end's own device state is unsuitable for coloring the encoded video file, the black-and-white encoded video file cannot be restored to color, which affects the user's viewing. To avoid both situations, in a further embodiment of the present invention, as shown in fig. 2, the method further comprises:
step S201, the receiving end obtains the current equipment state parameter of the receiving end; if the equipment state parameters meet preset parameter conditions, sending a decolorizing instruction for informing the sending end of converting color image data in the original video file into gray image data to the sending end;
in the embodiment of the present invention, the device state parameters acquired by the receiving end include: network bandwidth utilization rate, CPU occupancy rate, residual electric quantity and moving speed;
the condition that the preset parameter is satisfied in the embodiment of the present invention may mean that one or more of the network bandwidth utilization rate, the CPU occupancy, the remaining power, the moving speed, and the like are less than the corresponding preset threshold. The preset parameter condition can be used for measuring the equipment state of the receiving end, if the equipment state parameter of the receiving end meets the preset parameter condition, that is: the device status can color the encoded video file, at which time a decolouration instruction can be sent to the sender.
If the device state parameters of the receiving end do not satisfy the preset parameter condition, that is, the device status of the receiving end does not allow it to color the encoded video file, the receiving end sends to the sending end a decoloring cancellation instruction for notifying the sending end to cancel converting the color image data in the original video file into gray image data.
Step S202, the sending end obtains the current equipment state parameter of the sending end; and if the equipment state parameters meet preset parameter conditions and a decoloring instruction sent by a receiving end is received, executing the step of converting the color image data of a plurality of color image frames in the original video file into gray image data to obtain a coded video file.
In this embodiment of the present invention, the device state parameters of the sending end may include: network bandwidth utilization rate, CPU occupancy rate, remaining power, moving speed and the like.
The device state parameters of the sending end satisfying the preset parameter condition may mean that one or more of the network bandwidth utilization rate, the CPU occupancy rate and the moving speed are less than their corresponding preset thresholds, and/or that the remaining power is greater than its corresponding preset threshold.
For example, since transmitting the encoded video file that has been converted to black and white over the network occupies a certain amount of network bandwidth, it is necessary, before converting the video from color to black and white, to determine whether the current network bandwidth utilization rate of the sending end is less than the corresponding preset threshold (e.g., a network bandwidth utilization rate threshold). If it is, it can be considered that converting the video to black and white at this time and transmitting the resulting encoded video file will not affect other services at the sending end that use the network;
similarly, since converting the original video file from color to black and white occupies a certain amount of CPU resources, it is necessary, before the conversion, to determine whether the current CPU occupancy rate of the sending end is less than the corresponding preset threshold (e.g., a CPU occupancy rate threshold). If it is, it can be considered that the conversion will not affect other services at the sending end that use CPU resources;
similarly, since converting the original video file from color to black and white consumes a certain amount of power, it is necessary, before the conversion, to determine whether the current remaining power of the sending end is greater than the corresponding preset threshold (e.g., a remaining power threshold). If it is, it can be considered that the conversion at this time will not affect other services at the sending end that consume power.
Because transmitting the encoded video file converted to black and white over the network requires a certain network transmission rate, if the sending end is moving at a high speed (e.g., on a high-speed train or on a subway), the transmission of the encoded video file may be affected. Therefore, before the sending end converts the video from color to black and white, it needs to determine whether its current moving speed is less than the corresponding preset threshold (e.g., a moving speed threshold). If it is, it can be considered that if the video is converted from color to black and white at this time, the resulting black-and-white encoded video file can be transmitted normally to the receiving end.
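The four checks above can be sketched as a single predicate. The threshold values and field names here are illustrative assumptions, not values fixed by the patent, and the sketch requires all four checks to pass, whereas the text also allows checking only a subset; note that the remaining-power check runs in the opposite direction from the other three:

```python
# Illustrative thresholds (assumptions; the patent leaves the values open).
THRESHOLDS = {
    "bandwidth_util": 0.7,   # fraction of bandwidth in use, must be below
    "cpu_util": 0.8,         # CPU occupancy rate, must be below
    "speed_kmh": 80.0,       # moving speed, must be below
    "battery": 0.2,          # remaining power, must be ABOVE
}

def suitable_for_decoloring(params):
    """True if the sending end's current state allows color-to-gray conversion."""
    return (params["bandwidth_util"] < THRESHOLDS["bandwidth_util"]
            and params["cpu_util"] < THRESHOLDS["cpu_util"]
            and params["speed_kmh"] < THRESHOLDS["speed_kmh"]
            and params["battery"] > THRESHOLDS["battery"])

assert suitable_for_decoloring(
    {"bandwidth_util": 0.3, "cpu_util": 0.5, "speed_kmh": 5.0, "battery": 0.9})
# A sending end on a high-speed train fails the moving-speed check.
assert not suitable_for_decoloring(
    {"bandwidth_util": 0.3, "cpu_util": 0.5, "speed_kmh": 250.0, "battery": 0.9})
```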
The above determines whether to convert the video from color to black and white according to the sending end's current network bandwidth utilization rate, CPU occupancy rate, remaining power and moving speed, so that the conversion can be controlled automatically according to the running condition of the sending end. In practical applications, variables other than these four may also be set, such as a manual user setting of whether to convert the video from color to black and white, in order to determine whether the sending end is suitable for the conversion.
Based on the foregoing, in an implementation manner of the embodiment of the present invention, different priorities may also be set for the device state parameters, for example: the network bandwidth utilization rate corresponds to the highest priority, the CPU occupancy rate to a high priority, the moving speed to a medium priority, and the remaining power to a low priority. When determining whether the device state parameters satisfy the preset parameter condition, whether the network bandwidth utilization rate with the highest priority satisfies the condition can then be checked first; if it does, the color image data in the plurality of color image frames in the original video file can be directly converted into gray image data respectively, obtaining a plurality of gray image frames.
Based on the foregoing, in another implementation manner of the embodiment of the present invention, different weights may be set for the device state parameters corresponding to different priorities, a high priority corresponding to a large weight and a low priority to a small weight. For example, with the weight of the network bandwidth utilization rate being 0.6, the weight of the CPU occupancy rate 0.5, the weight of the moving speed 0.4 and the weight of the remaining power 0.3, the score (network bandwidth utilization rate × 0.6) + (CPU occupancy rate × 0.5) + (moving speed × 0.4) + (remaining power × 0.3) is calculated and compared with a preset threshold. If the result is greater than the preset threshold, the color image data in the plurality of color image frames in the original video file can be converted into gray image data respectively, obtaining a plurality of gray image frames.
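The weighted combination described above can be sketched as follows. The weights come from the example in the text; the assumption that each parameter is pre-normalized to [0, 1] and the decision threshold value are illustrative:

```python
# Weights from the example above: higher priority -> larger weight.
WEIGHTS = {"bandwidth_util": 0.6, "cpu_util": 0.5, "speed": 0.4, "battery": 0.3}

def weighted_score(params):
    """Sum of weight * value; each value is assumed normalized to [0, 1]."""
    return sum(WEIGHTS[name] * value for name, value in params.items())

score = weighted_score(
    {"bandwidth_util": 0.2, "cpu_util": 0.4, "speed": 0.1, "battery": 0.9})
# 0.6*0.2 + 0.5*0.4 + 0.4*0.1 + 0.3*0.9 = 0.63
assert abs(score - 0.63) < 1e-9
# Decoloring proceeds when the score exceeds a preset threshold (value assumed).
assert score > 0.5
```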
The embodiment of the invention can automatically control, through the decoloring instruction or the decoloring cancellation instruction issued according to the device state parameters of the receiving end, whether the sending end converts the original video file from color to black and white, realizing coordination of the color conversion between the receiving end and the sending end and being convenient for users. Moreover, the color image data in the plurality of color image frames in the original video file are converted into gray image data only when the device state parameters of the sending end satisfy the preset parameter condition and a decoloring instruction sent by the receiving end has been received, which ensures that the sending end is in a suitable working state, and in turn that the decoloring of the original video file and the transmission of the encoded video file to the receiving end can complete.
In yet another embodiment of the present invention, as shown in fig. 3, the method further comprises:
step S301, the receiving end acquires the current equipment state parameter of the receiving end; inputting the equipment state parameters into a trigger condition judgment model, and determining whether a receiving end meets a trigger decoding condition by using the trigger condition judgment model; if the receiving end meets the trigger decoding condition, a decoloring instruction for informing the sending end of converting color image data in the original video file into gray image data is sent to the sending end;
in this embodiment of the present invention, the device state parameters include: network bandwidth utilization rate, CPU occupancy rate, remaining power and moving speed;
in the embodiment of the application, the trigger condition judgment model may be a deep learning prediction model trained to convergence in advance. Its inputs may be the network bandwidth utilization rate, CPU occupancy rate, remaining power, moving speed and the like, and its output is a trigger/no-trigger result. If the output of the trigger condition judgment model is "trigger", it can be determined that the receiving end satisfies the trigger decoding condition; if the output is "no trigger", it can be determined that the receiving end does not satisfy the trigger decoding condition.
And if the receiving end does not satisfy the trigger decoding condition, a decoloring cancellation instruction for notifying the sending end to cancel converting the color image data in the original video file into gray image data is sent to the sending end.
Step S302, the sending end obtains the current equipment state parameter of the sending end; inputting the equipment state parameters into a preset trigger condition judgment model, and determining whether a sending end meets a trigger coding condition or not by using the trigger condition judgment model; and if the sending end meets the triggering coding condition and receives a decolorizing instruction sent by a receiving end, executing the step of converting the color image data of a plurality of color image frames in the original video file into gray image data to obtain a coded video file.
In this embodiment of the present invention, the device state parameters include: network bandwidth utilization rate, CPU occupancy rate, remaining power and moving speed.
In the embodiment of the application, the trigger condition judgment model may be a deep learning prediction model trained to convergence in advance. Its inputs may be the network bandwidth utilization rate, CPU occupancy rate, remaining power, moving speed and the like, and its output is a trigger/no-trigger result. If the output of the trigger condition judgment model is "trigger", it can be determined that the sending end satisfies the trigger coding condition; if the output is "no trigger", it can be determined that the sending end does not satisfy the trigger coding condition.
If the sending end does not satisfy the trigger coding condition, or a decoloring cancellation instruction sent by the receiving end is received, the color image data of the plurality of color image frames in the original video file are not converted into gray image data, and a cancel-coloring instruction for canceling the coloring of the encoded video file is sent to the receiving end.
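As a minimal stand-in for the deep learning prediction model described above (the real model would be trained to convergence on device-state data; the logistic form, coefficients and bias below are pure placeholders):

```python
import math

# Placeholder coefficients; a real trigger-condition model would learn these.
COEF = {"bandwidth_util": -3.0, "cpu_util": -2.0, "speed": -1.0, "battery": 2.0}
BIAS = 1.5

def trigger(params):
    """Return 'trigger' or 'no trigger' from device-state inputs in [0, 1]."""
    z = BIAS + sum(COEF[k] * v for k, v in params.items())
    prob = 1.0 / (1.0 + math.exp(-z))  # logistic output in (0, 1)
    return "trigger" if prob >= 0.5 else "no trigger"

# Idle, well-charged device -> conversion is triggered.
assert trigger({"bandwidth_util": 0.1, "cpu_util": 0.2, "speed": 0.0,
                "battery": 0.9}) == "trigger"
# Busy, fast-moving, low-battery device -> conversion is not triggered.
assert trigger({"bandwidth_util": 0.9, "cpu_util": 0.9, "speed": 0.8,
                "battery": 0.1}) == "no trigger"
```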
The embodiment of the invention can automatically control, through the decoloring instruction or the decoloring cancellation instruction issued according to the device state parameters of the receiving end, whether the sending end converts the original video file from color to black and white, realizing coordination of the color conversion between the receiving end and the sending end and being convenient for users. Moreover, the color image data in the plurality of color image frames in the original video file are converted into gray image data only when the sending end satisfies the trigger coding condition and a decoloring instruction sent by the receiving end has been received, which ensures that the sending end is in a suitable working state, and in turn that the decoloring of the original video file and the transmission of the encoded video file to the receiving end can complete.
In another embodiment of the present invention, the step of sending the encoded video file to a receiving end by the sending end includes:
adding a coloring instruction for informing a receiving end to color the coded video file into the coded video file, and sending the coded video file carrying the coloring instruction to the receiving end; or sending the coded video file to the receiving end, and sending a coloring instruction for controlling the receiving end to color the coded video file to the receiving end;
according to the embodiment of the invention, by adding the coloring instruction to the encoded video file or sending it to the receiving end separately, the receiving end can quickly know whether a received encoded video file is a decolored one, without having to process the video file first to find out whether it needs to be colored, which conveniently improves the efficiency of coloring the encoded video file at the receiving end.
And when the receiving end receives the coloring instruction or receives a coded video file carrying the coloring instruction, the receiving end inputs the coded video file into a preset video coloring model, and the coded video file is colored by using the video coloring model to obtain a decoded video file.
And if a cancel-coloring instruction for canceling the coloring of the encoded video file is received, the step of inputting the encoded video file into the preset video coloring model and coloring it with the video coloring model to obtain a decoded video file is not executed.
Whether the coloring instruction is added to the encoded video file or sent to the receiving end separately, the receiving end can start coloring the encoded video file upon receiving the coloring instruction, realizing coordination of the color conversion between the receiving end and the sending end and being convenient for users.
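The first delivery option, embedding the coloring instruction in the encoded file itself, can be sketched as follows (the message layout of a one-byte flag prepended to the file payload is an assumption for illustration, not a format defined by the patent):

```python
COLOR_ME = 0x01   # coloring instruction: receiver should run the coloring model
AS_IS = 0x00      # no instruction: play the file without coloring

def pack(video_bytes, needs_coloring):
    """Sender: embed the coloring instruction in the encoded file itself."""
    flag = COLOR_ME if needs_coloring else AS_IS
    return bytes([flag]) + video_bytes

def receive(message):
    """Receiver: check the flag before deciding whether to colorize."""
    return message[0] == COLOR_ME, message[1:]

needs_color, payload = receive(pack(b"\x00\x11\x22", needs_coloring=True))
assert needs_color and payload == b"\x00\x11\x22"
```

The second option would send the same flag as a separate control message; either way, the receiver inspects the flag rather than the video content.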
In still another embodiment of the present invention, there is also provided a video processing system, as shown in fig. 4, including: a transmitting end 11 and a receiving end 12;
the sending end 11 is configured to obtain an original video file to be transmitted, where the original video file includes a plurality of color image frames, and image data in each color image frame is color image data; respectively converting color image data in a plurality of color image frames in the original video file into gray image data to obtain a plurality of gray image frames; sending an encoded video file containing a plurality of gray image frames to a receiving end;
the receiving end 12 is configured to receive an encoded video file sent by a sending end, where the encoded video file includes a plurality of grayscale image frames, and image data in each grayscale image frame is grayscale image data; and inputting the coded video file into a preset video coloring model, and coloring the gray image data in a plurality of gray image frames in the coded video file by using the video coloring model to obtain a decoded video file containing a plurality of color image frames.
In another embodiment of the present invention, the receiving end is further configured to: acquiring the current equipment state parameter of the receiving end; if the equipment state parameters meet preset parameter conditions, sending a decolorizing instruction for informing the sending end of converting color image data in the original video file into gray image data to the sending end;
the sending end is further configured to obtain a current device state parameter of the sending end, and if the device state parameter meets a preset parameter condition and a decoloring instruction sent by a receiving end is received, convert color image data of a plurality of color image frames in the original video file into gray image data to obtain an encoded video file.
In another embodiment of the present invention, the receiving end is further configured to: acquiring the current equipment state parameter of the receiving end; inputting the equipment state parameters into a trigger condition judgment model, and determining whether a receiving end meets a trigger decoding condition by using the trigger condition judgment model; if the receiving end meets the trigger decoding condition, a decoloring instruction for informing the sending end of converting color image data in the original video file into gray image data is sent to the sending end;
the sending end is also used for obtaining the current equipment state parameter of the sending end; inputting the equipment state parameters into a preset trigger condition judgment model, and determining whether a sending end meets a trigger coding condition or not by using the trigger condition judgment model; and if the sending end meets the triggering coding condition and receives a decolorizing instruction sent by a receiving end, converting the color image data of a plurality of color image frames in the original video file into gray image data to obtain a coded video file.
In another embodiment of the present invention, the receiving end is further configured to: receiving a video coloring model sent by a server; retraining the video coloring model by using the coded video file received from the sending end and the decoded video file corresponding to the coded video file at preset intervals; inputting a test gray image for testing the video coloring model into the video coloring model; and if the similarity between the output image of the video coloring model and the test color image corresponding to the test gray image exceeds a preset similarity threshold, finishing the model training.
In the embodiment of the invention, the receiving end can receive the trained video coloring model from the server end.
The server may train the video coloring model in advance, for example: the coloring model may be trained using a generative adversarial network (GAN), and may be the DeOldify algorithm, which is based on an attention mechanism and two time-scale updating.
The server side can train the video coloring model by utilizing a plurality of gray level image samples and the color image labels corresponding to the gray level image samples; inputting a test gray level image into the video rendering model; and if the similarity between the output image of the video coloring model and the test color image corresponding to the test gray image exceeds a preset similarity threshold, finishing the model training.
In the embodiment of the present invention, the grayscale image sample may refer to a black and white image selected as a training sample, and the color image label may refer to a color image corresponding to the training sample or image information of the color image. In practical application, a color image can be converted into a black-and-white image, the black-and-white image is used as a gray image sample, and the color image is used as a color image label.
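Building training pairs as described, decoloring a color image to obtain the gray sample while keeping the color original as its label, can be sketched as follows (the pixel layout and helper names are assumptions):

```python
def to_gray(pixel):
    r, g, b = pixel
    return int(0.299 * r + 0.587 * g + 0.114 * b)  # assumed luma formula

def make_training_pair(color_image):
    """color_image: rows of (r, g, b) tuples -> (gray sample, color label)."""
    gray_sample = [[to_gray(p) for p in row] for row in color_image]
    return gray_sample, color_image  # the color original serves as the label

sample, label = make_training_pair([[(255, 0, 0), (0, 0, 255)]])
assert sample == [[76, 29]]
assert label == [[(255, 0, 0), (0, 0, 255)]]
```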
The server may perform compression and inference optimization on the trained video coloring model, for example model quantization based on TensorFlow Lite or TensorRT, and distribute the compressed and optimized video coloring model to each receiving end.
In the embodiment of the present invention, the test gray image should be different from the gray image samples, because the video coloring model has already learned the gray image samples; if a gray image sample is reused for testing, the coloring capability of the video coloring model cannot be accurately reflected. That is, if the trained video coloring model is tested with a gray image sample it was trained on, the test result may be 100% accurate, whereas on a new test gray image the model has not seen, its accuracy may be only 30%. Therefore, the test gray image used should be different from the gray image samples.
In the embodiment of the present invention, in order to ensure that the video coloring model can accurately color the video file, the coloring capability of the video coloring model needs to be tested by using the test gray image after the video coloring model is trained, so as to ensure that the video coloring model can accurately color the encoded video file received by the receiving end.
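The pass/fail test described above can be sketched as a similarity check between the model's output and the held-out ground-truth color image. The per-channel mean-absolute-difference similarity metric and the 0.9 threshold below are assumptions; the patent leaves the metric and threshold open:

```python
def similarity(img_a, img_b):
    """1.0 for identical images; images are rows of (r, g, b) tuples."""
    diffs = [abs(ca - cb)
             for ra, rb in zip(img_a, img_b)
             for pa, pb in zip(ra, rb)
             for ca, cb in zip(pa, pb)]
    return 1.0 - sum(diffs) / (255.0 * len(diffs))

def coloring_model_passes(output_image, test_color_image, threshold=0.9):
    """Training is considered finished when similarity exceeds the threshold."""
    return similarity(output_image, test_color_image) > threshold

truth = [[(200, 100, 50), (10, 20, 30)]]
close = [[(198, 102, 51), (12, 20, 28)]]   # near-perfect colorization
assert coloring_model_passes(close, truth)
assert not coloring_model_passes([[(0, 0, 0), (0, 0, 0)]], truth)
```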
In another embodiment of the present invention, an electronic device is further provided, which includes a processor, a communication interface, a memory, and a communication bus, where the processor, the communication interface, and the memory complete communication with each other through the communication bus;
a memory for storing a computer program;
and the processor is used for realizing the video processing method applied to the sending end or the video processing method applied to the receiving end when executing the program stored in the memory.
In the electronic device provided by the embodiment of the invention, by executing the program stored in the memory, the processor acquires an original video file to be transmitted, the original video file comprising a plurality of color image frames, the image data in each color image frame being color image data; converts the color image data of the plurality of color image frames in the original video file into gray image data to obtain an encoded video file; and sends the encoded video file to the receiving end. The color image data in the original video file can thus be converted into gray image data, greatly reducing the volume of the video file, lowering the demand on network bandwidth during transmission, and reducing the size of the transmission code stream;
or, by executing the program stored in the memory, an encoded video file sent by the sending end is received, the encoded video file comprising a plurality of gray image frames, the image data in each gray image frame being gray image data; the encoded video file is input into a preset video coloring model and colored using the video coloring model to obtain a decoded video file. This ensures that the gray image data in the encoded video file received from the sending end can be restored to color image data. By converting a color original video file into a black-and-white encoded video file, the volume of the video file is greatly reduced, the demand on network bandwidth during transmission is lowered, and the transmission code stream is made smaller; at the same time, the black-and-white encoded video file can be restored to a color decoded video file at the receiving end, ensuring that the user can still watch the video in color, so that the transmission code stream is reduced without affecting the viewing experience, and the encoding efficiency is improved.
The communication bus 1140 mentioned in the above electronic device may be a Peripheral Component Interconnect (PCI) bus, an Extended Industry Standard Architecture (EISA) bus, or the like. The communication bus 1140 may be divided into an address bus, a data bus, a control bus, and the like. For ease of illustration, only one thick line is shown in FIG. 5, but this is not intended to represent only one bus or type of bus.
The communication interface 1120 is used for communication between the electronic device and other devices.
The Memory 1130 may include a Random Access Memory (RAM) or a non-volatile Memory (non-volatile Memory), such as at least one disk Memory. Optionally, the memory may also be at least one memory device located remotely from the processor.
The processor 1110 may be a general-purpose processor, including a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), a Network Processor (NP) and the like; it may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or discrete hardware components.
In still another embodiment of the present invention, there is also provided a computer-readable storage medium having stored thereon a program of a video processing method, which when executed by a processor, implements the steps of the video processing method described in the method embodiment applied to the transmitting end or the video processing method described in the method embodiment applied to the receiving end.
It is noted that, in this document, relational terms such as "first" and "second," and the like, may be used solely to distinguish one entity or action from another entity or action without necessarily requiring or implying any actual such relationship or order between such entities or actions. Also, the terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other identical elements in a process, method, article, or apparatus that comprises the element.
The foregoing are merely exemplary embodiments of the present invention, which enable those skilled in the art to understand or practice the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A video processing system, comprising: a sending end and a receiving end;
the transmitting end is used for acquiring an original video file to be transmitted, wherein the original video file comprises a plurality of color image frames, and image data in each color image frame is color image data; respectively converting color image data in a plurality of color image frames in the original video file into gray image data to obtain a plurality of gray image frames; sending an encoded video file containing a plurality of gray image frames to a receiving end;
the receiving end is used for receiving the coded video file sent by the sending end, the coded video file comprises a plurality of gray level image frames, and the image data in each gray level image frame is gray level image data; and inputting the coded video file into a preset video coloring model, and coloring the gray image data in a plurality of gray image frames in the coded video file by using the video coloring model to obtain a decoded video file containing a plurality of color image frames.
2. The video processing system of claim 1,
the receiving end is further configured to: acquiring the current equipment state parameter of the receiving end; if the equipment state parameters meet preset parameter conditions, sending a decolorizing instruction for informing the sending end of converting color image data in the original video file into gray image data to the sending end;
the sending end is further configured to obtain a current device state parameter of the sending end, and if the device state parameter meets a preset parameter condition and a decoloring instruction sent by a receiving end is received, convert color image data of a plurality of color image frames in the original video file into gray image data to obtain an encoded video file.
3. The video processing system of claim 1,
the receiving end is further configured to: acquiring the current equipment state parameter of the receiving end; inputting the equipment state parameters into a trigger condition judgment model, and determining whether a receiving end meets a trigger decoding condition by using the trigger condition judgment model; if the receiving end meets the trigger decoding condition, a decoloring instruction for informing the sending end of converting color image data in the original video file into gray image data is sent to the sending end;
the sending end is also used for obtaining the current equipment state parameter of the sending end; inputting the equipment state parameters into a preset trigger condition judgment model, and determining whether a sending end meets a trigger coding condition or not by using the trigger condition judgment model; and if the sending end meets the triggering coding condition and receives a decolorizing instruction sent by a receiving end, converting the color image data of a plurality of color image frames in the original video file into gray image data to obtain a coded video file.
4. The video processing system of claim 1,
the receiving end is further configured to: receiving a video coloring model sent by a server; retraining the video coloring model by using the coded video file received from the sending end and the decoded video file corresponding to the coded video file at preset intervals; inputting a test gray image for testing the video coloring model into the video coloring model; and if the similarity between the output image of the video coloring model and the test color image corresponding to the test gray image exceeds a preset similarity threshold, finishing the model training.
5. A method of video processing, the method comprising:
the method comprises the steps that a sending end obtains an original video file to be transmitted, wherein the original video file comprises a plurality of color image frames, and image data in each color image frame is color image data;
the sending end respectively converts color image data in a plurality of color image frames in the original video file into gray image data to obtain a plurality of gray image frames;
the sending end sends a coded video file containing a plurality of gray image frames to a receiving end;
the receiving end receives a coded video file sent by a sending end, the coded video file comprises a plurality of gray level image frames, and image data in each gray level image frame is gray level image data;
and the receiving end inputs the coded video file into a preset video coloring model, and the video coloring model is utilized to color gray image data in a plurality of gray image frames in the coded video file to obtain a decoded video file containing a plurality of color image frames.
6. The video processing method of claim 5, wherein the method further comprises:
the receiving end acquires the current device state parameters of the receiving end; and if the device state parameters satisfy a preset parameter condition, sends, to the sending end, a decoloring instruction for instructing the sending end to convert the color image data in the original video file into grayscale image data;
the sending end acquires the current device state parameters of the sending end; and if the device state parameters satisfy the preset parameter condition and the decoloring instruction sent by the receiving end is received, performs the step of converting the color image data of the plurality of color image frames in the original video file into grayscale image data to obtain an encoded video file.
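Claim 6's "preset parameter condition" can be sketched as a simple threshold test. The patent does not enumerate which device state parameters are checked, so available bandwidth and CPU load are assumed below purely for illustration:

```python
def should_decolor(bandwidth_kbps: float, cpu_load: float,
                   min_bandwidth_kbps: float = 2000.0,
                   max_cpu_load: float = 0.8) -> bool:
    """Hypothetical preset parameter condition: request grayscale
    transmission (send a decoloring instruction) when the link is too
    slow or the device is too busy. The parameter names and thresholds
    are assumptions, not part of the patent."""
    return bandwidth_kbps < min_bandwidth_kbps or cpu_load > max_cpu_load
```

Both ends would evaluate their own condition independently; the sending end only encodes once its own condition holds and the receiving end's decoloring instruction has arrived.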
7. The video processing method of claim 5, wherein the method further comprises:
the receiving end acquires the current device state parameters of the receiving end; inputs the device state parameters into a trigger condition judgment model, and determines, by using the trigger condition judgment model, whether the receiving end satisfies a trigger decoding condition; and if the receiving end satisfies the trigger decoding condition, sends, to the sending end, a decoloring instruction for instructing the sending end to convert the color image data in the original video file into grayscale image data;
the sending end acquires the current device state parameters of the sending end; inputs the device state parameters into a preset trigger condition judgment model, and determines, by using the trigger condition judgment model, whether the sending end satisfies a trigger encoding condition; and if the sending end satisfies the trigger encoding condition and the decoloring instruction sent by the receiving end is received, performs the step of converting the color image data of the plurality of color image frames in the original video file into grayscale image data to obtain an encoded video file.
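Unlike claim 6's fixed threshold, claim 7 feeds the device state parameters into a learned "trigger condition judgment model". The patent does not specify the model type; a minimal sketch as a logistic classifier over a device-state feature vector, with entirely hypothetical weights, might look like:

```python
import math

# Hypothetical per-feature weights for [bandwidth_kbps, cpu_load];
# a real deployment would learn these from device telemetry.
WEIGHTS = [-0.0005, 2.5]
BIAS = 0.5

def meets_trigger_condition(features, threshold: float = 0.5) -> bool:
    """Score the device state with a logistic model and compare the
    resulting probability against a decision threshold."""
    z = BIAS + sum(w * x for w, x in zip(WEIGHTS, features))
    probability = 1.0 / (1.0 + math.exp(-z))
    return probability >= threshold
```

The same scoring function can back both the trigger decoding condition (receiving end) and the trigger encoding condition (sending end), each with its own weights and threshold.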
8. The video processing method according to claim 5, wherein the step of the sending end sending the encoded video file to the receiving end comprises:
adding, to the encoded video file, a coloring instruction for instructing the receiving end to color the encoded video file, and sending the encoded video file carrying the coloring instruction to the receiving end; or sending the encoded video file to the receiving end, and separately sending, to the receiving end, a coloring instruction for controlling the receiving end to color the encoded video file;
and when the receiving end receives the coloring instruction, or receives the encoded video file carrying the coloring instruction, the receiving end inputs the encoded video file into the preset video coloring model, and colors the encoded video file by using the video coloring model to obtain a decoded video file.
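Claim 8's first signalling option (embedding the coloring instruction in the encoded file itself) can be sketched with a tiny header. The wire format below, a 4-byte magic plus a 1-byte flag, is hypothetical; the patent defines no concrete layout:

```python
import struct

MAGIC = b"CVID"          # hypothetical packet marker, not from the patent
COLOR_INSTRUCTION = 0x01  # flag value meaning "colorize this file"

def pack_encoded_file(grayscale_payload: bytes,
                      carry_instruction: bool) -> bytes:
    """Option 1 of claim 8: embed the coloring instruction in the
    encoded video file before sending it to the receiving end."""
    flag = COLOR_INSTRUCTION if carry_instruction else 0x00
    return MAGIC + struct.pack("B", flag) + grayscale_payload

def should_colorize(packet: bytes) -> bool:
    """Receiving end: run the video coloring model only when the
    embedded coloring instruction flag is set."""
    assert packet[:4] == MAGIC, "not an encoded video packet"
    return packet[4] == COLOR_INSTRUCTION
```

The second option, sending the coloring instruction as a separate control message, would carry the same flag out of band instead of in the file header.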
9. An electronic device, characterized by comprising a processor, a communication interface, a memory, and a communication bus, wherein the processor, the communication interface, and the memory communicate with one another through the communication bus;
the memory is configured to store a computer program;
the processor is configured, when executing the program stored in the memory, to acquire an original video file to be transmitted, wherein the original video file comprises a plurality of color image frames, and the image data in each color image frame is color image data; respectively convert the color image data in the plurality of color image frames in the original video file into grayscale image data to obtain a plurality of grayscale image frames; and send an encoded video file containing the plurality of grayscale image frames to a receiving end;
or to receive an encoded video file sent by a sending end, wherein the encoded video file comprises a plurality of grayscale image frames, and the image data in each grayscale image frame is grayscale image data; and input the encoded video file into a preset video coloring model, and color, by using the video coloring model, the grayscale image data in the plurality of grayscale image frames in the encoded video file to obtain a decoded video file containing a plurality of color image frames.
10. A computer-readable storage medium, wherein a computer program is stored on the computer-readable storage medium, and when executed by a processor, the computer program implements the steps of: acquiring an original video file to be transmitted, wherein the original video file comprises a plurality of color image frames, and the image data in each color image frame is color image data; respectively converting the color image data in the plurality of color image frames in the original video file into grayscale image data to obtain a plurality of grayscale image frames; and sending an encoded video file containing the plurality of grayscale image frames to a receiving end;
or the steps of: receiving an encoded video file sent by a sending end, wherein the encoded video file comprises a plurality of grayscale image frames, and the image data in each grayscale image frame is grayscale image data; and inputting the encoded video file into a preset video coloring model, and coloring, by using the video coloring model, the grayscale image data in the plurality of grayscale image frames in the encoded video file to obtain a decoded video file containing a plurality of color image frames.
CN202010306464.0A 2020-04-17 2020-04-17 Video processing method, system, electronic device and storage medium Pending CN111654721A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010306464.0A CN111654721A (en) 2020-04-17 2020-04-17 Video processing method, system, electronic device and storage medium

Publications (1)

Publication Number Publication Date
CN111654721A true CN111654721A (en) 2020-09-11

Family

ID=72346040

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010306464.0A Pending CN111654721A (en) 2020-04-17 2020-04-17 Video processing method, system, electronic device and storage medium

Country Status (1)

Country Link
CN (1) CN111654721A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN1332575A (en) * 2000-07-10 2002-01-23 刘伟 Dynamic digital image transmitting method and device
CN107343200A (en) * 2017-07-06 2017-11-10 中南大学 A kind of multilevel color image compression and decompression method and device based on optical means
CN109754444A (en) * 2018-02-07 2019-05-14 京东方科技集团股份有限公司 Image rendering methods and device
CN110163801A (en) * 2019-05-17 2019-08-23 深圳先进技术研究院 A kind of Image Super-resolution and color method, system and electronic equipment

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112261349A (en) * 2020-10-23 2021-01-22 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment
CN112261349B (en) * 2020-10-23 2023-02-21 北京字节跳动网络技术有限公司 Image processing method and device and electronic equipment
CN113411550A (en) * 2020-10-29 2021-09-17 腾讯科技(深圳)有限公司 Video coloring method, device, equipment and storage medium
CN112887293A (en) * 2021-01-21 2021-06-01 维沃移动通信有限公司 Streaming media processing method and device and electronic equipment
CN114862705A (en) * 2022-04-25 2022-08-05 陕西西影数码传媒科技有限责任公司 Image quality evaluation method for image color restoration
CN114862705B (en) * 2022-04-25 2022-11-25 陕西西影数码传媒科技有限责任公司 Image quality evaluation method for image color restoration

Similar Documents

Publication Publication Date Title
CN111654721A (en) Video processing method, system, electronic device and storage medium
US9699099B2 (en) Method of transmitting data in a communication system
CN108696761B (en) Picture file processing method, device and system
EP1864255B1 (en) Block-based image compression method and apparatus
US20210377542A1 (en) Video encoding and decoding method, device, and system, and storage medium
EP2272237B1 (en) Method of transmitting data in a communication system
CN112929624B (en) Encoding method, encoding device, electronic equipment and computer readable storage medium
CN107665128B (en) Image processing method, system, server and readable storage medium
CN111182303A (en) Encoding method and device for shared screen, computer readable medium and electronic equipment
CN101401434A (en) A method and device for adapting a temporal frequency of a sequence of video images
CN109831668A (en) Data compression method and apparatus, data encoding/decoding method and device
CN109151503B (en) Picture file processing method and equipment
CN112422963A (en) Cloud computer system, encoding method, device, equipment and storage medium
CN108924557B (en) Image processing method, image processing device, electronic equipment and storage medium
CN111200693A (en) Image data transmission method, device and system
CN111476866B (en) Video optimization and playing method, system, electronic equipment and storage medium
CN116567246A (en) AVC coding method and device
JP4411411B2 (en) Data compression apparatus and data compression method
CN108810537B (en) Picture transcoding method and device and image processing equipment
CN115225881A (en) Data transmission method, device, equipment and storage medium
CN110969672A (en) Image compression method and device
CN108370442B (en) Method and device for processing high dynamic range image and computer readable storage medium
CN114339226B (en) Method, device and medium for improving smoothness of picture
US11670008B2 (en) Processing display data for transmission
CN111385579B (en) Video compression method, device, equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20200911