CN108235119B - Video processing method and device, electronic equipment and computer readable medium

Video processing method and device, electronic equipment and computer readable medium

Info

Publication number
CN108235119B
CN108235119B
Authority
CN
China
Prior art keywords
groups
video frame
pixel
pixel groups
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810216232.9A
Other languages
Chinese (zh)
Other versions
CN108235119A (en)
Inventor
肖剑峰
史成耀
杨永恒
郑新宇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Lenovo Beijing Ltd
Original Assignee
Lenovo Beijing Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Lenovo Beijing Ltd filed Critical Lenovo Beijing Ltd
Priority to CN201810216232.9A
Publication of CN108235119A
Application granted
Publication of CN108235119B
Legal status: Active

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2343Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements
    • H04N21/234363Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving reformatting operations of video signals for distribution or compliance with end-user requests or end-user device requirements by altering the spatial resolution, e.g. for clients with a lower screen resolution
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4402Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display
    • H04N21/440263Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving reformatting operations of video signals for household redistribution, storage or real-time display by altering the spatial resolution, e.g. for displaying on a connected PDA

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)
  • Two-Way Televisions, Distribution Of Moving Picture Or The Like (AREA)

Abstract

The present disclosure provides a video processing method, which includes acquiring a first video frame; determining a preset line in a first video frame, wherein the preset line comprises a plurality of pixel points, and each pixel point has a respective first color value; adjusting respective first color values of a plurality of pixel points contained in a preset row to respective corresponding second color values to obtain a second video frame; and replacing the first video frame with a second video frame so that the receiving end can determine the second video frame as a target video frame according to the color information of the preset line in the second video frame. The present disclosure also provides a video processing apparatus, an electronic device, and a computer readable medium.

Description

Video processing method and device, electronic equipment and computer readable medium
Technical Field
The disclosure relates to a video processing method and apparatus, an electronic device, and a computer-readable medium.
Background
With the development of science and technology, video transmission technology has matured and its range of applications keeps widening. For example, video transmission allows real-time video interaction between a sending end and a receiving end, or the sending end may record a video and then play it online or offline for the receiving end to watch. While a video is being played, the sending end sometimes needs to perform operations on the video at certain key moments, for example applying a special effect to the video frame at a given key moment. However, because of time delays during playback, it is difficult for the receiving end to determine from the timestamp alone which specific frame the sending end operated on, which results in a poor user experience.
Disclosure of Invention
One aspect of the present disclosure provides a video processing method, including obtaining a first video frame; determining a preset line in the first video frame, wherein the preset line comprises a plurality of pixel points, and each pixel point has a respective first color value; adjusting respective first color values of the plurality of pixel points included in the preset line to respective corresponding second color values to obtain a second video frame; and replacing the first video frame with the second video frame so that a receiving end can determine the second video frame as a target video frame according to the color information of the preset line in the second video frame.
Optionally, adjusting the respective first color values of the plurality of pixel points included in the preset row to the respective corresponding second color values includes dividing the plurality of pixel points included in the preset row into N groups of pixel groups, where each group of pixel groups in the N groups of pixel groups includes one or more pixel points; respectively determining respective target color information for each group of pixel groups in the N groups of pixel groups; and adjusting the first color values of one or more pixel points in each group of pixel groups to the corresponding second color values according to the target color information corresponding to each group of pixel groups.
Optionally, adjusting the respective first color values of the plurality of pixel points included in the preset row to the respective corresponding second color values includes dividing the plurality of pixel points included in the preset row into N groups of pixel groups, where each group of pixel groups in the N groups of pixel groups includes one or more pixel points; determining M groups of pixel groups from the N groups of pixel groups, wherein M is an integer less than N; determining respective target color information for each of the N groups of pixel groups except the M groups of pixel groups; adjusting respective first color values of pixels in each group of pixel groups except the M groups of pixel groups in the N groups of pixel groups to corresponding second color values according to respective target color information of each group of pixel groups except the M groups of pixel groups in the N groups of pixel groups; acquiring an operation instruction, wherein the operation instruction is used for adjusting the display effect of the first video frame; and adjusting the first color value of the pixel point in each group of the M groups of pixel groups to a numerical value corresponding to the operation instruction.
Optionally, determining the preset line in the first video frame includes determining a scaling when the first video frame is transmitted; and determining the number of the preset lines according to the scaling of the first video frame.
Optionally, in a case that the number of the preset lines is determined to be multiple according to the scaling when the first video frame is transmitted, the video processing method further includes determining whether the first video frame needs to be compressed according to the scaling when the first video frame is transmitted; and under the condition that the first video frame needs to be compressed, adjusting the respective first color values of the pixel points of each line in the preset lines to be the same color values.
Another aspect of the disclosure provides a video processing apparatus including an obtaining module, a determining module, a first adjusting module, and a replacing module. The acquisition module is used for acquiring a first video frame; the determining module is used for determining a preset line in the first video frame, wherein the preset line comprises a plurality of pixel points, and each pixel point has a respective first color value; the first adjusting module is used for adjusting respective first color values of the plurality of pixel points contained in the preset line to respective corresponding second color values to obtain a second video frame; and the replacing module is used for replacing the first video frame with the second video frame so that a receiving end can determine the second video frame as a target video frame according to the color information of the preset line in the second video frame.
Optionally, the first adjusting module includes a first classifying unit, a first determining unit, and a first adjusting unit. The first classification unit is used for classifying the plurality of pixel points contained in the preset row into N groups of pixel groups, wherein each group of pixel groups in the N groups of pixel groups contains one or more pixel points; the first determining unit is used for respectively determining respective target color information for each group of the N groups of pixel groups; and the first adjusting unit is used for adjusting the first color value of one or more pixel points in each group of pixel groups into the corresponding second color value according to the target color information corresponding to each group of pixel groups.
Optionally, the first adjusting module includes a second classifying unit, a second determining unit, a third determining unit, a second adjusting unit, an obtaining unit, and a third adjusting unit. The second classification unit is used for classifying the plurality of pixel points contained in the preset row into N groups of pixel groups, wherein each group of pixel groups in the N groups of pixel groups contains one or more pixel points; a second determining unit configured to determine M groups of pixel groups from the N groups of pixel groups, where M is an integer smaller than N; a third determining unit, configured to determine respective target color information for each of the N groups of pixel groups except the M groups of pixel groups; the second adjusting unit is used for adjusting the respective first color values of the pixels in each group of pixel groups except the M groups of pixel groups in the N groups of pixel groups to corresponding second color values according to the respective target color information of each group of pixel groups except the M groups of pixel groups in the N groups of pixel groups; the acquisition unit is used for acquiring an operation instruction, wherein the operation instruction is used for adjusting the display effect of the first video frame; and the third adjusting unit is used for adjusting the first color value of the pixel point in each group of pixel groups in the M groups of pixel groups to a numerical value corresponding to the operation instruction.
Optionally, the determining module includes a fourth determining unit and a fifth determining unit. The fourth determining unit is used for determining the scaling when the first video frame is transmitted; and a fifth determining unit for determining the number of the preset lines according to the scaling of the first video frame.
Optionally, in a case that the number of the preset lines is determined to be multiple according to the scaling when the first video frame is transmitted, the video processing apparatus further includes a determining module and a second adjusting module. The judging module is used for judging whether the first video frame needs to be compressed according to the scaling when the first video frame is transmitted; and the second adjusting module is used for adjusting the respective first color values of the pixel points of each line in the preset lines to the same color value under the condition that the first video frame needs to be compressed.
Another aspect of the present disclosure provides an electronic device including: a memory having computer-executable instructions stored thereon; and a processor for executing the instructions to perform the video processing method as described above.
Yet another aspect of the present disclosure provides a computer-readable medium storing computer-executable instructions for implementing the video processing method as described above when executed.
Yet another aspect of the disclosure provides a computer program comprising computer executable instructions for implementing the video processing method as described above when executed.
Drawings
For a more complete understanding of the present disclosure and the advantages thereof, reference is now made to the following descriptions taken in conjunction with the accompanying drawings, in which:
fig. 1A schematically illustrates a schematic diagram of a first video frame in a video to be processed according to an embodiment of the present disclosure;
fig. 1B schematically illustrates a second video frame obtained after a first video frame is processed by applying a video processing method or an apparatus thereof according to an embodiment of the present disclosure;
fig. 2 schematically shows a flow chart of a video processing method according to an embodiment of the present disclosure;
fig. 3 schematically illustrates a flowchart of adjusting respective first color values of a plurality of pixel points included in a preset row to respective corresponding second color values according to an embodiment of the present disclosure;
fig. 4 schematically illustrates a flowchart of adjusting a first color value of each of a plurality of pixel points included in a preset row to a corresponding second color value according to another embodiment of the present disclosure;
FIG. 5 schematically illustrates a flow chart for determining a preset line in a first video frame according to an embodiment of the present disclosure;
FIG. 6 schematically shows a flow diagram of a video processing method according to another embodiment of the present disclosure;
fig. 7 schematically shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure;
FIG. 8 schematically illustrates a block diagram of a first adjustment module according to an embodiment of the disclosure;
FIG. 9 schematically illustrates a block diagram of a first adjustment module according to another embodiment of the present disclosure;
FIG. 10 schematically shows a block diagram of a determination module according to an embodiment of the disclosure;
fig. 11 schematically shows a block diagram of a video processing apparatus according to another embodiment of the present disclosure; and
fig. 12 schematically shows a block diagram of an electronic device suitable for implementing the methods of the present disclosure, in accordance with an embodiment of the present disclosure.
Detailed Description
Hereinafter, embodiments of the present disclosure will be described with reference to the accompanying drawings. It should be understood that the description is illustrative only and is not intended to limit the scope of the present disclosure. In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the embodiments of the disclosure. It may be evident, however, that one or more embodiments may be practiced without these specific details. Moreover, in the following description, descriptions of well-known structures and techniques are omitted so as to not unnecessarily obscure the concepts of the present disclosure.
The terminology used herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the disclosure. The terms "comprises," "comprising," and the like, as used herein, specify the presence of stated features, steps, operations, and/or components, but do not preclude the presence or addition of one or more other features, steps, operations, or components.
All terms (including technical and scientific terms) used herein have the same meaning as commonly understood by one of ordinary skill in the art unless otherwise defined. It is noted that the terms used herein should be interpreted as having a meaning that is consistent with the context of this specification and should not be interpreted in an idealized or overly formal sense.
Where a convention analogous to "at least one of A, B and C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B and C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). Where a convention analogous to "at least one of A, B or C, etc." is used, in general such a construction is intended in the sense one having skill in the art would understand the convention (e.g., "a system having at least one of A, B or C" would include but not be limited to systems that have A alone, B alone, C alone, A and B together, A and C together, B and C together, and/or A, B and C together, etc.). It will be further understood by those within the art that virtually any disjunctive word and/or phrase presenting two or more alternative terms, whether in the description, claims, or drawings, should be understood to contemplate the possibilities of including one of the terms, either of the terms, or both terms. For example, the phrase "A or B" should be understood to include the possibility of "A", of "B", or of "A and B".
Some block diagrams and/or flow diagrams are shown in the figures. It will be understood that some blocks of the block diagrams and/or flowchart illustrations, or combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the instructions, which execute via the processor, create means for implementing the functions/acts specified in the block diagrams and/or flowchart block or blocks.
Accordingly, the techniques of this disclosure may be implemented in hardware and/or software (including firmware, microcode, etc.). In addition, the techniques of this disclosure may take the form of a computer program product on a computer-readable medium having instructions stored thereon for use by or in connection with an instruction execution system. In the context of this disclosure, a computer-readable medium may be any medium that can contain, store, communicate, propagate, or transport the instructions. For example, the computer readable medium can include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the computer readable medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The embodiment of the disclosure provides a video processing method and a video processing device, wherein the video processing method comprises the steps of acquiring a first video frame; determining a preset line in a first video frame, wherein the preset line comprises a plurality of pixel points, and each pixel point has a respective first color value; adjusting respective first color values of a plurality of pixel points contained in a preset row to respective corresponding second color values to obtain a second video frame; and replacing the first video frame with a second video frame so that the receiving end can determine the second video frame as a target video frame according to the color information of the preset line in the second video frame.
Fig. 1A schematically shows a schematic diagram of a first video frame in a video to be processed according to an embodiment of the present disclosure.
As shown in fig. 1A, the first video frame 101 may be one frame of a video. In an example application scene, a mother celebrates her daughter's birthday at home while on a real-time video call with the daughter's father (not shown in the figure), and the first video frame 101 may be a picture of the mother and daughter facing the camera and greeting the father. Video transmission technology makes this kind of video interaction among the mother, the daughter and the father possible.
According to the embodiment of the disclosure, during the video call, some operations can be performed on the video at certain key moments to enhance the interaction. For example, when the mother and daughter face the camera to greet the father, a special effect such as the caption "It's a Party!" can be applied to the video frame.
Fig. 1B schematically illustrates a second video frame obtained after a first video frame is processed by applying a video processing method or an apparatus thereof according to an embodiment of the present disclosure.
As shown in FIG. 1B, the "It's a Party!" caption 104 in the second video frame 102 is the effect of the operation performed on the first video frame 101. According to the embodiment of the disclosure, in order to let the video receiving end determine which frame the sending end operated on, the color information of the pixel points in a preset line of the first video frame may be changed, so that the receiving end can identify the second video frame as the target video frame from the changed color values. As shown in fig. 1B, the black line 103 in the second video frame 102 is obtained by changing the color information of the pixel points in the preset line of the first video frame; the black line 103 indicates that the second video frame 102 is the frame operated on by the sending end.
With the embodiment of the disclosure, only the color information of the preset line in the first video frame is changed to prompt the receiver that the second video frame is the target frame; the video quality is essentially unaffected, and no image content is lost in the other video frames.
It should be noted that fig. 1A and 1B are only examples of scenarios in which the embodiments of the present disclosure may be applied to help those skilled in the art understand the technical content of the present disclosure, but do not mean that the embodiments of the present disclosure may not be used in other devices, systems, environments or scenarios.
Fig. 2 schematically shows a flow chart of a video processing method according to an embodiment of the present disclosure.
As shown in fig. 2, the video processing method includes operations S210 to S240.
In operation S210, a first video frame is acquired.
In operation S220, a preset line in the first video frame is determined, where the preset line includes a plurality of pixels, and each pixel has a respective first color value.
According to the embodiment of the present disclosure, a video is composed of many still pictures, and one still picture may be referred to as one frame of the video. Each video frame has a number of lines, and each line has a number of pixel points. Each pixel point has a respective first color value, and the first color values of different pixel points on a line may be the same or different. For example, if the first color values of the pixel points on a line are all the same, the color information of the line is a pure color; if they differ, the line is multi-colored.
In operation S230, the first color values of the multiple pixel points included in the preset line are adjusted to the corresponding second color values, so as to obtain a second video frame.
According to an embodiment of the present disclosure, the preset line may be the top-most or bottom-most line or lines of the video frame, and the number of preset lines may be kept to one line or a few lines in order to reduce the amount of calculation during video parsing. After the preset line is determined, the respective first color values of the plurality of pixel points included in the preset line may be adjusted to the respective corresponding second color values, where the second color values of the pixel points may be the same as or different from one another. It should be noted that after the first color values of the pixel points in the preset line are changed, the video receiver needs to monitor the second video frame for the changed color values. For example, if the first color values of the pixel points are all the same, the color information of the preset line is originally a pure color; after the first color values are adjusted to second color values that differ from one another, the modified preset line becomes multi-colored.
In operation S240, the first video frame is replaced with a second video frame so that the receiving end can determine the second video frame as a target video frame according to color information of a preset line in the second video frame.
According to the embodiment of the disclosure, since the color information of the preset line in the second video frame is changed, the receiving end can determine the second video frame as the target video frame according to the color information of the preset line in the second video frame.
With the embodiment of the disclosure, only the color information of the preset line in the first video frame is changed to prompt the receiver that the second video frame is the target frame; the video quality is essentially unaffected, and no image content is lost in the other video frames. Moreover, because only the color information of the preset line is changed, the processor needs very little computation to parse the video frame, the fluency of the video is not affected, and the user experience is improved.
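To make operations S210 to S240 concrete, the following is a minimal sketch of the sender-side marking and the receiver-side check. It is an illustrative assumption only: video frames are represented as NumPy arrays (height x width x 3, RGB), the bottom-most row stands in for the preset line, and a fixed magenta value stands in for the second color values; none of the concrete names or numbers below come from the embodiment itself.

```python
import numpy as np

MARKER_ROW = -1                 # preset line: here, assumed to be the bottom-most row
MARKER_COLOR = (255, 0, 255)    # assumed second color value used to tag the target frame

def mark_target_frame(frame: np.ndarray) -> np.ndarray:
    """Sender side: return the second video frame, a copy of the first video
    frame whose preset line is overwritten with the marker color."""
    marked = frame.copy()
    marked[MARKER_ROW, :, :] = MARKER_COLOR
    return marked

def is_target_frame(frame: np.ndarray, tolerance: int = 8) -> bool:
    """Receiver side: decide whether a frame is the target frame by checking
    the color information of the preset line only."""
    row = frame[MARKER_ROW].astype(np.int32)
    diff = np.abs(row - np.array(MARKER_COLOR))
    return bool((diff.max(axis=1) <= tolerance).all())

# Usage: the sender replaces the first video frame with the marked copy before sending.
first_frame = np.zeros((720, 1280, 3), dtype=np.uint8)
second_frame = mark_target_frame(first_frame)
assert is_target_frame(second_frame) and not is_target_frame(first_frame)
```

Because only a single row of the frame is touched, the per-frame cost of both marking and detection stays negligible, which matches the low-computation point made above.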
The method shown in fig. 2 is further described with reference to fig. 3-6 in conjunction with specific embodiments.
Fig. 3 schematically illustrates a flowchart of adjusting respective first color values of a plurality of pixel points included in a preset row to respective corresponding second color values according to an embodiment of the present disclosure.
As shown in fig. 3, adjusting the first color value of each of the plurality of pixel points included in the preset row to the corresponding second color value includes operations S231 to S233.
In operation S231, a plurality of pixel points included in a preset row are divided into N groups of pixel groups, where each group of pixel groups in the N groups of pixel groups includes one or more pixel points.
In operation S232, respective target color information is determined for each of the N sets of pixel sets, respectively.
According to the embodiment of the present disclosure, for example, the plurality of pixel points included in the preset line may be divided into 3 groups of pixel groups, and respective target color information may be determined for each group: red for pixel group 1, yellow for pixel group 2, and blue for pixel group 3.
According to the embodiment of the present disclosure, the value of N may be determined according to actual conditions, for example chosen with reference to the error tolerance of the video transmission so that the grouping remains robust; a person skilled in the art may further determine it according to the related art.
In operation S233, the first color values of one or more pixels in each group of pixel groups are adjusted to the corresponding second color values according to the target color information corresponding to each group of pixel groups.
According to an embodiment of the present disclosure, for example, according to the target color information (red) of pixel group 1, the first color values of the one or more pixel points in pixel group 1 are adjusted to a second color value representing red. Likewise, the first color values of the pixel points in pixel group 2 are adjusted to a second color value representing yellow according to its target color information, and the first color values of the pixel points in pixel group 3 are adjusted to a second color value representing blue.
With this embodiment of the disclosure, the pixel points can be grouped, the color values of the pixel points in each pixel group can be adjusted to one color value, and different pixel groups can be adjusted to different color values. The preset line in the first video frame thus becomes multi-colored, which prompts the receiver even more distinctly that the second video frame is the target video frame.
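As a sketch of operations S231 to S233 under the same assumed NumPy-frame representation, the snippet below splits the preset line into N contiguous pixel groups and paints each group with its own target color; the three red/yellow/blue groups simply mirror the example above and are not prescribed by the embodiment.

```python
import numpy as np

# Hypothetical per-group target colors; N and the colors are left open by the embodiment.
GROUP_COLORS = [(255, 0, 0), (255, 255, 0), (0, 0, 255)]   # red, yellow, blue

def color_preset_line_in_groups(frame: np.ndarray, row: int = -1) -> np.ndarray:
    """Divide the preset line into N contiguous pixel groups and adjust the
    pixel points of each group to that group's target color, so the line
    becomes a multi-colored marker."""
    marked = frame.copy()
    width = marked.shape[1]
    bounds = np.linspace(0, width, len(GROUP_COLORS) + 1, dtype=int)
    for i, color in enumerate(GROUP_COLORS):
        marked[row, bounds[i]:bounds[i + 1], :] = color
    return marked
```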
Fig. 4 schematically illustrates a flowchart of adjusting a first color value of each of a plurality of pixel points included in a preset row to a corresponding second color value according to another embodiment of the present disclosure.
As shown in fig. 4, adjusting the first color value of each of the plurality of pixel points included in the preset row to the corresponding second color value includes operations S234 to S239.
In operation S234, a plurality of pixel points included in a preset row are divided into N groups of pixel groups, where each group of pixel groups in the N groups of pixel groups includes one or more pixel points.
In operation S235, M groups of pixel groups are determined from the N groups of pixel groups, where M is an integer less than N.
In operation S236, respective target color information is determined for each of the N groups of pixel groups except for the M groups of pixel groups, respectively.
According to the embodiment of the present disclosure, for example, a plurality of pixel points included in a preset line are divided into 10 groups of pixel groups, and 3 groups of pixel groups are determined from the 10 groups of pixel groups. Respective target color information may be determined for each of the 10 sets of pixels other than the determined 3 sets of pixels, respectively. For example, 10 pixel groups are numbered as pixel group 1, pixel group 2 … pixel group 10, and the identified M pixel groups can be pixel group 5, pixel group 6, and pixel group 7.
Therefore, it is necessary to determine respective target color information for the pixel group 1, the pixel group 2, the pixel group 3, the pixel group 4, the pixel group 8, the pixel group 9, and the pixel group 10.
In operation S237, the respective first color values of the pixels in each group of pixel groups except the M groups of pixel groups in the N groups of pixel groups are adjusted to the corresponding second color values according to the respective target color information of each group of pixel groups except the M groups of pixel groups in the N groups of pixel groups.
In operation S238, an operation instruction is obtained, where the operation instruction is used to adjust a presentation effect of the first video frame.
According to the embodiment of the present disclosure, there are various types of operation instructions: for example, an operation instruction may add a special effect, or it may pause the video. Taking an instruction that adds a special effect as an example, adjusting the display effect of the first video frame according to the operation instruction may give the video that special display effect.
In operation S239, the first color values of the pixels in each of the M groups of pixel groups are adjusted to the values corresponding to the operation instructions.
According to the embodiment of the disclosure, taking the above-determined M groups of pixel groups (pixel group 5, pixel group 6 and pixel group 7) as an example, each of these groups includes one or more pixel points, and each pixel point has a corresponding first color value. In order to convey the information of the operation instruction through the preset line, the first color values of the pixel points in pixel groups 5, 6 and 7 may be adjusted to the values corresponding to the operation instruction.
According to an embodiment of the present disclosure, each pixel group may be treated as a whole and filled with the same data, so the data laid out along the preset line is: pixel group 1, pixel group 2, …, pixel group 10. The groups other than the M groups (here pixel groups 1, 2, 3, 4, 8, 9 and 10) may be filled with a check code; during video transmission these groups serve as leading and trailing check codes, which allow the receiver to determine whether a parsed video frame is the target video frame and, if so, to quickly extract the related information from the preset line. The data in pixel groups 5, 6 and 7 then carries the intermediate information, i.e. the operation instruction.
With this embodiment of the disclosure, the user's operation instruction can be written into the pixel points of the determined M groups of pixel groups, so that the video receiving end can parse the data of this line directly, quickly extract the information about the sender's operation, and thereby improve the user experience.
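The following sketch illustrates operations S234 to S239 with the ten-group example above: groups 5, 6 and 7 (M = 3) carry values derived from the operation instruction, while the remaining groups are filled with a fixed color standing in for the check code. The one-byte-per-group payload encoding and the green check-code color are assumptions made for illustration, not details of the embodiment.

```python
import numpy as np

N_GROUPS = 10
PAYLOAD_GROUPS = (4, 5, 6)        # zero-based indices of "pixel group 5, 6 and 7" above
CHECK_COLOR = (0, 255, 0)         # assumed check-code color for the other N - M groups

def encode_instruction(frame: np.ndarray, instruction: bytes, row: int = -1) -> np.ndarray:
    """Fill the M payload groups of the preset line with values corresponding to
    the operation instruction and the remaining groups with the check code."""
    marked = frame.copy()
    width = marked.shape[1]
    bounds = np.linspace(0, width, N_GROUPS + 1, dtype=int)
    # One byte of the instruction per payload group, zero-padded if it is shorter.
    payload = list(instruction[:len(PAYLOAD_GROUPS)].ljust(len(PAYLOAD_GROUPS), b"\x00"))
    for g in range(N_GROUPS):
        if g in PAYLOAD_GROUPS:
            value = payload[PAYLOAD_GROUPS.index(g)]
            marked[row, bounds[g]:bounds[g + 1], :] = (value, value, value)
        else:
            marked[row, bounds[g]:bounds[g + 1], :] = CHECK_COLOR
    return marked
```

On the receiving side, the presence of the check-code color in the outer groups identifies the target frame, after which the middle groups can be read back as the instruction payload.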
Fig. 5 schematically shows a flow chart for determining a preset line in a first video frame according to an embodiment of the present disclosure.
As shown in fig. 5, determining the preset line in the first video frame includes operations S221 to S222.
In operation S221, a scaling at which the first video frame is transmitted is determined.
In operation S222, the number of preset lines is determined according to the scaling of the first video frame.
According to an embodiment of the present disclosure, for example, if the scaling when the first video frame is transmitted is 80%, the number of preset lines may be determined to be 3 lines; if the scaling is 100%, the number of preset lines may be determined to be 1 line; and if the scaling is 120%, the number of preset lines may also be determined to be 1 line.
According to the embodiment of the disclosure, the first video frame may be scaled to some extent during transmission, either to adapt to display screens with different resolutions or to meet transmission requirements. When the first video frame is compressed, data in the preset lines may be lost; the number of preset lines is therefore determined according to the scaling of the first video frame so that the display effect of the preset lines is not affected.
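A small sketch of operations S221 and S222: the function below maps the transmission scaling to a preset line count. The 80% and 100%/120% cases follow the example figures given above; the extra branch for heavier compression is purely an assumption.

```python
def preset_line_count(scale: float) -> int:
    """Choose how many preset lines to mark from the transmission scaling factor."""
    if scale >= 1.0:      # no compression (or upscaling): a single line survives intact
        return 1
    if scale >= 0.8:      # mild compression: add some redundancy
        return 3
    return 5              # heavier compression: assumed to need even more redundant lines
```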
Fig. 6 schematically shows a flow chart of a video processing method according to another embodiment of the present disclosure.
As shown in fig. 6, in the case where the number of the preset lines is determined to be plural according to the scaling when the first video frame is transmitted, the video processing method further includes operations S250 to S260.
In operation S250, it is determined whether the first video frame needs to be compressed according to a scaling at the time when the first video frame is transmitted.
In operation S260, under the condition that the first video frame needs to be compressed, the respective first color values of the pixels in each of the plurality of preset lines are all adjusted to the same color value.
According to the embodiment of the present disclosure, for example, if the scaling when the first video frame is transmitted is 80% and the number of preset lines is determined to be plural, it is determined that the first video frame needs to be compressed, and the first color values of the pixel points in every one of the plural preset lines can all be adjusted to one and the same color value.
According to the embodiment of the disclosure, since data may be lost during video compression, increasing the number of preset lines prevents the color information of the modified lines from being lost too severely. In addition, when the first video frame needs to be compressed, some of the modified lines may be blended away or disappear; the color information of the preset lines is therefore adjusted to one and the same color, that is, the first color values of the pixel points in every preset line are all set to the same color value, so that the receiving end only needs to look for that color to determine whether the frame is the target video frame.
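Finally, a sketch of operations S250 and S260 under the same assumed NumPy-frame representation: when the frame is going to be compressed, every pixel point in all of the preset lines is set to one and the same color, so the marker can still be recognized after down-scaling.

```python
import numpy as np

def mark_for_compression(frame: np.ndarray, line_count: int,
                         color=(255, 0, 255)) -> np.ndarray:
    """Adjust the first color values of the pixel points in every preset line to
    the same color value; the receiver then only needs to look for this color."""
    marked = frame.copy()
    marked[-line_count:, :, :] = color    # the preset lines are assumed to be the bottom rows
    return marked
```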
Fig. 7 schematically shows a block diagram of a video processing apparatus according to an embodiment of the present disclosure.
As shown in fig. 7, the video processing apparatus 300 includes an obtaining module 310, a determining module 320, a first adjusting module 330, and a replacing module 340.
The obtaining module 310 is configured to obtain a first video frame.
The determining module 320 is configured to determine a preset line in the first video frame, where the preset line includes a plurality of pixel points, and each pixel point has a respective first color value.
The first adjusting module 330 is configured to adjust a first color value of each of a plurality of pixel points included in a preset line to a corresponding second color value, so as to obtain a second video frame.
The replacing module 340 is configured to replace the first video frame with a second video frame, so that the receiving end can determine the second video frame as the target video frame according to the color information of the preset line in the second video frame.
With the embodiment of the disclosure, only the color information of the preset line in the first video frame is changed to prompt the receiver that the second video frame is the target frame; the video quality is essentially unaffected, and no image content is lost in the other video frames. Moreover, because only the color information of the preset line is changed, the processor needs very little computation to parse the video frame, the fluency of the video is not affected, and the user experience is improved.
Fig. 8 schematically illustrates a block diagram of a first adjustment module according to an embodiment of the disclosure.
As shown in fig. 8, according to an embodiment of the present disclosure, the first adjustment module 330 includes a first classification unit 331, a first determination unit 332, and a first adjustment unit 333.
The first classification unit 331 is configured to classify a plurality of pixel points included in a preset row into N groups of pixel groups, where each of the N groups of pixel groups includes one or more pixel points.
The first determining unit 332 is configured to determine respective target color information for each of the N groups of pixel groups.
The first adjusting unit 333 is configured to adjust the first color values of one or more pixels in each group of pixel groups to the respective corresponding second color values according to the target color information corresponding to each group of pixel groups.
With this embodiment of the disclosure, the pixel points can be grouped, the color values of the pixel points in each pixel group can be adjusted to one color value, and different pixel groups can be adjusted to different color values. The preset line in the first video frame thus becomes multi-colored, which prompts the receiver even more distinctly that the second video frame is the target video frame.
Fig. 9 schematically illustrates a block diagram of a first adjustment module according to another embodiment of the present disclosure.
As shown in fig. 9, according to an embodiment of the present disclosure, the first adjusting module 330 includes a second classifying unit 334, a second determining unit 335, a third determining unit 336, a second adjusting unit 337, an obtaining unit 338, and a third adjusting unit 339.
The second classification unit 334 is configured to classify a plurality of pixel points included in a preset row into N groups of pixel groups, where each group of pixel groups in the N groups of pixel groups includes one or more pixel points.
The second determining unit 335 is configured to determine M groups of pixel groups from the N groups of pixel groups, where M is an integer smaller than N.
The third determining unit 336 is configured to determine respective target color information for each of the N groups of pixel groups except for the M groups of pixel groups.
The second adjusting unit 337 is configured to adjust the respective first color values of the pixels in each of the groups of pixels except the M groups of pixels in the N groups of pixel groups to corresponding second color values according to the respective target color information of each of the groups of pixels except the M groups of pixel groups in the N groups of pixel groups.
The obtaining unit 338 is configured to obtain an operation instruction, where the operation instruction is used to adjust a presentation effect of the first video frame.
The third adjusting unit 339 is configured to adjust the first color value of the pixel point in each of the M groups of pixel groups to a value corresponding to the operation instruction.
With this embodiment of the disclosure, the user's operation instruction can be written into the pixel points of the determined M groups of pixel groups, so that the video receiving end can parse the data of this line directly, quickly extract the information about the sender's operation, and thereby improve the user experience.
Fig. 10 schematically illustrates a block diagram of a determination module according to an embodiment of the present disclosure.
As shown in fig. 10, the determining module 320 includes a fourth determining unit 321 and a fifth determining unit 322 according to an embodiment of the present disclosure.
The fourth determination unit 321 is configured to determine a scaling when the first video frame is transmitted.
The fifth determining unit 322 is configured to determine the number of the preset lines according to the scaling of the first video frame.
According to the embodiment of the disclosure, the first video frame may be scaled to some extent during transmission, either to adapt to display screens with different resolutions or to meet transmission requirements. When the first video frame is compressed, data in the preset lines may be lost; the number of preset lines is therefore determined according to the scaling of the first video frame so that the display effect of the preset lines is not affected.
Fig. 11 schematically shows a block diagram of a video processing apparatus according to another embodiment of the present disclosure.
As shown in fig. 11, according to an embodiment of the present disclosure, in the case where the number of preset lines is determined to be plural according to the scaling when the first video frame is transmitted, the video processing apparatus 300 includes a judgment module 350 and a second adjustment module 360 in addition to the acquisition module 310, the determination module 320, the first adjustment module 330, and the replacement module 340.
The determining module 350 is configured to determine whether the first video frame needs to be compressed according to a scaling of the first video frame when the first video frame is transmitted.
The second adjusting module 360 is configured to adjust the respective first color values of the pixels in each of the plurality of preset lines to the same color value when the first video frame needs to be compressed.
According to the embodiment of the disclosure, since data may be lost during video compression, increasing the number of preset lines prevents the color information of the modified lines from being lost too severely. In addition, when the first video frame needs to be compressed, some of the modified lines may be blended away or disappear; the color information of the preset lines is therefore adjusted to one and the same color, that is, the first color values of the pixel points in every preset line are all set to the same color value, so that the receiving end only needs to look for that color to determine whether the frame is the target video frame.
Any number of modules, sub-modules, units, sub-units, or at least part of the functionality of any number thereof according to embodiments of the present disclosure may be implemented in one module. Any one or more of the modules, sub-modules, units, and sub-units according to the embodiments of the present disclosure may be implemented by being split into a plurality of modules. Any one or more of the modules, sub-modules, units, sub-units according to embodiments of the present disclosure may be implemented at least in part as a hardware circuit, such as a Field Programmable Gate Array (FPGA), a Programmable Logic Array (PLA), a system on a chip, a system on a substrate, a system on a package, an Application Specific Integrated Circuit (ASIC), or may be implemented in any other reasonable manner of hardware or firmware by integrating or packaging a circuit, or in any one of or a suitable combination of software, hardware, and firmware implementations. Alternatively, one or more of the modules, sub-modules, units, sub-units according to embodiments of the disclosure may be at least partially implemented as a computer program module, which when executed may perform the corresponding functions.
For example, any plurality of the acquisition module 310, the determination module 320, the first adjustment module 330, the replacement module 340, the judgment module 350, the second adjustment module 360, the fourth determination unit 321, the fifth determination unit 322, the first classification unit 331, the first determination unit 332, the first adjustment unit 333, the second classification unit 334, the second determination unit 335, the third determination unit 336, the second adjustment unit 337, the acquisition unit 338, and the third adjustment unit 339 may be combined in one module or unit to be implemented, or any one of them may be split into a plurality of modules or units. Alternatively, at least part of the functionality of one or more of these modules may be combined with at least part of the functionality of the other modules and implemented in one module. According to an embodiment of the present disclosure, at least one of the obtaining module 310, the determining module 320, the first adjusting module 330, the replacing module 340, the judging module 350, the second adjusting module 360, the fourth determining unit 321, the fifth determining unit 322, the first classifying unit 331, the first determining unit 332 and the first adjusting unit 333, the second classifying unit 334, the second determining unit 335, the third determining unit 336, the second adjusting unit 337, the obtaining unit 338 and the third adjusting unit 339 may be at least partially implemented as a hardware circuit, such as Field Programmable Gate Arrays (FPGAs), Programmable Logic Arrays (PLAs), systems on a chip, systems on a substrate, systems on a package, Application Specific Integrated Circuits (ASICs), or may be implemented in hardware or firmware in any other reasonable way of integrating or packaging circuits, or in any one of three implementations, software, hardware and firmware, or in any suitable combination of any of them. Alternatively, at least one of the obtaining module 310, the determining module 320, the first adjusting module 330, the replacing module 340, the judging module 350, the second adjusting module 360, the fourth determining unit 321, the fifth determining unit 322, the first classifying unit 331, the first determining unit 332 and the first adjusting unit 333, the second classifying unit 334, the second determining unit 335, the third determining unit 336, the second adjusting unit 337, the obtaining unit 338 and the third adjusting unit 339 may be at least partially implemented as a computer program module, and when the computer program module is executed, the corresponding function may be performed.
Another aspect of the present disclosure provides an electronic device including: a memory having computer-executable instructions stored thereon; and a processor for executing the instructions to perform the video processing method as described above.
Fig. 12 schematically shows a block diagram of an electronic device suitable for implementing the methods of the present disclosure, in accordance with an embodiment of the present disclosure. The electronic device shown in fig. 12 is only an example, and should not bring any limitation to the functions and the scope of use of the embodiments of the present disclosure.
As shown in fig. 12, the electronic device 400 includes a processor 410 and a computer-readable storage medium 420. The electronic device 400 may perform a method according to an embodiment of the present disclosure.
In particular, processor 410 may include, for example, a general purpose microprocessor, an instruction set processor and/or related chip set and/or a special purpose microprocessor (e.g., an Application Specific Integrated Circuit (ASIC)), and/or the like. The processor 410 may also include onboard memory for caching purposes. Processor 410 may be a single processing unit or a plurality of processing units for performing different actions of a method flow according to embodiments of the disclosure.
Computer-readable storage medium 420 may be, for example, any medium that can contain, store, communicate, propagate, or transport the instructions. For example, a readable storage medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, device, or propagation medium. Specific examples of the readable storage medium include: magnetic storage devices, such as magnetic tape or Hard Disk Drives (HDDs); optical storage devices, such as compact disks (CD-ROMs); a memory, such as a Random Access Memory (RAM) or a flash memory; and/or wired/wireless communication links.
The computer-readable storage medium 420 may comprise a computer program 421, which computer program 421 may comprise code/computer-executable instructions that, when executed by the processor 410, cause the processor 410 to perform a method according to an embodiment of the disclosure, or any variant thereof.
The computer program 421 may be configured with, for example, computer program code comprising computer program modules. For example, in an example embodiment, code in the computer program 421 may include one or more program modules, including, for example, module 421A, module 421B, and so on. It should be noted that the division and number of the modules are not fixed, and those skilled in the art may use suitable program modules or program module combinations according to the actual situation, so that when these program modules are executed by the processor 410, the processor 410 can carry out the method according to the embodiment of the present disclosure or any variation thereof.
According to an embodiment of the present disclosure, at least one of the obtaining module 310, the determining module 320, the first adjusting module 330, the replacing module 340, the judging module 350, the second adjusting module 360, the fourth determining unit 321, the fifth determining unit 322, the first classifying unit 331, the first determining unit 332 and the first adjusting unit 333, the second classifying unit 334, the second determining unit 335, the third determining unit 336, the second adjusting unit 337, the obtaining unit 338 and the third adjusting unit 339 may be implemented as a computer program module described with reference to fig. 12, which, when executed by the processor 410, may implement the corresponding operations described above.
Yet another aspect of the present disclosure provides a computer-readable medium storing computer-executable instructions for implementing the video processing method as described above when executed.
The computer readable medium may be embodied in the apparatus/device/system described in the above embodiments, or it may exist separately without being assembled into that apparatus/device/system.
According to embodiments of the present disclosure, a computer readable medium may be a computer readable signal medium or a computer readable storage medium, or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In the present disclosure, a computer readable storage medium may be any tangible medium that can contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. In the present disclosure, by contrast, a computer readable signal medium may include a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electromagnetic signals, optical signals, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to wireless, wired, optical fiber cable, radio frequency signals, etc., or any suitable combination of the foregoing.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams or flowchart illustration, and combinations of blocks in the block diagrams or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
Those skilled in the art will appreciate that various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure can be made, even if such combinations or sub-combinations are not expressly recited in the present disclosure. In particular, various combinations and/or sub-combinations of the features recited in the various embodiments and/or claims of the present disclosure may be made without departing from the spirit and teachings of the present disclosure. All such combinations and/or sub-combinations fall within the scope of the present disclosure.
While the disclosure has been shown and described with reference to certain exemplary embodiments thereof, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the disclosure as defined by the appended claims and their equivalents. Accordingly, the scope of the present disclosure should not be limited to the above-described embodiments, but should be defined not only by the appended claims, but also by equivalents thereof.

Claims (9)

1. A video processing method, comprising:
acquiring a first video frame;
determining a preset line in the first video frame, wherein the preset line comprises a plurality of pixel points, and each pixel point has a respective first color value;
adjusting the respective first color values of the plurality of pixel points included in the preset line to respective corresponding second color values to obtain a second video frame; and
replacing the first video frame with the second video frame so that a receiving end can determine the second video frame as a target video frame according to the color information of the preset line in the second video frame;
wherein determining a preset line in the first video frame comprises:
determining a scaling at which the first video frame is transmitted; and
determining the number of the preset lines according to the scaling of the first video frame;
wherein determining the number of the preset lines according to the scaling of the first video frame comprises: if the size of the first video frame is compressed, increasing the number of preset lines according to the compression ratio of the first video frame; and if the size of the first video frame is unchanged or enlarged, keeping the number of preset lines unchanged.
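By way of a hedged illustration of the rule stated above (the helper name and the convention that a scaling factor below 1 means compression are assumptions of this sketch, not part of the claim), the number of preset lines could be derived as follows:

    # Illustrative only: number of preset lines as a function of the transmission scaling.
    def preset_line_count(base_count, scale_ratio):
        """scale_ratio < 1.0 is taken to mean the frame is compressed in size."""
        if scale_ratio < 1.0:
            # compressed: increase the line count in proportion to the compression ratio
            return int(round(base_count / scale_ratio))
        # size unchanged or enlarged: keep the original number of preset lines
        return base_count

    assert preset_line_count(2, 0.5) == 4   # frame halved -> twice as many lines
    assert preset_line_count(2, 1.0) == 2   # unchanged -> unchanged
    assert preset_line_count(2, 2.0) == 2   # enlarged -> unchanged

With this convention, a frame scaled to half its size before transmission carries twice as many copies of the preset line, so at least one marked line survives the downscaling at the receiving end.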
2. The method of claim 1, wherein adjusting the respective first color values of the plurality of pixel points included in the preset line to the respective corresponding second color values comprises:
dividing the plurality of pixel points included in the preset line into N pixel groups, wherein each of the N pixel groups includes one or more pixel points;
determining respective target color information for each of the N pixel groups; and
adjusting the first color values of the one or more pixel points in each pixel group to the corresponding second color values according to the target color information corresponding to that pixel group.
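A minimal sketch of the grouping in claim 2, assuming equal-width pixel groups and an arbitrary example color pattern (the function name, group boundaries and colors are illustrative, not the claimed encoding):

    import numpy as np

    def write_groups(frame, row, target_colors):
        """Fill row `row` of `frame` with one target color per pixel group."""
        marked = frame.copy()
        width = marked.shape[1]
        n = len(target_colors)                               # N pixel groups
        bounds = np.linspace(0, width, n + 1).astype(int)    # group boundaries
        for i, color in enumerate(target_colors):
            # adjust the first color values of this group to its second color values
            marked[row, bounds[i]:bounds[i + 1], :] = color
        return marked

    # Example: 4 pixel groups carrying a simple pattern on row 0 of a 64x64 frame
    frame = np.zeros((64, 64, 3), dtype=np.uint8)
    pattern = [(255, 0, 0), (0, 255, 0), (0, 0, 255), (255, 255, 255)]
    marked = write_groups(frame, row=0, target_colors=pattern)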
3. The method of claim 1, wherein adjusting the respective first color values of the plurality of pixel points included in the preset line to the respective corresponding second color values comprises:
dividing the plurality of pixel points included in the preset line into N pixel groups, wherein each of the N pixel groups includes one or more pixel points;
determining M pixel groups from the N pixel groups, wherein M is an integer less than N;
determining respective target color information for each of the N pixel groups other than the M pixel groups;
adjusting, according to the respective target color information, the first color values of the pixel points in each of the N pixel groups other than the M pixel groups to the corresponding second color values;
acquiring an operation instruction, wherein the operation instruction is used for adjusting a display effect of the first video frame; and
adjusting the first color values of the pixel points in each of the M pixel groups to a value corresponding to the operation instruction.
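A hedged sketch of claim 3, assuming (purely for illustration) that the M reserved groups are the last M of the N groups and that the operation instruction is reduced to a single 0-255 value written as a gray level:

    import numpy as np

    def write_groups_with_instruction(frame, row, target_colors, m, instruction_value):
        """target_colors holds N - M entries; instruction_value is an int in 0..255."""
        marked = frame.copy()
        width = marked.shape[1]
        n = len(target_colors) + m                            # N pixel groups in total
        bounds = np.linspace(0, width, n + 1).astype(int)
        for i in range(n):
            if i < n - m:
                color = target_colors[i]                      # ordinary marker groups
            else:
                # the M groups carry a value corresponding to the operation instruction
                color = (instruction_value, instruction_value, instruction_value)
            marked[row, bounds[i]:bounds[i + 1], :] = color
        return marked

    frame = np.zeros((64, 64, 3), dtype=np.uint8)
    marked = write_groups_with_instruction(
        frame, row=0,
        target_colors=[(255, 0, 0), (0, 255, 0), (0, 0, 255)],   # N - M = 3 groups
        m=1, instruction_value=128)                               # M = 1 group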
4. The method of claim 1, wherein, in a case where the number of the preset lines determined according to the scaling at which the first video frame is transmitted is plural, the method further comprises:
judging, according to the scaling at which the first video frame is transmitted, whether the first video frame needs to be compressed; and
in a case where the first video frame needs to be compressed, adjusting the respective first color values of the pixel points of each of the preset lines to the same color value.
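As an illustration of claim 4 only (the criterion that a scaling factor below 1 requires compression and the magenta fill color are assumptions of this sketch):

    import numpy as np

    def mark_for_compression(frame, preset_rows, scale_ratio, color=(255, 0, 255)):
        """If the frame will be compressed, fill every preset line with one color."""
        marked = frame.copy()
        needs_compression = scale_ratio < 1.0          # assumed judgment criterion
        if needs_compression:
            for row in preset_rows:
                marked[row, :, :] = color              # same color value on each line
        return marked

    frame = np.random.randint(0, 256, (1080, 1920, 3), dtype=np.uint8)
    marked = mark_for_compression(frame, preset_rows=[0, 1, 2, 3], scale_ratio=0.5)

Using one identical color across all preset lines means that, after the rows are averaged together by downscaling, the surviving row still shows the expected color information.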
5. A video processing apparatus comprising:
an acquisition module configured to acquire a first video frame;
a determining module configured to determine a preset line in the first video frame, wherein the preset line comprises a plurality of pixel points, and each pixel point has a respective first color value;
a first adjusting module configured to adjust the respective first color values of the plurality of pixel points contained in the preset line to respective corresponding second color values to obtain a second video frame; and
a replacing module configured to replace the first video frame with the second video frame, so that a receiving end can determine the second video frame as a target video frame according to the color information of the preset line in the second video frame;
wherein the determining module comprises:
a fourth determining unit configured to determine a scaling at which the first video frame is transmitted; and
a fifth determining unit configured to determine the number of the preset lines according to the scaling of the first video frame,
wherein the fifth determining unit is specifically configured to: increase the number of preset lines according to the compression ratio of the first video frame if the size of the first video frame is compressed; and keep the number of preset lines unchanged if the size of the first video frame is unchanged or enlarged.
6. The apparatus of claim 5, wherein the first adjustment module comprises:
a first classification unit configured to divide the plurality of pixel points contained in the preset line into N pixel groups, wherein each of the N pixel groups contains one or more pixel points;
a first determining unit configured to determine respective target color information for each of the N pixel groups; and
a first adjusting unit configured to adjust the first color values of the one or more pixel points in each pixel group to the corresponding second color values according to the target color information corresponding to that pixel group.
7. The apparatus of claim 5, wherein the first adjustment module comprises:
a second classification unit configured to divide the plurality of pixel points contained in the preset line into N pixel groups, wherein each of the N pixel groups contains one or more pixel points;
a second determining unit configured to determine M pixel groups from the N pixel groups, wherein M is an integer less than N;
a third determining unit configured to determine respective target color information for each of the N pixel groups other than the M pixel groups;
a second adjusting unit configured to adjust, according to the respective target color information, the first color values of the pixel points in each of the N pixel groups other than the M pixel groups to the corresponding second color values;
an acquisition unit configured to acquire an operation instruction, wherein the operation instruction is used for adjusting a display effect of the first video frame; and
a third adjusting unit configured to adjust the first color values of the pixel points in each of the M pixel groups to a value corresponding to the operation instruction.
8. An electronic device, comprising:
a memory having computer-executable instructions stored thereon; and
a processor for executing the instructions to perform the video processing method according to any one of claims 1 to 4.
9. A computer-readable medium storing computer-executable instructions for implementing the video processing method of any of claims 1-4 when executed.
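Claims 1 and 5 leave the receiving end to recognize the target video frame from the color information of the preset line. The following sketch shows one possible (non-claimed) way such a check could look, assuming the receiver knows the expected per-group colors; the tolerance value and function name are illustrative assumptions:

    import numpy as np

    def is_target_frame(frame, row, expected_colors, tol=30):
        """True if the preset line's pixel groups roughly match the expected colors."""
        width = frame.shape[1]
        n = len(expected_colors)
        bounds = np.linspace(0, width, n + 1).astype(int)
        for i, color in enumerate(expected_colors):
            group = frame[row, bounds[i]:bounds[i + 1], :].astype(int)
            mean_color = group.mean(axis=0)                  # average color of the group
            if np.abs(mean_color - np.array(color)).max() > tol:
                return False
        return True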
CN201810216232.9A 2018-03-15 2018-03-15 Video processing method and device, electronic equipment and computer readable medium Active CN108235119B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810216232.9A CN108235119B (en) 2018-03-15 2018-03-15 Video processing method and device, electronic equipment and computer readable medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810216232.9A CN108235119B (en) 2018-03-15 2018-03-15 Video processing method and device, electronic equipment and computer readable medium

Publications (2)

Publication Number Publication Date
CN108235119A CN108235119A (en) 2018-06-29
CN108235119B true CN108235119B (en) 2021-02-19

Family

ID=62658558

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810216232.9A Active CN108235119B (en) 2018-03-15 2018-03-15 Video processing method and device, electronic equipment and computer readable medium

Country Status (1)

Country Link
CN (1) CN108235119B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108900904B (en) * 2018-07-27 2021-10-15 北京市商汤科技开发有限公司 Video processing method and device, electronic equipment and storage medium
CN111340921A (en) * 2018-12-18 2020-06-26 北京京东尚科信息技术有限公司 Dyeing method, dyeing apparatus, computer system and medium
CN112651056B (en) * 2019-10-11 2024-05-31 中国信息通信研究院 Anti-screenshot display method, device and system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7113210B2 (en) * 2002-05-08 2006-09-26 Hewlett-Packard Development Company, L.P. Incorporating pixel replacement for negative values arising in dark frame subtraction
US8347344B2 (en) * 2008-12-18 2013-01-01 Vmware, Inc. Measuring remote video playback performance with embedded encoded pixels
CN102833490A (en) * 2011-06-15 2012-12-19 新诺亚舟科技(深圳)有限公司 Method and system for editing and playing interactive video, and electronic learning device
CN102663375B (en) * 2012-05-08 2014-02-19 合肥工业大学 Active target identification method based on digital watermark technology in H.264
EP2982131B1 (en) * 2013-03-15 2019-05-08 Inscape Data, Inc. Systems and methods for real-time television ad detection using an automated content recognition database
US9832338B2 (en) * 2015-03-06 2017-11-28 Intel Corporation Conveyance of hidden image data between output panel and digital camera
CN107566837B (en) * 2017-08-30 2019-05-07 苏州科达科技股份有限公司 The time labeling method and system of video

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP2788973A1 (en) * 2011-12-08 2014-10-15 Dolby Laboratories Licensing Corporation Mapping for display emulation based on image characteristics
US9521377B2 (en) * 2013-10-08 2016-12-13 Sercomm Corporation Motion detection method and device using the same
CN105631446A (en) * 2015-12-17 2016-06-01 天脉聚源(北京)科技有限公司 Method and device for determining interactive corner mark prompt
CN106412710A (en) * 2016-09-13 2017-02-15 北京小米移动软件有限公司 Method and device for exchanging information through graphical label in live video streaming
CN106570816A (en) * 2016-10-31 2017-04-19 努比亚技术有限公司 Method and device for sending and receiving information

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
"基于H.264的安全型和内容级认证的视频水印算法";唐银;《中国优秀硕士学位论文全文数据库》;20170515(第5期);全文 *

Also Published As

Publication number Publication date
CN108235119A (en) 2018-06-29

Similar Documents

Publication Publication Date Title
US10692465B2 (en) Transitioning between video priority and graphics priority
JP6595006B2 (en) Low latency screen mirroring
US10511803B2 (en) Video signal transmission method and device
CN108235119B (en) Video processing method and device, electronic equipment and computer readable medium
CN107426606B (en) Screen recording method and device, electronic equipment and system
CN111327959A (en) Video frame insertion method and related device
CN112437345B (en) Video double-speed playing method and device, electronic equipment and storage medium
KR102617258B1 (en) Image processing method and apparatus
US11122245B2 (en) Display apparatus, method for controlling the same and image providing apparatus
US11627369B2 (en) Video enhancement control method, device, electronic device, and storage medium
US20220382053A1 (en) Image processing method and apparatus for head-mounted display device as well as electronic device
GB2609529A (en) High-speed real-time data transmission method and apparatus, device, and storage medium
CN109862019B (en) Data processing method, device and system
US8994789B2 (en) Digital video signal, a method for encoding of a digital video signal and a digital video signal encoder
CN106412718A (en) Rendering method and device for subtitles in 3D space
CN115665502B (en) Video data processing method, injection method, system, equipment and storage medium
US10764578B2 (en) Bit rate optimization system and method
US20200311947A1 (en) Image processing apparatus, transmission method, and storage medium
CN110381308A (en) A kind of system for testing live video treatment effect
KR102331537B1 (en) Apparatus and method for decoding
US11908340B2 (en) Magnification enhancement of video for visually impaired viewers
US20240021216A1 (en) Automation of Media Content Playback
US10650240B2 (en) Movie content rating
CN104811659A (en) Intelligent hollowing method for image stacking based on optical fiber distributed display system
CN113542760A (en) Video transmission method, video playing equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant