CN112969080B - Image processing method, system, equipment and storage medium - Google Patents

Info

Publication number
CN112969080B
CN112969080B (application CN202110205015.1A)
Authority
CN
China
Prior art keywords
frame
color
inverse
inverse color
parameters
Prior art date
Legal status
Active
Application number
CN202110205015.1A
Other languages
Chinese (zh)
Other versions
CN112969080A (en)
Inventor
刘桂华
Current Assignee
Xiamen Attiot Intelligent Technology Co ltd
Original Assignee
Xiamen Attiot Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Xiamen Attiot Intelligent Technology Co., Ltd.
Priority to CN202110205015.1A
Publication of CN112969080A
Application granted
Publication of CN112969080B
Legal status: Active
Anticipated expiration

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/2347Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving video stream encryption
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44012Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving rendering scenes according to scene graphs, e.g. MPEG-4 scene graphs
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4408Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

The application relates to an image processing method, system, device, and storage medium. The image processing method comprises the following steps: determining a primary color frame and determining the inverse color frame of the primary color frame; determining a plurality of groups of inverse color parameters based on the inverse color frame, wherein superimposing all groups of inverse color parameters reproduces the inverse color frame; acquiring a video frame code stream to be processed and inserting primary color frames into the code stream; determining the video frames between two primary color frames and processing them based on the inverse color parameters; and outputting the processed video frame code stream to a display device for display. After the method, device, and computer equipment provided by the embodiments of the application are applied to image processing, a recorded video differs markedly from the original video in color gamut, which deters video piracy to a certain extent.

Description

Image processing method, system, equipment and storage medium
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, system, device, and storage medium.
Background
Electronic devices greatly facilitate people's lives, but they sometimes also bring negative effects. For example, when electronic devices were still uncommon, pirated movies circulated mainly in the form of discs; because producing the discs took time, a cinema only had to secure its film prints, and box-office revenue was largely unaffected. With the progress of electronic devices, however, recording a movie screen has become extremely simple: a small tripod and a recording device (such as a smartphone) suffice to capture an entire film, and with the camera positioned well the recording can be quite sharp. Cinema staff cannot patrol the theater continuously, because doing so would greatly degrade the viewing experience.
Therefore, in the modern movie market, anti-piracy efforts are aimed mainly at electronic copying. Electronic copies of a movie generally come from two sources: leaked cinema prints and recordings made by viewers. The former is very unlikely, so prevention focuses on the latter; but because personal electronic devices carry private data, they cannot simply be confiscated from viewers during a screening. As a result, effective defenses against electronic piracy are still lacking.
Disclosure of Invention
Based on this, it is necessary to provide an image processing method, system, device, and storage medium to address the current lack of defenses against electronic piracy.
A first aspect of the present application provides an image processing method, including:
determining a primary color frame and determining an inverse color frame of the primary color frame based on the primary color frame;
determining a plurality of groups of inverse color parameters based on the inverse color frame, wherein superimposing all groups of inverse color parameters reproduces the inverse color frame;
acquiring a video frame code stream to be processed, and inserting a primary color frame into the video frame code stream;
determining a video frame between two primary color frames, and processing the video frame between the two primary color frames based on the inverse color parameters;
and outputting the processed video frame code stream to a display device for display.
In one embodiment, when the plurality of sets of inverse color parameters are determined based on the inverse color frame, each set of inverse color parameters is obtained from all pixel points of the inverse color frame, taking the whole inverse color frame as the unit.
In one embodiment, a set of inverse color parameters is obtained on the basis of all pixels of the inverse color frame, specifically:
determining a set of first weight values;
a set of inverse color parameters is determined based on the color gamut parameters of the inverse color frame and a first weight value.
In one embodiment, the steps of determining a plurality of sets of inverse color parameters based on the inverse color frame specifically include:
dividing the inverse color frame into a plurality of areas, and respectively determining a group of second weight values corresponding to each area;
determining a set of inverse color parameters according to the color gamut parameters of the inverse color frame and the plurality of second weight values, wherein the areas corresponding to the plurality of second weight values of one set of inverse color parameters together cover the entire display area of the inverse color frame.
In one embodiment, the inserting the primary color frames into the video frame code stream is specifically:
the primary color frames are inserted at a preset period T, where T is larger than the number of groups of inverse color parameters.
In one embodiment, the determining a video frame between two primary color frames and processing the video frame between two primary color frames based on the inverse color parameter specifically includes:
determining a video frame between two primary color frames;
each video frame corresponds to one or more groups of inverse color parameters, and each video frame is processed using the inverse color parameters corresponding to it.
In one embodiment, the determining a video frame between two primary color frames and processing the video frame between two primary color frames based on the inverse color parameter specifically includes:
determining a video frame between two primary color frames;
if the number of video frames between the two primary color frames is greater than or equal to the number of groups of inverse color parameters, each group of inverse color parameters corresponds to one video frame, and that video frame is processed using that group of inverse color parameters.
A second aspect of the present application provides an image processing system, comprising:
an inverse color determiner, configured to determine a primary color frame and determine the inverse color frame of the primary color frame;
a parameter determiner, configured to determine a plurality of sets of inverse color parameters based on the inverse color frame;
an inserter, configured to acquire a video frame code stream to be processed and insert primary color frames into the code stream;
an image processor, configured to determine the video frames between two primary color frames and process them based on the inverse color parameters; and
an output component, configured to output the processed video frame code stream to a display device for display.
A third aspect of the present application provides a computer device comprising: a processor; a memory for storing executable instructions of the processor; the processor is configured to perform the steps of the above method via execution of the executable instructions.
A fourth aspect of the present application provides a machine-readable storage medium having stored thereon a computer program which, when executed by a processor, performs the steps of the above method.
The beneficial effects are as follows:
After the method, system, and computer equipment provided by the embodiments of the application are applied to image processing, a recorded video differs markedly from the original video in color gamut. On the one hand, this makes pirated recordings easy to distinguish, even when recorded at high definition, which helps legally minded viewers reject pirated video; on the other hand, the color-gamut processing leaves the recorded video's colors uncoordinated and tiring to the eyes, which objectively discourages continued viewing.
Drawings
FIG. 1 is a flow chart of an image processing method according to an embodiment of the present application;
FIG. 2 is a process diagram of an image processing method according to an embodiment of the present application;
FIG. 3 is a flowchart of an image processing method according to another embodiment of the present application;
FIG. 4 is a process diagram of an image processing method according to another embodiment of the present application;
FIG. 5 is a flowchart of an image processing method according to another embodiment of the present application;
FIG. 6 is a process diagram of an image processing method according to another embodiment of the present application;
FIG. 7 is a flowchart of an image processing method according to another embodiment of the present application;
FIG. 8 is a process diagram of an image processing method according to another embodiment of the present application;
FIG. 9 is a flowchart of an image processing method according to another embodiment of the present application;
fig. 10 is a schematic view of an application scenario of an image processing method according to another embodiment of the present application;
FIG. 11 is a system architecture diagram of an image processing system according to an embodiment of the present application;
FIG. 12 is a system architecture diagram of an image processing system according to one embodiment of the present application;
FIG. 13 is a system architecture diagram of an image processing system according to an embodiment of the present application;
FIG. 14 is a system architecture diagram of an image processing system according to an embodiment of the present application;
fig. 15 is a system architecture diagram of an image processing system according to an embodiment of the present application.
Detailed Description
In order to facilitate an understanding of the present application, a more complete description of the present application will now be provided with reference to the relevant figures. Preferred embodiments of the present application are shown in the accompanying drawings. This application may, however, be embodied in many different forms and is not limited to the embodiments described herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this application belongs. The terminology used herein in the description of the application is for the purpose of describing particular embodiments only and is not intended to be limiting of the application. The term "and/or" as used herein includes any and all combinations of one or more of the associated listed items.
Embodiments of the present invention provide an image processing method, system, and computer device, which may be applied to cinemas or any other scenario where images require special processing for anti-piracy, or where an original video needs to be distinguished from a recorded one. After the method, system, and computer device provided by the embodiments are applied to image processing, a recorded video differs markedly from the original video in color gamut. On the one hand, this makes pirated recordings easy to distinguish, even at high definition, which helps legally minded viewers reject pirated video; on the other hand, the color-gamut processing leaves the recording's colors uncoordinated and tiring to the eyes, which objectively discourages continued viewing.
Referring to fig. 1, an exemplary image processing method according to an embodiment of the present application is shown, and the method includes the following steps:
s12: determining a primary color frame and determining an inverse color frame of the primary color frame based on the primary color frame;
the video is composed of continuously output video frames, the video frames are transmitted to a display device for human eyes to watch in a short time, and the video frames are displayed according to a certain sequence, namely, the video seen by the human eyes is formed. The primary color frames need to be determined before the video frames of the video are processed. The primary color frame may be any image frame.
As an alternative embodiment, the primary color frames may be predefined by the user. For example, the image processing system may hold an image library storing several images, any of which can serve as a primary color frame; determining the primary color frame then means selecting from the library. The selection may be random, i.e., each time a video is processed, one primary color frame is chosen at random from the library. It may also follow predefined rules, for example selecting primary color frames for different videos in the order the images are stored, or according to other rules (such as the video's content tags, classification tags, or the results of image recognition on its key frames). Different videos may end up with the same primary color frame, for instance when two videos randomly pick the same image, or when two videos with the same or similar tags are matched to the same image by the rule.
Of course, instead of selecting an image from a predefined library in the image processing system, the primary color frame may be determined in other ways, for example by randomly determining an image as the primary color frame before each playback, so that playing the same video at different times yields different image processing results.
In some particular embodiments, the primary color frame may be a solid color image frame, such as a solid red, solid green, or solid blue frame. In that case the user or the system can obtain a primary color frame conveniently even in an ad hoc setting, and the on-site display also benefits, since interference of the image processing with the displayed (original) video is reduced or avoided.
As another alternative, the primary color frames may be selected from the video frames of the video received by the image processing system. It will be appreciated that, in this embodiment, the choice of primary frames depends on the insertion strategy: the same primary frame may be used throughout the image processing, or the primary frame may change during processing, i.e., the primary frames may comprise a plurality of different frames. This is described in detail hereinafter.
After the primary color frame is determined, it can be subjected to color inversion to obtain its inverse color frame. When the inverse color frame and the primary color frame are superimposed on the display device, the result is a pure white frame.
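The inversion step can be sketched in a few lines. This is an illustrative reading of the text, assuming 8-bit RGB channels and modeling a frame as a list of (R, G, B) tuples; none of the function names come from the patent:

```python
def invert_frame(frame):
    """Return the inverse color frame: each 8-bit channel becomes 255 - value."""
    return [(255 - r, 255 - g, 255 - b) for (r, g, b) in frame]

def superimpose(frame_a, frame_b):
    """Channel-wise sum of two frames (no clipping, for illustration only)."""
    return [(ra + rb, ga + gb, ba + bb)
            for (ra, ga, ba), (rb, gb, bb) in zip(frame_a, frame_b)]

primary = [(200, 30, 90), (0, 128, 255)]   # hypothetical two-pixel primary frame
inverse = invert_frame(primary)
# superimposing primary and inverse yields pure white everywhere
assert superimpose(primary, inverse) == [(255, 255, 255), (255, 255, 255)]
```

Under this model, the white-frame property holds for any primary frame, since each channel pair sums to 255 by construction.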
S14: determining a plurality of groups of inverse color parameters based on the inverse color frame, wherein the inverse color frame is obtained after the inverse color parameters of all groups are overlapped;
after the anti-color frame is obtained, a plurality of groups of anti-color parameters can be obtained from the anti-color frame, and the plurality of groups of anti-color parameters are applied to the processing of the video frame of the video received by the image processing system. The inverse color frame is obtained after all the inverse color parameter groups are overlapped, or the inverse color parameters can be regarded as the decomposition factors of the inverse color frame, and the sum of all the decomposition factors (inverse color parameters) needs to obtain the inverse color frame before decomposition. Specifically, the inverse color frame includes several pixels, each pixel has a chromaticity value (R, G, B), the inverse color frame is processed microscopically by each pixel, and the inverse color parameters are chromaticity values (Ri, gi, bi) corresponding to the pixel, and the chromaticity values of all the inverse color parameters obtained after processing the pixel are added to obtain the chromaticity value of the pixel of the inverse color frame, i.e., r=r1+r2+ … … +ri; g=g1+g2+ … … +gi; b=b1+b2+ … … +bi. Therefore, the inverse color parameter is a matrix formed by the chromaticity values of the plurality of pixel points.
For example, referring to fig. 2, after the primary color frame A undergoes color inversion, the inverse color frame T is obtained; the inverse color frame is then decomposed into a plurality of sets of inverse color parameters. Each set is in fact a chromaticity-value matrix over the pixel points which, when shown on a display device, appears as a decomposition frame C1, C2, …, Ci.
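The decomposition constraint, that the channel-wise sum of all parameter sets must reproduce the inverse color frame, can be sketched as follows. The even-split-with-remainder policy is our own illustrative choice, not something the patent specifies:

```python
def decompose(inverse_frame, i):
    """Split an inverse color frame into i chromaticity matrices C1..Ci whose
    channel-wise sum reproduces the frame. Even integer split; the last set
    absorbs the integer-division remainders."""
    sets = []
    for k in range(i):
        set_k = []
        for (r, g, b) in inverse_frame:
            if k < i - 1:
                set_k.append((r // i, g // i, b // i))
            else:
                set_k.append((r - (i - 1) * (r // i),
                              g - (i - 1) * (g // i),
                              b - (i - 1) * (b // i)))
        sets.append(set_k)
    return sets

T = [(55, 225, 165)]                  # hypothetical single-pixel inverse frame
C = decompose(T, 3)
# channel-wise sum of C1..Ci equals T again
total = [tuple(sum(s[p][c] for s in C) for c in range(3)) for p in range(len(T))]
assert total == T
```

Any other split (random, weighted, per-region) would satisfy the same invariant as long as the sets sum back to T.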
In an alternative embodiment, in the step of determining the sets of inverse color parameters based on the inverse color frame, each set of inverse color parameters is obtained from all pixels of the inverse color frame, taking the whole frame as the unit. That is, the number of chromaticity values in a set of inverse color parameters equals the number of pixels of the inverse color frame.
In one or more embodiments, a set of weight values may be predetermined, and the inverse color frame is then decomposed according to those weight values. For example, referring to fig. 3, in one embodiment a set of inverse color parameters is obtained from all pixels of the inverse color frame, taking the frame as the unit, specifically:
s142: determining a set of first weight values;
s144: a set of inverse color parameters is determined based on the color gamut parameters of the inverse color frame and a first weight value.
Firstly, a group of first weight values are determined, then, each pixel point of the inverse color frame is processed in sequence, color gamut parameters (such as chromaticity values) of each pixel point are combined with the first weight values to obtain inverse color parameters corresponding to the pixel point, and a matrix formed by the inverse color parameters of all the pixel points is a group of inverse color parameters. For example, in a specific embodiment, the color gamut parameter and the first weight value are multiplied to obtain the inverse color parameter.
Referring to fig. 4, the inverse color frame has a plurality of pixels with the same or different chromaticity values; for convenience, only one pixel, with chromaticity value (R2, G2, B2), is shown. After several sets of weight values (x1, y1, z1), (x2, y2, z2) … (xi, yi, zi) are determined, the corresponding inverse color parameters can be obtained. To better illustrate them, each set of inverse color parameters is shown in the figure as the decomposition frame it would produce on a display device: decomposition frame C1 is obtained from the inverse color frame T and the weight values (x1, y1, z1), C2 from T and (x2, y2, z2), and Ci from T and (xi, yi, zi). For example, multiplying the chromaticity value by the weight value gives the inverse color parameter: Ri = R2 × xi; Gi = G2 × yi; Bi = B2 × zi.
Of course, the first weight values may take other forms; for example, each first weight value may be a fraction, with all first weight values summing to 1, which guarantees that superimposing all inverse color parameters reproduces the inverse color frame.
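A minimal sketch of this fractional-weight variant, assuming per-channel weights that sum to 1; the weight values and pixel data below are invented for illustration:

```python
# Three hypothetical sets of first weight values (x_k, y_k, z_k); for each
# channel the weights sum to 1, so the superposition restores the frame.
weights = [(0.5, 0.25, 0.5), (0.25, 0.5, 0.25), (0.25, 0.25, 0.25)]
assert all(abs(sum(w[c] for w in weights) - 1.0) < 1e-9 for c in range(3))

def apply_weights(inverse_frame, w):
    """Scale each channel of every pixel by the set's weight: Ri = R * x, etc."""
    x, y, z = w
    return [(r * x, g * y, b * z) for (r, g, b) in inverse_frame]

T = [(200.0, 100.0, 80.0)]                       # one inverse-frame pixel
param_sets = [apply_weights(T, w) for w in weights]
recombined = [tuple(sum(s[0][c] for s in param_sets) for c in range(3))]
assert recombined == T
```

The dyadic weights chosen here make the float arithmetic exact; in practice rounding per pixel would need a remainder-absorbing set, as in the integer sketch above.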
Referring to fig. 5, as another alternative embodiment, in the step of determining a plurality of sets of inverse color parameters based on the inverse color frame, the inverse color frame is first divided into a plurality of areas and the inverse color parameters are then obtained area by area. The step specifically includes:
s146: dividing the inverse color frame into a plurality of areas, and respectively determining a group of second weight values corresponding to each area;
s148: determining a set of inverse color parameters according to the color gamut parameters of the inverse color frame and the plurality of second weight values; and adding the areas corresponding to the plurality of second weight values to form all display areas of the reverse color frame.
When dividing the regions, a preset template can be used. Since the inverse color parameters are then obtained per divided region, each decomposition frame corresponds to several sets of inverse color parameters, and the number of sets of inverse color parameters is larger than the number of decomposition frames obtained. For any single pixel, however, the sum of that pixel's chromaticity values over all decomposition frames still equals its chromaticity value in the inverse color frame.
It will be appreciated that the region division for the inverse color frame is a virtual division, rather than an actual division.
Referring to fig. 6, in the embodiment shown there, the inverse color frame is divided into 2 regions, and one pixel point of each region T1, T2 is shown: (R1, G1, B1) and (R2, G2, B2). When determining the second weight values, a set of second weight values (x11, y11, z11), (x12, y12, z12) … (x1i, y1i, z1i) is determined for region T1, and a set (x21, y21, z21), (x22, y22, z22) … (x2j, y2j, z2j) for region T2. The pixel points in T1 and T2 are then processed separately: T1 is processed with the parameter sets (x1i, y1i, z1i) and T2 with the parameter sets (x2j, y2j, z2j), and any parameter set for T1 combined with any parameter set for T2 forms the parameter set of one decomposition frame Ci. Likewise, when the inverse color frame is divided into more regions, the parameters of all regions together form the parameter set of one decomposition frame Ci; that is, the areas corresponding to the second weight values that determine one set of inverse color parameters add up to the entire display area of the inverse color frame.
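The region-based scheme can be sketched as follows, assuming each region carries its own weight list and the lists have equal length; the data structures and names are our own illustration, not the patent's:

```python
def decompose_regions(regions, region_weights):
    """regions: dict name -> pixel list; region_weights: dict name -> list of
    (x, y, z) second weight values, one per decomposition frame. Each
    decomposition frame combines one weight set from every region, so the
    regions together cover the whole inverse color frame."""
    n = len(next(iter(region_weights.values())))
    frames = []
    for k in range(n):
        frame = {}
        for name, pixels in regions.items():
            x, y, z = region_weights[name][k]
            frame[name] = [(r * x, g * y, b * z) for (r, g, b) in pixels]
        frames.append(frame)
    return frames

regions = {"T1": [(100.0, 40.0, 80.0)], "T2": [(60.0, 200.0, 20.0)]}
region_weights = {"T1": [(0.5, 0.5, 0.5), (0.5, 0.5, 0.5)],
                  "T2": [(0.25, 0.75, 0.5), (0.75, 0.25, 0.5)]}
frames = decompose_regions(regions, region_weights)
# per region, summing over the decomposition frames restores that region of T
for name, pixels in regions.items():
    total = [tuple(sum(f[name][0][c] for f in frames) for c in range(3))]
    assert total == [pixels[0]]
```

Note that T1 and T2 may use entirely different weight patterns, as long as each region's weights sum to 1 per channel.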
S16: acquiring a video frame code stream to be processed, and inserting a primary color frame into the video frame code stream;
after the primary color frame and the reverse color parameter set are determined, the video can be subjected to image processing. The image processing system receives the video frame code stream and processes the image according to the primary color frames and the anti-color parameter sets.
During image processing, primary color frames are first inserted into the video stream; taking the primary color frames as references, the video frames are then processed with the inverse color parameters. The insertion may be random, i.e., primary color frames are inserted at random positions, in which case the numbers of video frames between successive primary color frames may be equal or unequal.
In some of these embodiments, the primary color frames are inserted at a preset period T, where T is larger than the number of sets of inverse color parameters. Here T is the number of video frames between two primary color frames, so that number is fixed.
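Periodic insertion can be sketched as below, modeling the stream as a list of frames, with T counting the video frames between two consecutive primary color frames; the helper name is ours:

```python
def insert_primary_frames(stream, primary, period):
    """Insert the primary color frame before every `period` video frames,
    so exactly `period` video frames lie between two primary color frames."""
    out = []
    for idx, frame in enumerate(stream):
        if idx % period == 0:
            out.append(primary)
        out.append(frame)
    return out

stream = [f"v{i}" for i in range(8)]
processed = insert_primary_frames(stream, "P", period=4)
assert processed == ["P", "v0", "v1", "v2", "v3", "P", "v4", "v5", "v6", "v7"]
```

With period=4, at most 4 frames are available for the parameter sets between two primary frames, which is why the text requires T to exceed the number of sets (or the surplus sets must double up, as discussed below).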
S18: determining a video frame between two primary color frames, and processing the video frame between the two primary color frames based on the inverse color parameters;
After the primary color frames are inserted, the video frames between two primary color frames are determined, and these frames can be processed with the inverse color parameters. The processing is similar to the method of obtaining decomposition frames from the inverse color frame, with the video frame taking the place of the inverse color frame, and is not repeated here. The processing yields rendered video frames.
In one or more embodiments, multiple sets of anti-color parameters may be utilized to process the same video frame. Specifically, referring to fig. 7, the determining a video frame between two primary color frames and processing the video frame between two primary color frames based on the inverse color parameter specifically includes:
s182: determining a video frame between two primary color frames;
s184: each video frame corresponds to one or more groups of anti-color parameters, and the anti-color parameters corresponding to the video frame are utilized to process the video frame.
For example, when the number of video frames between two primary color frames is smaller than the number of sets of inverse color parameters, at least one video frame must be processed multiple times to preserve the display effect of the original video, each pass applying one set of inverse color parameters.
Referring to fig. 8, fig. 8 shows two primary color frames with 4 video frames between them, while there are 5 sets of inverse color parameters (xi, yi, zi). Even after each video frame is processed with one set, one set remains; if the video were output to the display device in that state, it would differ noticeably from the fully processed video. The 5th set must therefore be superimposed onto one of the 4 video frames, and the frame carrying it is processed twice, with two sets of inverse color parameters applied.
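The surplus-set case can be sketched as a round-robin assignment of parameter sets to frames; the round-robin policy is an assumption on our part, since the text only requires that every set be applied:

```python
def assign_parameter_sets(n_frames, n_sets):
    """Return, per video frame, the list of parameter-set indices applied to
    it. Round-robin: surplus sets wrap around onto the earliest frames, so
    those frames are processed more than once."""
    assignment = [[] for _ in range(n_frames)]
    for s in range(n_sets):
        assignment[s % n_frames].append(s)
    return assignment

# The fig. 8 scenario: 4 frames between primary frames, 5 parameter sets.
assignment = assign_parameter_sets(4, 5)
assert assignment == [[0, 4], [1], [2], [3]]   # frame 0 is processed twice
# every set is applied exactly once somewhere
assert sorted(i for frame in assignment for i in frame) == [0, 1, 2, 3, 4]
```

When n_frames >= n_sets, the same function degenerates to at most one set per frame, matching the S186/S188 case below.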
Of course, when the number of video frames between two primary color frames is greater than or equal to the number of sets of inverse color parameters, the same video frame can still be processed with multiple sets of inverse color parameters.
It will be appreciated that when the inverse color parameters are determined region by region, the processing of the video frames is likewise performed region by region.
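As a minimal sketch (in Python, with illustrative names and offset values, none of which come from the patent text), the round-robin distribution described above can be written so that every set of inverse color parameters is applied even when there are fewer video frames than parameter sets:

```python
# Hypothetical sketch of the fig. 8 scenario: more inverse-color parameter
# sets than video frames between two primary color frames.

def assign_parameter_sets(num_frames, param_sets):
    """Round-robin assignment: every parameter set is applied exactly once,
    so some frames carry more than one set when sets outnumber frames."""
    assignment = [[] for _ in range(num_frames)]
    for i, params in enumerate(param_sets):
        assignment[i % num_frames].append(params)
    return assignment

# 4 video frames, 5 sets of (xi, yi, zi) offsets: the 5th set is
# superimposed on the first frame, which is therefore processed twice.
sets5 = [(1, 0, 0), (0, 1, 0), (0, 0, 1), (1, 1, 0), (0, 1, 1)]
plan = assign_parameter_sets(4, sets5)
```

Here `plan[0]` holds two parameter sets while the remaining frames hold one each, so no set is left over and the superposition property of the group is preserved.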
In other embodiments, if the number of video frames between two primary color frames is sufficient, each set of inverse color parameters processes one video frame. Specifically, determining a video frame between two primary color frames and processing the video frame between the two primary color frames based on the inverse color parameters specifically includes:
S186: determining a video frame between two primary color frames;
S188: if the number of video frames between the two primary color frames is greater than or equal to the number of sets of inverse color parameters, each set of inverse color parameters corresponds to one video frame, and that video frame is processed using that set of inverse color parameters.
When the number of video frames between two primary color frames is greater than or equal to the number of sets of inverse color parameters, enough video frames are available to carry the inverse color parameters; therefore each set of inverse color parameters can correspond to one video frame, and processing that frame with that set yields a rendered video frame.
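Under the simplifying assumption that a frame's chromaticity can be reduced to a single (R, G, B) triple, the one-set-per-frame case can be sketched as follows (function names and offsets are illustrative, not from the patent):

```python
def clamp(v):
    """Clamp a channel value to the 8-bit display range."""
    return max(0, min(255, v))

def render_frames(frames, param_sets):
    """frames: list of (r, g, b) triples, one pixel standing in for a frame.
    With at least as many frames as parameter sets, each set is
    superimposed on exactly one frame; surplus frames pass through."""
    out = list(frames)
    for i, (dx, dy, dz) in enumerate(param_sets):
        r, g, b = out[i]
        out[i] = (clamp(r + dx), clamp(g + dy), clamp(b + dz))
    return out

rendered = render_frames([(100, 100, 100)] * 3, [(200, 0, 0), (0, 10, 0)])
```

The first two frames become rendered video frames carrying one offset each; the third is unchanged because the parameter sets were exhausted.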
As described in step S12, the primary color frame may be determined from a video frame of the video. When the primary color frame is selected from a video frame, then in step S16 the primary color frame may or may not be inserted into the video frame code stream. When it is inserted, the video contains two copies of the same frame, and the insertion position of the primary color frame is preferably adjacent to the frame it duplicates, so as to avoid interfering with the displayed picture. This is equivalent to repeating one video frame; the influence of the repeated frame is reduced by the subsequent video frames, and because the display slot of a single frame is extremely short, the repetition is invisible to the naked eye and its influence on the picture is negligible. When the primary color frame is not inserted, after the rendered (processed) video frames are superimposed, the video frame corresponding to that frame is subtracted; since the picture content of consecutive video frames is highly repetitive, the influence is even smaller.
When primary color frames are selected from the video frames and inserted between them, the primary color frame is redetermined at each insertion, i.e. each primary color frame is different, so that the primary color frame matches its adjacent video frame and does not affect the picture being displayed.
S110: outputting the processed video frame code stream to a display device for display.
After the video frame processing is completed, the video frames can be output to a display device for display. The video frames displayed on the display device include rendered video frames (i.e., video frames processed with the inverse color parameters), and may also include at least one of the primary color frames and the original video frames.
Fig. 10 is a schematic diagram of processed video frames being sent to a display device for display. Compared with the original video frames, a rendered video frame has been processed with the inverse color and carries a color cast. When recording with an electronic device, the shutter of the device cannot stay open continuously; it opens only at intervals, so the device captures only some of the video frames. Once a rendered video frame is missed, the primary color frame can no longer be cancelled, because cancellation requires superimposing all of the rendered video frames, and a color cast appears in the recording. When the video frames are further partitioned into regions, different regions of a single frame carry different color casts; these regional casts superimpose on the frame-level cast, further increasing the color cast and possibly even disturbing normal display in the recording. The recorded video therefore differs markedly from the original, its viewing quality is greatly reduced, and viewers are steered back to the original video.
For example, in fig. 10, the rendered video frames S01, S11, S12, S13, and S21 are biased toward R, G, B, and R respectively relative to the original video frames. Owing to the shutter open time of the electronic device, the device captures only S01, S11, S13, and S21; over this period the recorded video is strongly biased toward red, which degrades the viewing effect.
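A toy model of the fig. 10 effect, under the assumption that the chromaticity offsets of a complete group integrate to zero against the primary color frame (the offset values and function name are illustrative): a recorder whose shutter misses one rendered frame accumulates a non-zero residual, i.e. a color cast.

```python
def residual_cast(param_sets, captured_indices):
    """Sum of the chromaticity offsets actually captured by the recorder.
    Capturing a full group cancels to (0, 0, 0); missing any rendered
    frame leaves a residual color cast on the recording."""
    r = g = b = 0
    for i in captured_indices:
        dr, dg, db = param_sets[i]
        r, g, b = r + dr, g + dg, b + db
    return (r, g, b)

# Four offsets that cancel as a group (toy values).
offsets = [(10, 0, 0), (0, 10, 0), (0, 0, 10), (-10, -10, -10)]
full = residual_cast(offsets, range(4))     # the eye integrates everything
missed = residual_cast(offsets, [0, 1, 3])  # shutter misses one frame
```

The full capture cancels exactly, while the capture that skips one rendered frame is left with a blue-channel deficit, the recorded color cast.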
It should be noted that the method according to the embodiments of the present invention is not limited to the steps and sequence shown in the flowcharts; steps may be added to, removed from, or changed in the flowcharts according to different needs.
Referring to fig. 11, the present application further provides an image processing system 10, which includes an inverse color determining unit 110, a parameter determining unit 120, an inserter 130, an image processor 140, and an output unit 150. It should be understood that, corresponding to the embodiments of the image processing method above, the image processing system 10 may include some or all of the components shown in fig. 11. The functions of the components are described in detail below; the terms used in the embodiments of the image processing method above, together with their specific explanations, also apply to the functional descriptions below and are not repeated here.
an inverse color determining unit 110, for determining a primary color frame and determining an inverse color frame of the primary color frame based on the primary color frame;
a parameter determining unit 120 for determining several sets of inverse color parameters based on the inverse color frame;
an inserter 130, configured to obtain a video frame code stream to be processed, and insert a primary color frame into the video frame code stream;
an image processor 140 for determining a video frame between two primary color frames and processing the video frame between the two primary color frames based on the inverse color parameters; and
an output unit 150, configured to output the processed video frame code stream to a display device for display.
In some of these embodiments, the inserter 130 inserts the primary color frames at a preset period T, where T is greater than the number of sets of inverse color parameters.
Referring to fig. 12, in some of these embodiments, the parameter determining unit 120 may include:
a first weight determiner 121 for determining a set of first weight values;
the first inverse color parameter determiner 123 is configured to determine a set of inverse color parameters based on the color gamut parameters of the inverse color frame and a first weight value.
Referring to fig. 13, in other embodiments, the parameter determining unit 120 may include:
a second weight determiner 125, configured to determine a set of second weight values corresponding to each of the plurality of regions divided according to the inverse color frame;
a second inverse color parameter determiner 127, for determining a set of inverse color parameters based on the color gamut parameters of the inverse color frame and the plurality of second weight values, wherein the regions corresponding to the plurality of second weight values of one set of inverse color parameters together make up the entire display area of the inverse color frame.
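The weight-based determination used by both determiner variants above can be sketched per pixel: a set of weights summing to 1 splits the inverse color frame's chromaticity into parameter sets whose sum restores the original value, consistent with the decomposition recited in claim 1. Names and weight values here are illustrative assumptions.

```python
def decompose_pixel(chroma, weights):
    """Split one pixel's (r, g, b) chromaticity of the inverse color frame
    into len(weights) inverse-color parameters. Because the weights sum
    to 1, adding all parameters back restores the original chromaticity."""
    assert abs(sum(weights) - 1.0) < 1e-9
    return [tuple(w * c for c in chroma) for w in weights]

parts = decompose_pixel((120, 60, 30), [0.5, 0.3, 0.2])
recombined = tuple(sum(p[c] for p in parts) for c in range(3))
```

Each element of `parts` is one set of inverse color parameters for this pixel; superimposing all of them reproduces the inverse color frame's value at that pixel.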
Referring to fig. 14, in some embodiments, the image processor 140 includes:
a video frame determining section 141 for determining a video frame between two primary color frames;
a first image processing component 143, configured to process each video frame with one or more sets of inverse color parameters corresponding to the video frame.
Referring to fig. 15, in other embodiments, the image processor 140 includes:
a video frame determining section 141 for determining a video frame between two primary color frames;
a comparison component 145, for comparing the number of video frames between two primary color frames with the number of sets of inverse color parameters; and
a second image processing component 147, configured to, when the number of video frames between two primary color frames is greater than or equal to the number of sets of inverse color parameters, process each video frame with one set of inverse color parameters.
An embodiment of the present application also provides a machine-readable storage medium, on which a computer program is stored, wherein the computer program, when executed by a processor, implements the method of any of the embodiments described above.
The components/modules/units integrated in the system/computer apparatus, if implemented as software functional units and sold or used as separate products, may be stored in a computer readable storage medium. Based on this understanding, the present application may implement all or part of the flow of the methods of the above embodiments by instructing related hardware through a computer program, which may be stored in a computer readable storage medium; when executed by a processor, the computer program implements the steps of each method embodiment described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form. The computer readable storage medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM), a random access memory (RAM), an electrical carrier signal, a telecommunications signal, a software distribution medium, and so forth. It should be noted that the content of the computer readable medium may be adjusted according to the requirements of legislation and patent practice in each jurisdiction; for example, in some jurisdictions, according to legislation and patent practice, the computer readable medium does not include electrical carrier signals and telecommunication signals.
The present application also provides a computer device comprising: a processor; a memory for storing executable instructions of the processor; wherein the processor is configured to perform the method of any of the embodiments described above via execution of the executable instructions.
In the several embodiments provided herein, it should be understood that the disclosed systems and methods may be implemented in other ways. For example, the system embodiments described above are merely illustrative: the division into components is merely a logical functional division, and other divisions may be used in actual implementations.
In addition, each functional module/component in the embodiments of the present application may be integrated in the same processing module/component, or each module/component may exist alone physically, or two or more modules/components may be integrated in the same module/component. The integrated modules/components described above may be implemented in hardware or in hardware plus software functional modules/components.
It will be apparent to those skilled in the art that the embodiments of the present application are not limited to the details of the above-described exemplary embodiments, but may be embodied in other specific forms without departing from the spirit or essential characteristics thereof. The present embodiments are, therefore, to be considered in all respects as illustrative and not restrictive, the scope of embodiments being indicated by the appended claims rather than by the foregoing description, and all changes which come within the meaning and range of equivalency of the claims are therefore intended to be embraced therein. Any reference sign in a claim should not be construed as limiting the claim concerned. Furthermore, it is evident that the word "comprising" does not exclude other elements or steps, and that the singular does not exclude a plurality. A plurality of units, modules or means recited in a system, means or terminal claim may also be implemented by means of software or hardware by means of one and the same unit, module or means. The terms first, second, etc. are used to denote a name, but not any particular order.
The above examples merely represent a few embodiments of the present application; their description is relatively specific and detailed, but this should not be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make various modifications and improvements without departing from the spirit of the present application, all of which fall within the scope of protection of the present application. Accordingly, the scope of protection of the present application is determined by the appended claims.

Claims (10)

1. An image processing method, comprising:
determining a primary color frame and determining an inverse color frame of the primary color frame based on the primary color frame;
determining a plurality of sets of inverse color parameters based on the inverse color frame, wherein the inverse color frame is obtained by superimposing all sets of the inverse color parameters;
acquiring a video frame code stream to be processed, and inserting a primary color frame into the video frame code stream;
determining a video frame between two primary color frames, and processing the video frame between the two primary color frames based on the inverse color parameters;
outputting the processed video frame code stream to a display device for display;
wherein determining several sets of inverse color parameters based on the inverse color frame specifically includes: decomposing the chromaticity value of each pixel point of the inverse color frame to obtain a plurality of inverse color parameters, wherein adding the chromaticity values of all the inverse color parameters obtained for each pixel point restores the chromaticity value of that pixel point of the inverse color frame.
2. The image processing method according to claim 1, wherein the plurality of sets of inverse color parameters determined based on the inverse color frame specifically comprises:
when the inverse color parameters are determined, a group of inverse color parameters are obtained on the basis of all pixel points of the inverse color frame by taking the inverse color frame as a unit.
3. The image processing method according to claim 2, wherein the obtaining a set of inverse color parameters based on all pixels of the inverse color frame specifically includes:
determining a set of first weight values;
a set of inverse color parameters is determined based on the color gamut parameters of the inverse color frame and a first weight value.
4. The image processing method according to claim 1, wherein the plurality of sets of inverse color parameters determined based on the inverse color frame specifically comprises:
dividing the inverse color frame into a plurality of areas, and respectively determining a group of second weight values corresponding to each area;
determining a set of inverse color parameters according to the color gamut parameters of the inverse color frame and the plurality of second weight values, wherein the regions corresponding to the plurality of second weight values of one set of inverse color parameters together make up the entire display area of the inverse color frame.
5. The image processing method according to claim 1, wherein the inserting the primary color frames into the video frame code stream specifically comprises:
the primary color frames are inserted according to a preset period T, wherein T is greater than the number of sets of the inverse color parameters.
6. The image processing method according to claim 1, wherein the determining the video frame between the two primary color frames and processing the video frame between the two primary color frames based on the inverse color parameter specifically comprises:
determining a video frame between two primary color frames;
each video frame corresponds to one or more sets of inverse color parameters, and each video frame is processed using the inverse color parameters corresponding to it.
7. The image processing method according to claim 1, wherein the determining the video frame between the two primary color frames and processing the video frame between the two primary color frames based on the inverse color parameter specifically comprises:
determining a video frame between two primary color frames;
if the number of video frames between the two primary color frames is greater than or equal to the number of sets of the inverse color parameters, each set of inverse color parameters corresponds to one video frame, and that video frame is processed using that set of inverse color parameters.
8. An image processing system, comprising:
an inverse color determining unit, for determining a primary color frame and determining an inverse color frame of the primary color frame based on the primary color frame;
a parameter determining unit for determining a plurality of sets of inverse color parameters based on the inverse color frame;
the inserter is used for acquiring a video frame code stream to be processed and inserting the primary color frames into the video frame code stream;
an image processor for determining a video frame between two primary color frames and processing the video frame between the two primary color frames based on a color reversal parameter; and
an output unit, for outputting the processed video frame code stream to a display device for display;
wherein the parameter determining unit determines the plurality of sets of inverse color parameters based on the inverse color frame specifically by: decomposing the chromaticity value of each pixel point of the inverse color frame to obtain a plurality of inverse color parameters, wherein adding the chromaticity values of all the inverse color parameters obtained for each pixel point restores the chromaticity value of that pixel point of the inverse color frame.
9. A computer device, comprising: a processor; a memory for storing executable instructions of the processor; characterized in that the processor is configured to perform the steps of the method of any of the preceding claims 1-7 via execution of the executable instructions.
10. A machine readable storage medium having stored thereon a computer program, which when executed by a processor, implements the steps of the method of any of claims 1 to 7.
CN202110205015.1A 2021-02-24 2021-02-24 Image processing method, system, equipment and storage medium Active CN112969080B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110205015.1A CN112969080B (en) 2021-02-24 2021-02-24 Image processing method, system, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110205015.1A CN112969080B (en) 2021-02-24 2021-02-24 Image processing method, system, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN112969080A CN112969080A (en) 2021-06-15
CN112969080B true CN112969080B (en) 2023-06-06

Family

ID=76285868

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110205015.1A Active CN112969080B (en) 2021-02-24 2021-02-24 Image processing method, system, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN112969080B (en)

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105187756A (en) * 2015-08-04 2015-12-23 无锡赛睿科技有限公司 Video playing method capable of preventing pirating and copying
CN110717868A (en) * 2019-09-06 2020-01-21 上海交通大学 Video high dynamic range inverse tone mapping model construction and mapping method and device

Family Cites Families (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6104812A (en) * 1998-01-12 2000-08-15 Juratrade, Limited Anti-counterfeiting method and apparatus using digital screening
US7386125B2 (en) * 2002-10-28 2008-06-10 Qdesign Usa, Inc. Techniques of imperceptibly altering the spectrum of a displayed image in a manner that discourages copying
KR20100045487A (en) * 2007-08-21 2010-05-03 톰슨 라이센싱 Digital light processing anti-camcorder switch
US9094656B2 (en) * 2010-09-13 2015-07-28 Thomson Licensing Method for sequentially displaying a colour image
ES2654200T3 (en) * 2011-05-19 2018-02-12 Naxos Finance Sa System to provide private display of multimedia contents of a screen
US9251760B2 (en) * 2013-07-02 2016-02-02 Cisco Technology, Inc. Copy protection from capture devices for photos and videos
CN103986979B (en) * 2014-05-14 2019-05-24 快车科技有限公司 A kind of copy-right protection method and system
CN109886855A (en) * 2019-02-22 2019-06-14 福建三锋电子技研有限公司 A kind of non display watermark generation method of human eye
CN110381338B (en) * 2019-07-17 2022-02-18 腾讯科技(深圳)有限公司 Video data processing and analyzing method, device, equipment and medium
CN110768964B (en) * 2019-09-30 2022-01-04 深圳市奥拓电子股份有限公司 Video image coding processing method, system and storage medium
CN110806845B (en) * 2019-09-30 2021-10-26 深圳市奥拓电子股份有限公司 Image display control method, system and storage medium
CN111988672A (en) * 2020-08-13 2020-11-24 北京达佳互联信息技术有限公司 Video processing method and device, electronic equipment and storage medium

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105187756A (en) * 2015-08-04 2015-12-23 无锡赛睿科技有限公司 Video playing method capable of preventing pirating and copying
CN110717868A (en) * 2019-09-06 2020-01-21 上海交通大学 Video high dynamic range inverse tone mapping model construction and mapping method and device

Also Published As

Publication number Publication date
CN112969080A (en) 2021-06-15

Similar Documents

Publication Publication Date Title
EP1557031B1 (en) Techniques of imperceptibly altering the spectrum of a displayed image in a manner that discourages copying
EP3457359B1 (en) Method and apparatus for representing image granularity by one or more parameters
EP1665811B1 (en) Methods of processing and displaying images and display device using the methods
US20070242880A1 (en) System and method for the identification of motional media of widely varying picture content
Zhang et al. Kaleido: You can watch it but cannot record it
Seuntiëns et al. Viewing experience and naturalness of 3D images
CN110225265A (en) Advertisement replacement method, system and storage medium during video transmission
Bist et al. Tone expansion using lighting style aesthetics
CN110806845B (en) Image display control method, system and storage medium
US8150206B2 (en) Method and apparatus for representing image granularity by one or more parameters
CN110768964B (en) Video image coding processing method, system and storage medium
CN112969080B (en) Image processing method, system, equipment and storage medium
JP5142335B2 (en) Method and apparatus for displaying video pictures
Yue et al. Subjective quality assessment of animation images
Lenzen HDR for legacy displays using Sectional Tone Mapping
Beitzel et al. The Effect of Synthetic Shutter on Judder Perception—an HFR & HDR Data Set and User Study
TWI472223B (en) Image generating method
Xu Capturing and post-processing of stereoscopic 3D content for improved quality of experience
CN118264781A (en) Frame processing and displaying method and device for color image data
CN117278804A (en) File encryption method, device, equipment and storage medium
Schubin Why 4K: vision & television
CN112399148A (en) Virtual monitoring method and device based on virtual three-dimensional scene
Geuens Through the Looking Glasses: From the Camera Obscura to Video Assist
Björk HDR and its influence on visual components
Schnuelle Image compression evaluation for digital cinema: the case of Star Wars: Episode II

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
TA01 Transfer of patent application right

Effective date of registration: 20230516

Address after: Unit 903, 370 Chengyi street, phase 3, software park, Xiamen City, Fujian Province, 361000

Applicant after: XIAMEN ATTIOT INTELLIGENT TECHNOLOGY CO.,LTD.

Address before: 1020, international culture building, 3039 Shennan Middle Road, Futian District, Shenzhen, Guangdong 518000

Applicant before: Liu Guihua

GR01 Patent grant