CN110149507A - Video processing method, data processing device and storage medium - Google Patents

Video processing method, data processing device and storage medium

Info

Publication number
CN110149507A
CN110149507A (application number CN201811511097.7A)
Authority
CN
China
Prior art keywords
image frame
video
color
model
dynamic range
Prior art date
Legal status
Granted
Application number
CN201811511097.7A
Other languages
Chinese (zh)
Other versions
CN110149507B (en)
Inventor
翟海昌 (Zhai Haichang)
廖念波 (Liao Nianbo)
汪亮 (Wang Liang)
舒军 (Shu Jun)
Current Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Original Assignee
Tencent Technology (Shenzhen) Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Tencent Technology (Shenzhen) Co., Ltd.
Priority to CN201811511097.7A
Publication of CN110149507A
Application granted
Publication of CN110149507B
Legal status: Active
Anticipated expiration


Classifications

    • G06F 18/253 — Pattern recognition; analysing; fusion techniques of extracted features
    • G06T 7/90 — Image analysis; determination of colour characteristics
    • G06V 10/44 — Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/56 — Extraction of image or video features relating to colour
    • H04N 9/64 — Details of colour television systems; circuits for processing colour signals
    • H04N 9/68 — Circuits for processing colour signals for controlling the amplitude of colour signals, e.g. automatic chroma control circuits
    • G06T 2207/10016 — Image acquisition modality: video; image sequence
    • G06T 2207/20081 — Special algorithmic details: training; learning
    • G06T 2207/20084 — Special algorithmic details: artificial neural networks [ANN]


Abstract

This application discloses a video processing method, a data processing device and a storage medium. The video processing method comprises: obtaining a first video in standard dynamic range (SDR); converting the color space of the image frames in the first video into a color space corresponding to the gamut of high dynamic range (HDR), to obtain a second video; and adjusting the second video into a third video in high dynamic range based on a trained image adjustment model. With the video processing method of the application, color adjustment can be applied to the video content of an SDR source through a trained color adjustment model, to obtain an HDR video with a higher dynamic range and richer color.

Description

Video processing method, data processing device and storage medium
Technical field
This application relates to the field of video technology, and in particular to a video processing method, a data processing device and a storage medium.
Background art
With the development of video display technology, users' requirements on display quality keep rising. The high dynamic range (HDR) display mode can present a more lifelike picture than the standard dynamic range (SDR) display mode. Video in HDR format has a wide color gamut and high contrast, and HDR is being applied ever more widely. However, a significant portion of video still uses SDR sources. How to raise the picture quality of SDR video to that of HDR is a problem to be solved.
Summary of the invention
The present application proposes a video processing scheme that can raise the dynamic range of video picture content.
According to one aspect of the application, a video processing method is provided, comprising: obtaining a first video in standard dynamic range; converting the color space of the image frames in the first video into a color space corresponding to the gamut of high dynamic range, to obtain a second video; and adjusting the second video into a third video in high dynamic range based on a trained image adjustment model.
According to another aspect of the application, a data processing device is provided, comprising a processor and a memory, the processor being configured to: obtain a first video in standard dynamic range; convert the color space of the image frames in the first video into a color space corresponding to the gamut of high dynamic range, to obtain a second video; and adjust the second video into a third video in high dynamic range based on a trained image adjustment model.
According to another aspect of the application, a storage medium is provided, storing one or more programs, the one or more programs including instructions which, when executed by a data processing device, cause the data processing device to perform the video processing method of the application.
In summary, the video processing scheme of the application can apply color adjustment (also called color stretching) to the video content of an SDR source based on a trained color adjustment model, to obtain an HDR video with a higher dynamic range and richer color.
Brief description of the drawings
To explain the technical solutions in the embodiments of the application more clearly, the drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below are only some embodiments of the application; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 shows a schematic diagram of an application scenario 100 according to some embodiments of the application;
Fig. 2 shows a flow chart of a video processing method 200 according to some embodiments of the application;
Fig. 3A shows a flow chart of a method 300 for obtaining the second video according to some embodiments of the application;
Fig. 3B shows a chromaticity diagram according to some embodiments of the application;
Fig. 4 shows a flow chart of a method 400 for converting the color space of an image frame into a profile connection space according to some embodiments of the application;
Fig. 5 shows a flow chart of a method 500 for generating the second video from the ninth image frames according to some embodiments of the application;
Fig. 6 shows a flow chart of a method 600 for adjusting the second video into a third video in high dynamic range according to some embodiments of the application;
Fig. 7 shows a flow chart of a video processing method 700 according to some embodiments of the application;
Fig. 8A shows the first image frame of a sample according to some embodiments of the application;
Fig. 8B shows the second image frame of a sample according to some embodiments of the application;
Fig. 9 shows a flow chart of a method 900 for training the image adjustment model according to some embodiments of the application;
Fig. 10 shows a flow chart of a method 1000 for determining the color adjustment parameters of each pixel in the third image frame according to some embodiments of the application;
Fig. 11A shows a flow chart of a method 1100 for determining the color adjustment parameters of each pixel in the third image frame according to some embodiments of the application;
Fig. 11B shows a schematic diagram of the first neural network according to some embodiments of the application;
Fig. 11C shows a schematic diagram of extracting the fourth feature map using the second deep neural network 1020, the third deep neural network 1030 and the fourth deep neural network 1040, according to some embodiments of the application;
Fig. 11D shows a schematic diagram of the fifth neural network 1050 according to some embodiments of the application;
Fig. 11E shows a schematic diagram of outputting color adjustment parameters according to some embodiments of the application; and
Fig. 12 shows a structural diagram of a data processing device.
Detailed description of the embodiments
The technical solutions in the embodiments of the application will be described below clearly and completely with reference to the drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the application. All other embodiments obtained by those of ordinary skill in the art based on the embodiments in the application without creative effort shall fall within the protection scope of the application.
In some application scenarios, various user devices can support playback of high-dynamic-range video, while the source format of the video content is standard dynamic range. To play an originally SDR source on an HDR-capable device, a color space conversion operation can be applied to the SDR video to obtain a video in HDR format. However, the dynamic range of the HDR-format video obtained by such a conversion is in fact the same as that of the original SDR video: the conversion only changes the playback format and does not adjust the color information of the image frames in the original SDR video. In some schemes, each pixel of the converted HDR-format video is then color-adjusted using empirically determined tone mapping parameters, to raise the dynamic range of the video content and improve the consistency of image details with the real scene. However, the adjustment effect of empirically determined tone mapping parameters leaves much to be desired: it is unstable and prone to problems such as overexposure and brightness adjustment results inconsistent with the real scene.
Fig. 1 shows a schematic diagram of an application scenario 100 according to some embodiments of the application.
As shown in Fig. 1, the application scenario 100 may include a service system 102 and user devices 104 (e.g. user devices 104a and 104b). The service system 102 can communicate with one or more user devices 104 over one or more networks 106. A user device 104 can support playback of HDR-format video. The user devices 104 may include, but are not limited to, handheld computers, wearable data processing devices, personal digital assistants (PDAs), tablet computers, laptops, desktop computers, mobile phones, smartphones, Enhanced GPRS (EGPRS) mobile phones, media players, navigation devices, game consoles, television sets, any combination of two or more of these data processing devices, or other data processing devices. The service system 102 can provide multimedia content such as video and pictures to the user devices 104, and may include one or more servers.
Examples of the one or more networks 106 include local area networks (LANs) and wide area networks (WANs). In the embodiments of the application, the one or more networks 106 may be implemented using any network protocol, including various wired or wireless protocols such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Global System for Mobile communications (GSM), Enhanced Data GSM Environment (EDGE), Code Division Multiple Access (CDMA), Time Division Multiple Access (TDMA), Bluetooth, WiFi, Voice over IP (VoIP), Wi-MAX, or any other suitable communication protocol.
In some embodiments, the service system 102 can convert SDR video into HDR video. A user device 104 can play the HDR video obtained from the service system 102.
In some embodiments, a user device 104 can obtain SDR video from the service system 102. The user device 104 can also obtain SDR video from devices other than the service system 102 (e.g. a video storage device coupled to the user device 104). The user device 104 can convert the SDR video into HDR video locally.
Fig. 2 shows a flow chart of a video processing method 200 according to some embodiments of the application. The video processing method 200 may be performed by a data processing device such as the service system 102 or a user device 104.
As shown in Fig. 2, in step S201, a first video in standard dynamic range (SDR) is obtained. In some embodiments, the image frames in the first SDR video have a narrow color gamut (NCG), for example a Rec. 709 gamut or an sRGB gamut. The dynamic range of the image frames in the first video is, for example, [0, 100] nits.
In step S202, the color space of the image frames in the first video is converted into a color space corresponding to the gamut of high dynamic range, to obtain a second video. The gamut of high dynamic range may also be called a wide color gamut (WCG), for example a gamut of a type such as BT.2020 or DCI-P3. The color space corresponding to the high-dynamic-range gamut may, for example, be a wide-gamut YUV 4:2:0 or YUV 4:4:4 color space.
In step S203, based on a trained image adjustment model, the second video is adjusted into a third video in high dynamic range. Here, the image adjustment model may be any of various deep neural network models.
In summary, the video processing method 200 can apply color adjustment (also called color stretching) to the video content of an SDR source through a trained color adjustment model. This avoids the unstable adjustment effect of empirically determined tone mapping parameters, and yields an HDR video with a higher dynamic range and richer color.
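The composition of steps S201-S203 can be sketched as follows. This is a minimal illustration, not the patent's API: the function names are invented, the gamut conversion is stubbed out as an identity, and a mild clipped gain stands in for the trained model (the real conversions are detailed in methods 300-600 of this description).

```python
# Sketch of method 200 (steps S201-S203). All names and the placeholder
# implementations are illustrative assumptions.

def to_wide_gamut(frame):
    """S202 placeholder: re-express SDR pixels in a wide-gamut space."""
    return [tuple(px) for px in frame]  # identity stand-in

def adjust_with_model(frame):
    """S203 stand-in for the trained per-pixel model: mild gain, clipped."""
    return [tuple(min(1.0, 1.1 * c) for c in px) for px in frame]

def process_video(sdr_video):
    """sdr_video: list of frames; each frame a list of (r, g, b) in [0, 1]."""
    second = [to_wide_gamut(f) for f in sdr_video]   # S202 -> second video
    third = [adjust_with_model(f) for f in second]   # S203 -> third video
    return third

video = [[(0.2, 0.4, 0.95)]]
out = process_video(video)
```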
In some embodiments, step S202 may be implemented as method 300.
As shown in Fig. 3A, for any image frame of the first video, method 300 can execute step S301: convert the color space of the image frame into a profile connection space, obtaining a fifth image frame. Here, the profile connection space refers to CIE 1931 XYZ. CIE 1931 XYZ is a standard colorimetric system and a unified standard for color expression, measurement and characterisation. A standard color space can serve as a conversion bridge between color spaces of different gamuts. By converting the image frame into the fifth image frame, step S301 facilitates the subsequent conversion of the fifth image frame into the required gamut format.
In step S302, the color space of the fifth image frame is converted into a color space corresponding to the gamut of high dynamic range, obtaining a ninth image frame. In other words, step S302 can convert the color space of the fifth image frame into a wide-gamut color space; the image frame obtained after this conversion is the ninth image frame. For example, step S302 can convert the color space of the fifth image frame into an RGB space with a BT.2020 gamut. Fig. 3B shows a chromaticity diagram according to some embodiments of the application. As shown in Fig. 3B, region 301 represents the gamut of the profile connection space, region 302 the wide gamut, and region 303 the narrow gamut. The gamut corresponding to the color space of any image frame in the first video is region 303, that of the fifth image frame is region 301, and that of the ninth image frame is region 302.
In step S303, the second video is generated from the ninth image frames. Specifically, step S303 can generate the second video from the ninth image frame corresponding to each image frame in the first video, formatting the ninth image frames according to the bit depth and color space required for the second video. The second video can use a bit depth suited to representing high dynamic range, such as 10 or 12 bits, and a color space such as YUV 4:2:0. Step S303 can thus adjust the ninth image frames to a 10-bit or 12-bit depth and convert them to YUV 4:2:0, to obtain the second video.
In summary, method 300 obtains a fifth image frame in the standard color space by color space conversion of each image frame in the first video, then converts the fifth image frame into a wide-gamut ninth image frame, from which the second video can be obtained. In short, method 300 converts the first video in standard dynamic range into a second video that uses high dynamic range and awaits color adjustment, completing the preprocessing for the subsequent color adjustment.
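The two gamut legs of method 300 (narrow-gamut RGB into the XYZ profile connection space in S301, then XYZ into wide-gamut RGB in S302) reduce to two 3x3 matrix transforms on linear RGB. The sketch below uses the published D65 RGB-to-XYZ matrices for Rec. 709 and BT.2020; the patent does not fix particular primaries, so this pairing is an illustrative assumption.

```python
# Linear narrow-gamut RGB -> CIE 1931 XYZ -> linear wide-gamut RGB,
# as in steps S301/S302. Standard D65 matrices for Rec. 709 and BT.2020.

REC709_TO_XYZ = [[0.4124, 0.3576, 0.1805],
                 [0.2126, 0.7152, 0.0722],
                 [0.0193, 0.1192, 0.9505]]

BT2020_TO_XYZ = [[0.6370, 0.1446, 0.1689],
                 [0.2627, 0.6780, 0.0593],
                 [0.0000, 0.0281, 1.0610]]

def mat_vec(m, v):
    return [sum(m[r][k] * v[k] for k in range(3)) for r in range(3)]

def mat_inv3(m):
    """Invert a 3x3 matrix via the adjugate / determinant."""
    (a, b, c), (d, e, f), (g, h, i) = m
    det = a * (e * i - f * h) - b * (d * i - f * g) + c * (d * h - e * g)
    return [[(e * i - f * h) / det, (c * h - b * i) / det, (b * f - c * e) / det],
            [(f * g - d * i) / det, (a * i - c * g) / det, (c * d - a * f) / det],
            [(d * h - e * g) / det, (b * g - a * h) / det, (a * e - b * d) / det]]

XYZ_TO_BT2020 = mat_inv3(BT2020_TO_XYZ)

def rec709_to_bt2020(rgb):
    """Map one linear Rec. 709 pixel into linear BT.2020 via XYZ."""
    return mat_vec(XYZ_TO_BT2020, mat_vec(REC709_TO_XYZ, rgb))

white = rec709_to_bt2020([1.0, 1.0, 1.0])
green = rec709_to_bt2020([0.0, 1.0, 0.0])
```

Since both matrices share the D65 white point, Rec. 709 white maps (up to rounding of the 4-decimal coefficients) to BT.2020 white, and every Rec. 709 primary lands inside the larger BT.2020 gamut, i.e. with all components in [0, 1].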
In some embodiments, step S301 may be implemented as method 400.
As shown in Fig. 4, in step S401, the color space of the image frame is converted into the RGB (red-green-blue) color space, obtaining a sixth image frame. The first video is meant for playback on various user devices 104, and its color space format typically uses YUV. To enable the subsequent gamut conversion of the image frame, step S401 can convert the color space of the image frame into the RGB color space.
In step S402, each color component of the sixth image frame in the RGB color space (i.e. the red, green and blue components) is converted into floating-point format, obtaining a seventh image frame. By representing the color components as floating-point numbers, step S402 facilitates the subsequent adjustment of color bit depth (i.e. the precision of color representation).
In step S403, electro-optical conversion is applied to the seventh image frame, obtaining an eighth image frame. Step S403 can perform the conversion based on an electro-optical transfer function (EOTF). Since the image frames in the first video conform to the SDR standard, step S403 can use an EOTF corresponding to SDR (e.g. an inverse gamma correction function) to convert the seventh image frame from a nonlinear color space to a linear color space, obtaining the eighth image frame in a linear color space.
In step S404, the color space of the eighth image frame is converted into the profile connection space, obtaining the fifth image frame, which thus has the profile connection space. In summary, method 400 can convert the color space of each image frame in the first video into the profile connection space, so that the gamut of the color space can subsequently be adjusted.
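Steps S402 and S403 can be sketched as follows. The patent only says "an EOTF corresponding to SDR (e.g. an inverse gamma correction function)", so the choice of the sRGB transfer curve (IEC 61966-2-1, a common SDR linearisation) is an assumption here.

```python
# S402/S403 sketch: normalise an 8-bit code value to a float in [0, 1],
# then linearise it with the sRGB EOTF (piecewise: linear toe below
# 0.04045, 2.4-power segment above). sRGB as the SDR curve is assumed.

def code_to_float(code, bits=8):
    """S402: integer code value -> floating-point signal in [0, 1]."""
    return code / float((1 << bits) - 1)

def srgb_eotf(v):
    """S403: nonlinear signal v in [0, 1] -> linear light in [0, 1]."""
    if v <= 0.04045:
        return v / 12.92
    return ((v + 0.055) / 1.055) ** 2.4
```

The two branches meet continuously at v = 0.04045, and mid-grey nonlinear signal 0.5 linearises to roughly 0.21 — linear light well below half, which is why linearisation must precede any gamut matrix.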
In some embodiments, step S303 may be implemented as method 500.
In step S501, photoelectric conversion in the perceptual quantizer (PQ) mode is applied to the ninth image frame, obtaining a tenth image frame. Here, PQ, also known as SMPTE ST 2084, is a nonlinear opto-electrical transfer function. The PQ photoelectric conversion can convert the wide-gamut linear color space of the ninth image frame into a wide-gamut nonlinear color space, obtaining the tenth image frame in the wide-gamut nonlinear color space.
In step S502, the color bit depth of the tenth image frame is expressed as a bit depth corresponding to high dynamic range, obtaining an eleventh image frame. Here, the bit depth corresponding to high dynamic range is, for example, 10 or 12 bits.
In step S503, each eleventh image frame obtained from the image frames of the first video is converted into the format of the target color space, obtaining the second video. The color space of the eleventh image frames obtained from the image frames of the first video is, for example, RGB, while the target color space format is, for example, YUV; step S503 can convert the RGB format of the eleventh image frames into YUV format, to obtain the second video. In summary, method 500 can convert the first video in SDR format into the second video in HDR format whose color awaits adjustment.
In some embodiments, for any image frame of the second video, step S203 can execute method 600.
As shown in Fig. 6, in step S601, the color adjustment parameters of each pixel in the image frame are determined using the image adjustment model.
In step S602, for any image frame of the second video, the color of each pixel in the image frame is adjusted using the color adjustment parameters of that pixel, obtaining a third video corresponding to the second video. In summary, through method 600, step S203 can correct the color of each image frame, so that the third video has higher contrast (i.e. a wider brightness range) and a broader color gamut (i.e. richer color gradations) than the second video. It should be noted that the color adjustment model can be any of various deep neural network models, for example a deep residual network (ResNet), a deep reinforcement learning (DRL) model, or a generative adversarial network (GAN). By using a color adjustment model obtained by training a deep neural network model, method 600 can make the color adjustment effect closer to the real scene (i.e. a more faithful reproduction of the captured picture).
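The per-pixel correction of step S602 can be sketched as below. The patent leaves the form of the color adjustment parameters open, so parameterising them as a (gain, offset) pair per channel is an assumption made only for illustration.

```python
# S602 sketch: apply per-pixel color adjustment parameters to a frame.
# Each pixel's parameters are assumed to be (gain, offset) per channel;
# clipping keeps the result a valid signal in [0, 1].

def adjust_frame(frame, params):
    """frame: list of (r, g, b); params: list of ((gain, offset),) * 3."""
    out = []
    for px, per_px in zip(frame, params):
        out.append(tuple(
            min(1.0, max(0.0, gain * c + offset))
            for c, (gain, offset) in zip(px, per_px)
        ))
    return out

frame = [(0.30, 0.50, 0.90)]
params = [((1.5, 0.0), (1.0, 0.1), (2.0, 0.0))]  # brighten R, lift G, boost B
adjusted = adjust_frame(frame, params)
```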
Fig. 7 shows a flow chart of a video processing method 700 according to some embodiments of the application. The video processing method 700 may be performed by a data processing device such as the service system 102 or a user device 104.
As shown in Fig. 7, in step S701, a sample set for training the image adjustment model is obtained. Each sample in the sample set includes a first image frame in standard dynamic range and a second image frame in high dynamic range, where the picture content of the first image frame is identical to that of the second image frame. For example, Fig. 8A shows the first image frame of a sample, and Fig. 8B the second image frame of the same sample. Although Figs. 8A and 8B are shown as black-and-white images, the image frames may in fact be color images. Compared with the first image frame of Fig. 8A, the second image frame of Fig. 8B has a higher dynamic range (i.e. higher contrast) and richer color (i.e. a wider gamut).
In step S702, the image adjustment model is trained using the sample set. Here, step S702 can train a deep neural network model with the sample set, to obtain the trained image adjustment model. The deep neural network model being trained may, for example, be a deep residual network, a deep reinforcement learning model, or a generative adversarial network.
In addition, the video processing method 700 further includes steps S703-S705, whose implementation is identical to that of steps S201-S203 and is not repeated here.
In summary, based on a large sample set, method 700 can train the image adjustment model, improving the robustness of the model and making the color adjustment effect more stable. In particular, by applying a deep neural network model to adjusting the dynamic range of video, method 700 can make the color adjustment effect closer to the real scene (i.e. a more faithful reproduction of the captured picture).
In some embodiments, for the first image frame of any sample in the sample set, step S702 may be implemented as method 900.
As shown in Fig. 9, in step S901, the color space of the first image frame is converted into the color space corresponding to the gamut of high dynamic range, obtaining a third image frame. Here, the color space conversion is identical to that of step S202 above and is not repeated.
In step S902, the color adjustment parameters of each pixel in the third image frame are determined using the image adjustment model. The color adjustment parameters of each pixel are used to adjust the third image frame to high dynamic range. It should be noted that step S702 trains the image adjustment model sample by sample; step S902 can determine the color adjustment parameters using the current image adjustment model, i.e. the image adjustment model as it exists before step S702 performs this iteration of method 900.
In step S903, the color of the third image frame is adjusted using the color adjustment parameters of each pixel, obtaining a fourth image frame. Here, adjusting the color of the third image frame may also be described as correcting the color of its pixels with the color adjustment parameters.
In step S904, the model parameters of the image adjustment model are adjusted according to the difference between the second image frame and the fourth image frame. In some embodiments, step S904 can correct the model parameters based on this difference using methods such as gradient descent. In summary, by first converting the color space of the image frame into the HDR gamut in step S901 and then training the image adjustment model through steps S902-S904, method 900 can prevent the model from wrongly learning the fixed rule (i.e. the conversion of the image frame's color space into the HDR gamut), improving the accuracy of model learning.
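The S902-S904 loop is an ordinary supervised update: predict adjustment parameters, apply them, measure the difference to the HDR reference, and descend the gradient. The toy below deliberately replaces the per-pixel deep network with a single global gain so that one analytic gradient-descent step can be shown end to end; this simplification is mine, not the patent's.

```python
# Toy S902-S904 iteration: the "model" is one gain g applied to every
# pixel. Loss is the mean squared difference between the adjusted fourth
# frame and the HDR second frame; one gradient step reduces it.

third_frame = [0.2, 0.4, 0.6]    # converted SDR frame (S901), one channel
second_frame = [0.3, 0.6, 0.9]   # HDR reference (true gain is 1.5)

def loss_and_grad(g):
    n = len(third_frame)
    residuals = [g * x - t for x, t in zip(third_frame, second_frame)]
    loss = sum(r * r for r in residuals) / n                  # S904 difference
    grad = sum(2 * r * x for r, x in zip(residuals, third_frame)) / n
    return loss, grad

g = 1.0                          # current model parameter (used by S902/S903)
loss_before, grad = loss_and_grad(g)
g -= 0.5 * grad                  # S904: gradient-descent parameter update
loss_after, _ = loss_and_grad(g)
```

After the step the gain has moved from 1.0 toward the true value 1.5 and the loss has strictly decreased, which is the per-sample behaviour the training loop of step S702 repeats over the whole sample set.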
In some embodiments, step S902 may be implemented as the steps of method 1000. Fig. 10 shows a flow chart of the method 1000 for determining the color adjustment parameters of each pixel in the third image frame according to some embodiments of the application.
As shown in Fig. 10, in step S1001, local features and global features of the third image frame are extracted based on the image adjustment model. Here, local features refer to fine details in the third image frame, such as texture, contours and color gradation information. Global features can be feature information related to the picture scene category; picture scene categories may, for example, include portrait, natural landscape, car and aircraft pictures.
In step S1002, the degree of adjustment of each pixel in the third image frame is determined. Here, the degree of adjustment of each pixel refers to the degree to which the chroma and luma of that pixel are to be adjusted.
In step S1003, the color adjustment parameters of each pixel in the third image frame are determined according to the degree of adjustment, the local features and the global features of each pixel.
In summary, step S1003 can jointly consider the feature information corresponding to each pixel (i.e. the global and local features) and the degree of color adjustment, so that the color adjustment parameters of each pixel can more faithfully reproduce the real picture corresponding to the third image frame.
In some embodiments, step S1001 may be embodied as step S1101-S1104 in method 1100.
As shown in Figure 11 A, in step S1101, the first deep neural network of model is adjusted based on described image, to institute It states third picture frame and carries out feature extraction, obtain the fisrt feature figure for indicating the pictorial feature of third picture frame.Here, One deep neural network is, for example, convolutional neural networks, and preliminary feature extraction can be carried out to third picture frame.Figure 11 B is shown According to the schematic diagram of the first nerves network of the application some embodiments.Here, first nerves network 1010 may include volume Lamination 1011,1012 and 1013.It should be appreciated that first nerves network 1010 is merely exemplary structure, embodiments herein It can be adjusted according to the needs the number of plies of first nerves network 1010.Step S1101 can be generated corresponding with third picture frame P1 Fisrt feature figure F1.
In step S1102, the second deep neural network based on image adjustment model carries out feature to fisrt feature figure It extracts, obtains the second feature figure for indicating the local feature of third picture frame.Here, local feature refers to third picture frame In the minutias such as texture, profile and color hierarchy information.
In step S1103, feature extraction is performed on the first feature map based on a third deep neural network of the image adjustment model, yielding a third feature map that represents the global features of the third image frame. Here, the third feature map representing the global features may carry feature information related to the scene category of the picture. Scene categories may include, for example, portraits, natural scenery, cars, and aircraft.
In step S1104, feature extraction is performed on the second feature map and the third feature map based on a fourth deep neural network of the image adjustment model, yielding a fourth feature map that fuses the local features and the global features. For example, Figure 11C shows a schematic diagram of extracting the fourth feature map using the second deep neural network 1020, the third deep neural network 1030, and the fourth deep neural network 1040. As shown in Figure 11C, the second deep neural network 1020 may include convolutional layers 1021, 1022, and 1023, and the third deep neural network 1030 may include convolutional layers 1031, 1032, 1033, 1034, and 1035. The second deep neural network 1020 can generate the second feature map F2, and the third deep neural network 1030 can generate the third feature map F3. The fourth deep neural network 1040 may include convolutional layers 1041, 1042, and 1043. Step S1104 takes the second feature map F2 and the third feature map F3 as inputs to the fourth deep neural network 1040, which can output the fourth feature map F4.
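A common way to realize this kind of local/global fusion is to broadcast the global descriptor across the spatial grid, concatenate it with the local feature map, and mix the result with a 1x1 convolution. The patent does not spell out the fusion operator of network 1040, so the sketch below is an illustrative assumption (the 1x1 convolution is written as a per-pixel matrix multiply):

```python
import numpy as np

rng = np.random.default_rng(1)
f2 = rng.random((10, 10, 8))     # local-feature map from the second network
f3 = rng.random(8)               # global descriptor from the third network

# Broadcast the global vector to every spatial position, concatenate it with
# the local features, then mix with a 1x1 convolution (a plain matrix
# multiply per pixel).
g = np.broadcast_to(f3, (10, 10, 8))
fused_in = np.concatenate([f2, g], axis=-1)       # shape (10, 10, 16)

w = rng.standard_normal((16, 4)) * 0.1            # 1x1-conv weights -> 4 output maps
f4 = np.maximum(fused_in @ w, 0.0)                # fourth feature map F4
print(f4.shape)
```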
In some embodiments, step S1002 may be implemented as step S1105. In step S1105, feature extraction is performed on the third image frame based on a fifth deep neural network of the image adjustment model, yielding a fifth feature map that represents the adjustment degree of each pixel in the third image frame. For example, Figure 11D shows a schematic diagram of the fifth neural network 1050 according to some embodiments of the present application. As shown in Figure 11D, the fifth deep neural network 1050 may include convolutional layers 1051, 1052, 1053, 1054, and 1055. Step S1105 inputs the third image frame P1 into the fifth neural network 1050, which can output the fifth feature map F5.
In some embodiments, step S1003 may be implemented as step S1106. In step S1106, the color adjustment parameter of each pixel in the third image frame is determined using the fourth feature map and the fifth feature map. In some embodiments, the fourth feature map may include multiple matrices whose size is consistent with the fifth feature map, and the size of the fifth feature map is consistent with the third image frame. Step S1106 can calculate the color adjustment parameter of each pixel in the third image frame according to the following formula.
Here, b_i denotes the i-th feature value in the fifth feature map, N denotes the total number of matrices included in the fourth feature map, j denotes the serial number of a matrix in the fourth feature map, a_{j,i} denotes the i-th feature value in the j-th matrix of the fourth feature map, and e_i denotes the color adjustment parameter of the i-th pixel in the third image frame. In summary, step S1106 jointly considers the feature information corresponding to each pixel (i.e., the feature value corresponding to that pixel in each matrix of the fourth feature map) and the degree of color adjustment, so that the color adjustment parameters of the pixels can restore the real picture corresponding to the third image frame more faithfully.
For example, Figure 11E shows a schematic diagram of outputting the color adjustment parameters according to some embodiments of the present application. Step S1106 can use the fourth feature map F4 and the fifth feature map F5 to determine a matrix F6 composed of the color adjustment parameters of the pixels. In some embodiments, the fifth feature map F5 may be a matrix whose size is consistent with the third image frame, and the fourth feature map F4 may include multiple matrices whose size is consistent with F5, such as F7, F8, and F9. Step S1106 can calculate the color adjustment parameter of each pixel in the third image frame (i.e., determine the matrix F6) according to the above formula.
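The formula image is not reproduced in this text. Given the variables it describes (a_{j,i} summed over the N matrices of F4, combined with the per-pixel degree b_i from F5), one plausible reading, stated here purely as an assumption and not as the patent's actual formula, is a per-pixel polynomial in the adjustment degree, with the N matrices supplying the coefficients:

```python
import numpy as np

rng = np.random.default_rng(2)
H, W, N = 10, 10, 4
f4 = rng.random((N, H, W))   # N coefficient matrices (e.g. F7, F8, F9, ...)
f5 = rng.random((H, W))      # per-pixel adjustment degrees b_i (fifth feature map)

# Assumed combination: e_i = sum_{j=1..N} a_{j,i} * b_i**j
powers = np.stack([f5 ** j for j in range(1, N + 1)])   # shape (N, H, W)
f6 = (f4 * powers).sum(axis=0)                          # color adjustment matrix F6
print(f6.shape)
```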
Figure 12 shows a structural diagram of a data processing apparatus. In some embodiments, the data processing apparatus 1200 may be implemented as the user equipment 104 of the present application; in other embodiments, it may be implemented as the service system 102 of the present application. As shown in Figure 12, the data processing apparatus 1200 includes one or more processors (CPUs) 1202, a communication module 1204, a memory 1206, a user interface 1210, and a communication bus 1208 that interconnects these components.
The processor 1202 can send and receive data through the communication module 1204 to realize network communication and/or local communication.
The user interface 1210 includes one or more output devices 1212, which include one or more speakers and/or one or more visual displays. The user interface 1210 also includes one or more input devices 1214. The user interface 1210 can, for example, receive instructions from a remote controller, but is not limited thereto.
The memory 1206 may be high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random-access solid-state storage devices; or non-volatile memory, such as one or more magnetic disk storage devices, optical disc storage devices, flash memory devices, or other non-volatile solid-state storage devices.
The memory 1206 stores a set of instructions executable by the processor 1202, including:
an operating system 1216, including programs for handling various basic system services and for performing hardware-dependent tasks; and
applications 1218, including various programs for implementing the above video processing methods 200 and 700.
In some embodiments, the processor 1202 can perform the following operations: acquiring a first video of standard dynamic range; converting the color space of the image frames in the first video to a color space corresponding to the color gamut of high dynamic range, to obtain a second video; and adjusting the second video to a third video of high dynamic range based on a trained image adjustment model.
In some embodiments, the processor 1202 is further configured to: before adjusting the second video to the third video of high dynamic range, acquire a sample set for training the image adjustment model, where any sample in the sample set includes a first image frame of standard dynamic range and a second image frame of high dynamic range, and the picture content of the first image frame is identical to the picture content of the second image frame; and train the image adjustment model using the sample set.
In some embodiments, the processor 1202 is further configured to: for the first image frame of any sample in the sample set, convert the color space of the first image frame to a color space corresponding to the color gamut of high dynamic range, to obtain a third image frame; determine, using the image adjustment model, a color adjustment parameter of each pixel in the third image frame, where the color adjustment parameters of the pixels are used to adjust the image frame to high dynamic range; adjust the color of the third image frame using the color adjustment parameter of each pixel, to obtain a fourth image frame; and adjust the model parameters of the image adjustment model according to the difference between the second image frame and the fourth image frame.
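The training loop described here (forward pass, comparison of the fourth image frame against the paired second image frame, parameter update from the difference) can be sketched with a deliberately tiny stand-in model. The per-channel gain model, squared-error loss, and learning rate below are illustrative assumptions, not the patent's network:

```python
import numpy as np

rng = np.random.default_rng(3)
frame3 = rng.random((8, 8, 3))            # SDR frame converted to the HDR gamut
frame2 = np.clip(frame3 * 1.5, 0, 1)      # paired HDR ground truth (same content)

theta = np.ones(3)                        # toy "model": one gain per channel

for _ in range(200):
    frame4 = frame3 * theta               # model output (the fourth image frame)
    diff = frame4 - frame2                # difference driving the update
    grad = 2 * (diff * frame3).mean(axis=(0, 1))
    theta -= 0.5 * grad                   # gradient step on the model parameters

print(np.round(theta, 3))
```

Minimizing the frame4/frame2 difference pulls the gains toward values that reproduce the HDR target, which is the essence of the described update rule.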
In some embodiments, the processor 1202 is further configured to: extract the local features and global features of the third image frame based on the image adjustment model; determine the adjustment degree of each pixel in the third image frame; and determine the color adjustment parameter of each pixel in the third image frame according to the adjustment degree of each pixel in the third image frame, the local features, and the global features.
In some embodiments, in order to extract the local features and global features of the third image frame, the processor 1202 is further configured to: perform feature extraction on the third image frame based on a first deep neural network of the image adjustment model, to obtain a first feature map representing the picture features of the third image frame; perform feature extraction on the first feature map based on a second deep neural network of the image adjustment model, to obtain a second feature map representing the local features of the third image frame; perform feature extraction on the first feature map based on a third deep neural network of the image adjustment model, to obtain a third feature map representing the global features of the third image frame; and perform feature extraction on the second feature map and the third feature map based on a fourth deep neural network of the image adjustment model, to obtain a fourth feature map fusing the local features and the global features.
In some embodiments, in order to determine the adjustment degree of each pixel in the third image frame, the processor 1202 is further configured to: perform feature extraction on the third image frame based on a fifth deep neural network of the image adjustment model, to obtain a fifth feature map representing the adjustment degree of each pixel in the third image frame. In order to determine the color adjustment parameter of each pixel in the third image frame, the processor 1202 is further configured to: determine the color adjustment parameter of each pixel in the third image frame using the fourth feature map and the fifth feature map.
In some embodiments, the processor 1202 is further configured to: for any image frame of the first video, convert the color space of the image frame to a profile connection space, to obtain a fifth image frame; convert the color space of the fifth image frame to a color space corresponding to the color gamut of high dynamic range, to obtain a ninth image frame; and generate the second video according to the ninth image frame.
In some embodiments, the processor 1202 is further configured to: perform photoelectric conversion in a perceptual quantization manner on the ninth image frame, to obtain a tenth image frame; express the bit depth of the color of the tenth image frame as the bit depth corresponding to the high dynamic range, to obtain an eleventh image frame; and convert each eleventh image frame obtained from the image frames of the first video into the format of a target color space, to obtain the second video.
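Photoelectric conversion in a "perceptual quantization manner" ordinarily refers to the PQ curve standardized in SMPTE ST 2084 / ITU-R BT.2100, and HDR video commonly uses a 10-bit depth with narrow-range code values. A sketch under those assumptions:

```python
# SMPTE ST 2084 (PQ) constants
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_oetf(luminance_nits):
    """Perceptual-quantizer photoelectric conversion: absolute luminance in
    cd/m^2 (0..10000) -> nonlinear signal in [0, 1]."""
    y = max(0.0, luminance_nits) / 10000.0
    num = C1 + C2 * y ** M1
    den = 1.0 + C3 * y ** M1
    return (num / den) ** M2

def quantize_10bit(e):
    """Express the nonlinear signal at a 10-bit depth (narrow-range code
    values 64..940, per BT.2100 conventions)."""
    return round(64 + e * (940 - 64))

print(quantize_10bit(pq_oetf(100.0)))   # SDR reference white luminance
```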
In some embodiments, the processor 1202 is further configured to: for any image frame of the first video, convert the color space of the image frame to an RGB color space, to obtain a sixth image frame; convert each color component of the sixth image frame in the RGB color space to a floating-point format, to obtain a seventh image frame; perform electro-optic conversion on the seventh image frame, to obtain an eighth image frame; and convert the color space of the eighth image frame to a profile connection space, to obtain the fifth image frame.
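This chain (to RGB, to floating point, electro-optic conversion, then into a profile connection space) can be sketched for a single BT.709 pixel. The gamma-2.4 EOTF below is a simplification of BT.1886, and CIE XYZ is assumed as the profile connection space; both are common choices rather than something the patent mandates:

```python
def eotf_gamma(v, gamma=2.4):
    """Electro-optic conversion: nonlinear code value in [0, 1] -> linear
    light (simplified BT.1886-style power law; an assumption)."""
    return max(0.0, v) ** gamma

# Standard BT.709/sRGB primaries with D65 white: linear RGB -> CIE XYZ
RGB_TO_XYZ = (
    (0.4124, 0.3576, 0.1805),
    (0.2126, 0.7152, 0.0722),
    (0.0193, 0.1192, 0.9505),
)

def bt709_pixel_to_xyz(r, g, b):
    """Float RGB code values -> linear light -> profile connection space."""
    lin = [eotf_gamma(c) for c in (r, g, b)]
    return tuple(sum(m * c for m, c in zip(row, lin)) for row in RGB_TO_XYZ)

x, y, z = bt709_pixel_to_xyz(1.0, 1.0, 1.0)   # reference white
print(round(y, 4))
```

From XYZ, the pixel can then be re-expressed in the wide BT.2020 gamut, which is the "color space corresponding to the color gamut of high dynamic range" step of this embodiment.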
In some embodiments, the processor 1202 is further configured to: for any image frame of the second video, determine, using the image adjustment model, the color adjustment parameter of each pixel in the image frame; and adjust the color of each pixel in the image frame using the color adjustment parameters of the pixels, to obtain the third video. In summary, the processor 1202 can perform color adjustment (which may also be called color stretching) on the video content of an SDR film source through the trained color adjustment model, thereby obtaining an HDR video with a high dynamic range and richer color. In particular, by using a color adjustment model obtained through deep neural network training, the processor 1202 can make the color adjustment effect closer to the real scene (that is, restore the captured picture more faithfully).
In addition, the embodiments of the present application may be realized by a data processing program executed by a data processing apparatus such as a computer. Obviously, such a data processing program constitutes the present application.
In addition, a data processing program is usually stored in a storage medium and is executed by reading the program directly out of the storage medium, or by installing or copying the program to a storage device (such as a hard disk and/or memory) of the data processing apparatus. Therefore, such a storage medium also constitutes the present application. The storage medium may use any type of recording mode, such as a paper storage medium (e.g., paper tape), a magnetic storage medium (e.g., floppy disk, hard disk, flash memory), an optical storage medium (e.g., CD-ROM), or a magneto-optical storage medium (e.g., MO).
Therefore, the present application also discloses a non-volatile storage medium in which a data processing program is stored, the data processing program being used to execute any one of the embodiments of the video processing method of the present application described above.
In addition, the method steps described herein may be realized not only by a data processing program but also by hardware, for example, by logic gates, switches, application-specific integrated circuits (ASICs), programmable logic controllers, and embedded microcontrollers. Therefore, such hardware capable of realizing the methods described herein may also constitute the present application.
The foregoing is merely exemplary embodiments of the present application and is not intended to limit the present application. Any modification, equivalent substitution, or improvement made within the spirit and principles of the present application shall be included within the scope of protection of the present application.

Claims (15)

1. A video processing method, characterized in that the method comprises:
acquiring a first video of standard dynamic range;
converting the color space of image frames in the first video to a color space corresponding to the color gamut of high dynamic range, to obtain a second video; and
adjusting the second video to a third video of high dynamic range based on a trained image adjustment model.
2. The method according to claim 1, characterized in that the method further comprises:
acquiring a sample set for training the image adjustment model, wherein any sample in the sample set includes a first image frame of standard dynamic range and a second image frame of high dynamic range, and the picture content of the first image frame is identical to the picture content of the second image frame;
training the image adjustment model using the sample set.
3. The method according to claim 2, characterized in that training the image adjustment model using the sample set comprises:
for the first image frame of any sample in the sample set, converting the color space of the first image frame to a color space corresponding to the color gamut of high dynamic range, to obtain a third image frame;
determining, using the image adjustment model, a color adjustment parameter of each pixel in the third image frame, the color adjustment parameters of the pixels being used to adjust the third image frame to high dynamic range;
adjusting the color of the third image frame using the color adjustment parameter of each pixel, to obtain a fourth image frame;
adjusting model parameters of the image adjustment model according to the difference between the second image frame and the fourth image frame.
4. The method according to claim 3, characterized in that determining, using the image adjustment model, the color adjustment parameter of each pixel in the third image frame comprises:
extracting local features and global features of the third image frame based on the image adjustment model;
determining an adjustment degree of each pixel in the third image frame;
determining the color adjustment parameter of each pixel in the third image frame according to the adjustment degree of each pixel in the third image frame, the local features, and the global features.
5. The method according to claim 4, characterized in that extracting the local features and global features of the third image frame based on the image adjustment model comprises:
performing feature extraction on the third image frame based on a first deep neural network of the image adjustment model, to obtain a first feature map representing picture features of the third image frame;
performing feature extraction on the first feature map based on a second deep neural network of the image adjustment model, to obtain a second feature map representing the local features of the third image frame;
performing feature extraction on the first feature map based on a third deep neural network of the image adjustment model, to obtain a third feature map representing the global features of the third image frame;
performing feature extraction on the second feature map and the third feature map based on a fourth deep neural network of the image adjustment model, to obtain a fourth feature map fusing the local features and the global features.
6. The method according to claim 5, characterized in that:
determining the adjustment degree of each pixel in the third image frame comprises: performing feature extraction on the third image frame based on a fifth deep neural network of the image adjustment model, to obtain a fifth feature map representing the adjustment degree of each pixel in the third image frame;
determining the color adjustment parameter of each pixel in the third image frame according to the adjustment degree of each pixel in the third image frame, the local features, and the global features comprises: determining the color adjustment parameter of each pixel in the third image frame using the fourth feature map and the fifth feature map.
7. The method according to claim 1, characterized in that converting the color space of image frames in the first video to a color space corresponding to the color gamut of high dynamic range to obtain the second video comprises:
for any image frame of the first video, converting the color space of the image frame to a profile connection space, to obtain a fifth image frame;
converting the color space of the fifth image frame to a color space corresponding to the color gamut of high dynamic range, to obtain a ninth image frame;
generating the second video according to the ninth image frame.
8. the method for claim 7, which is characterized in that it is described for any of first video picture frame, it will The color space conversion of described image frame is profile connecting space, obtains the 5th picture frame, comprising:
It is RGB color space by the color space conversion of described image frame, obtains the 6th picture frame;
Each color component of 6th picture frame in RGB color space is converted into floating number format, obtains the 7th figure As frame;
Electro-optic conversion is carried out to the 7th picture frame, obtains the 8th picture frame;
It is profile connecting space by the color space conversion of the 8th picture frame, obtains the 5th picture frame.
9. the method for claim 7, which is characterized in that it is described according to the 9th picture frame, generate second view Frequently, comprising:
The photoelectric conversion that perception quantification manner is carried out to the 9th picture frame, obtains the tenth picture frame;
The bit depth of the color of tenth picture frame is expressed as bit depth corresponding with the high dynamic range, obtains the tenth One picture frame;
Each 11st picture frame obtained by each picture frame of first video is converted to the lattice of targeted colorspace Formula obtains second video.
10. The method according to claim 1, characterized in that adjusting the second video to the third video of high dynamic range based on the trained image adjustment model comprises:
for any image frame of the second video, determining, using the image adjustment model, a color adjustment parameter of each pixel in the image frame;
adjusting the color of each pixel in the image frame using the color adjustment parameters of the pixels in the image frame, to obtain the third video.
11. A data processing apparatus, characterized in that it comprises a processor and a memory, the processor being configured to:
acquire a first video of standard dynamic range;
convert the color space of image frames in the first video to a color space corresponding to the color gamut of high dynamic range, to obtain a second video; and
adjust the second video to a third video of high dynamic range based on a trained image adjustment model.
12. The data processing apparatus according to claim 11, characterized in that the processor is further configured to:
before adjusting the second video to the third video of high dynamic range, acquire a sample set for training the image adjustment model, wherein any sample in the sample set includes a first image frame of standard dynamic range and a second image frame of high dynamic range, and the picture content of the first image frame is identical to the picture content of the second image frame;
train the image adjustment model using the sample set.
13. The data processing apparatus according to claim 11, characterized in that the processor is further configured to:
for the first image frame of any sample in the sample set, convert the color space of the first image frame to a color space corresponding to the color gamut of high dynamic range, to obtain a third image frame;
determine, using the image adjustment model, a color adjustment parameter of each pixel in the third image frame, the color adjustment parameters of the pixels being used to adjust the third image frame to high dynamic range;
adjust the color of the third image frame using the color adjustment parameter of each pixel, to obtain a fourth image frame;
adjust model parameters of the image adjustment model according to the difference between the second image frame and the fourth image frame.
14. The data processing apparatus according to claim 11, characterized in that the processor is further configured to:
extract local features and global features of the third image frame based on the image adjustment model;
determine an adjustment degree of each pixel in the third image frame;
determine the color adjustment parameter of each pixel in the third image frame according to the adjustment degree of each pixel in the third image frame, the local features, and the global features.
15. A storage medium storing one or more programs, the one or more programs comprising instructions which, when executed by a data processing apparatus, cause the data processing apparatus to execute the method according to any one of claims 1-10.
CN201811511097.7A 2018-12-11 2018-12-11 Video processing method, data processing apparatus, and storage medium Active CN110149507B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201811511097.7A CN110149507B (en) 2018-12-11 2018-12-11 Video processing method, data processing apparatus, and storage medium


Publications (2)

Publication Number Publication Date
CN110149507A true CN110149507A (en) 2019-08-20
CN110149507B CN110149507B (en) 2021-05-28

Family

ID=67589316

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201811511097.7A Active CN110149507B (en) 2018-12-11 2018-12-11 Video processing method, data processing apparatus, and storage medium

Country Status (1)

Country Link
CN (1) CN110149507B (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3671866A (en) * 1971-01-08 1972-06-20 Collins Radio Co Pulse processing circuit having improved range resolution
CN1117228A (en) * 1993-11-15 1996-02-21 美国电报电话公司 Voice activated data rate change in simultaneous voice and data transmission
US20150146107A1 (en) * 2013-11-26 2015-05-28 Apple Inc. Methods to Reduce Bit-Depth Required for Linearizing Data


Cited By (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111416950A (en) * 2020-03-26 2020-07-14 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN111416950B (en) * 2020-03-26 2023-11-28 腾讯科技(深圳)有限公司 Video processing method and device, storage medium and electronic equipment
CN111683269A (en) * 2020-06-12 2020-09-18 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN111683269B (en) * 2020-06-12 2021-08-17 腾讯科技(深圳)有限公司 Video processing method, video processing device, computer equipment and storage medium
CN114092756A (en) * 2020-08-25 2022-02-25 阿里巴巴集团控股有限公司 Image processing model training method and device
CN112019827A (en) * 2020-09-02 2020-12-01 上海网达软件股份有限公司 Method, device, equipment and storage medium for enhancing video image color
CN112488962A (en) * 2020-12-17 2021-03-12 成都极米科技股份有限公司 Method, device, equipment and medium for adjusting picture color based on deep learning
WO2023010750A1 (en) * 2021-08-02 2023-02-09 中国科学院深圳先进技术研究院 Image color mapping method and apparatus, electronic device, and storage medium
CN113781318A (en) * 2021-08-02 2021-12-10 中国科学院深圳先进技术研究院 Image color mapping method and device, terminal equipment and storage medium
CN114222187A (en) * 2021-08-12 2022-03-22 荣耀终端有限公司 Video editing method and electronic equipment
CN114222187B (en) * 2021-08-12 2023-08-29 荣耀终端有限公司 Video editing method and electronic equipment
CN114466244A (en) * 2022-01-26 2022-05-10 新奥特(北京)视频技术有限公司 Ultrahigh-definition high-dynamic-range imaging rendering method and device
CN114466244B (en) * 2022-01-26 2024-06-18 新奥特(北京)视频技术有限公司 Ultrahigh-definition high-dynamic-range imaging rendering method and device
CN115293994A (en) * 2022-09-30 2022-11-04 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium
CN115293994B (en) * 2022-09-30 2022-12-16 腾讯科技(深圳)有限公司 Image processing method, image processing device, computer equipment and storage medium

Also Published As

Publication number Publication date
CN110149507B (en) 2021-05-28

Similar Documents

Publication Publication Date Title
CN110149507A (en) Method for processing video frequency, data processing equipment and storage medium
CN110691277B (en) Video signal processing method and device
TWI735036B (en) Method, apparatus and non-transitory computer-readable storage medium for color transformations in high dynamic range signals
US20220245775A1 (en) Tone mapping method and electronic device
CN105787909B (en) Image procossing for high dynamic range images
TWI624182B (en) Encoding, decoding, and representing high dynamic range images
CN109076231B (en) Method and device for encoding HDR pictures, corresponding decoding method and decoding device
US9710215B2 (en) Maximizing native capability across multiple monitors
JP6891882B2 (en) Image processing device, image processing method, and program
KR20120107429A (en) Zone-based tone mapping
WO2017152398A1 (en) Method and device for processing high dynamic range image
CN111314577B (en) Transformation of dynamic metadata to support alternate tone rendering
CN107852503A (en) Method and apparatus for being coded and decoded to colour picture
CN103209326A (en) PNG (Portable Network Graphic) image compression method
TW201628409A (en) Method and device for decoding a color picture
US20100086226A1 (en) Potential field-based gamut mapping
CN114501023B (en) Video processing method, device, computer equipment and storage medium
CN108986769B (en) Method for maximally restoring content of Rec.2020 color gamut by display equipment with color gamut lower than Rec.2020 color gamut
CN101401108A (en) Motion picture content editing
TW201621812A (en) A method and device for estimating a color mapping between two different color-graded versions of a sequence of pictures
WO2023010751A1 (en) Information compensation method and apparatus for highlighted area of image, device, and storage medium
US20190132600A1 (en) Method and apparatus for encoding/decoding a scalar integer into a parameter representative of a pivot points of a piece-wise linear function
CN104954767B (en) A kind of information processing method and electronic equipment
KR101941231B1 (en) Method for color grading and color correction, and apparatus using the same
CN102750925B (en) In a kind of color oscilloscope, colour model maps the method for three-dimensional space

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant