WO2023160295A1 - Video processing method and device - Google Patents

Video processing method and device

Info

Publication number
WO2023160295A1
WO2023160295A1 (PCT/CN2023/071669 · CN2023071669W)
Authority
WO
WIPO (PCT)
Prior art keywords
hdr video
video
brightness
image sequence
hdr
Prior art date
Application number
PCT/CN2023/071669
Other languages
English (en)
French (fr)
Other versions
WO2023160295A9 (zh)
Inventor
崔瀚涛
张东
张鹏鹏
莫燕
王晨清
Original Assignee
荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 荣耀终端有限公司 (Honor Device Co., Ltd.)
Priority to EP23758920.5A (EP4318383A1)
Publication of WO2023160295A1
Publication of WO2023160295A9


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/90Dynamic range modification of images or parts thereof
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/50Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/63Control of cameras or camera modules by using electronic viewfinders
    • H04N23/631Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters
    • H04N23/632Graphical user interfaces [GUI] specially adapted for controlling image capture or setting capture parameters for displaying or modifying preview images prior to image capturing, e.g. variety of image resolutions or capturing parameters
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/71Circuitry for evaluating the brightness variation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/741Circuitry for compensating brightness variation in the scene by increasing the dynamic range of the image compared to the dynamic range of the electronic image sensors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/70Circuitry for compensating brightness variation in the scene
    • H04N23/76Circuitry for compensating brightness variation in the scene by influencing the image signals
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20172Image enhancement details
    • G06T2207/20208High dynamic range [HDR] image processing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20212Image combination
    • G06T2207/20221Image fusion; Image merging
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/46Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/50Control of the SSIS exposure
    • H04N25/57Control of the dynamic range
    • H04N25/59Control of the dynamic range by controlling the amount of charge storable in the pixel, e.g. modification of the charge conversion ratio of the floating node capacitance

Definitions

  • the present application relates to the technical field of terminals, and in particular to a video processing method and device.
  • the terminal device can obtain HDR video by processing multiple frames of images obtained by the camera.
  • the HDR video can be configured according to static metadata, for example using the HDR conversion curve of perceptual quantization (PQ); the PQ curve is fixedly mapped according to absolute brightness.
  • the absolute brightness may be a reference display brightness of a display of a terminal device, such as 1000 nits (nit).
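  • For context only (this transfer function is defined in SMPTE ST 2084, not in the present application): the PQ electro-optical transfer function maps a normalized code value $E'\in[0,1]$ to an absolute luminance of up to 10000 nit,

$$ Y = 10000\cdot\left(\frac{\max\left(E'^{1/m_2}-c_1,\ 0\right)}{c_2-c_3\,E'^{1/m_2}}\right)^{1/m_1}\ \text{nit}, $$

with $m_1=2610/16384$, $m_2=2523/4096\times 128$, $c_1=3424/4096$, $c_2=2413/4096\times 32$, and $c_3=2392/4096\times 32$. Because this mapping is anchored to absolute luminance, it cannot adapt to a display whose peak brightness differs from the mastering assumption, which is the limitation the dynamic-metadata approach below addresses.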
  • Embodiments of the present application provide a video processing method and device, so that the first device can match different dynamic metadata to scenes of different brightness corresponding to the multi-frame images acquired by the camera, adjust the multi-frame images respectively by using the different dynamic metadata to obtain an HDR video, and send the HDR video to the second device, so that the second device can perform brightness mapping on the HDR video based on the preset brightness of the HDR video and display video content with appropriate brightness.
  • the embodiment of the present application provides a video processing method, which is applied to a video processing system.
  • the video processing system includes: a first device and a second device, and the method includes: the first device receives an operation of starting shooting in movie mode; the movie mode is a mode for recording high dynamic range (HDR) video; in response to the operation of starting shooting, the first device acquires a first image sequence based on the camera; the first image sequence corresponds to a first brightness scene; the first device performs encoding based on the first image sequence and first dynamic metadata corresponding to the first brightness scene to obtain a first HDR video; the first dynamic metadata includes a preset brightness; the second device acquires the first HDR video from the first device; the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain a second HDR video; and the second device plays the second HDR video.
  • In this way, the first device can match different dynamic metadata to scenes of different brightness corresponding to the multi-frame images acquired by the camera, adjust the multi-frame images respectively by using the different dynamic metadata to obtain an HDR video, and send the HDR video to the second device, so that the second device can perform brightness mapping on the HDR video based on the preset brightness of the HDR video and display video content with appropriate brightness.
  • the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain the second HDR video, which includes: the second device determines a brightness ratio; the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; the second device adjusts the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video.
  • the second device can also adjust the brightness of the first HDR video according to the peak brightness supported by the hardware of the second device, so that the adjusted second HDR video has a better playback effect.
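  • As an illustration of the brightness-ratio adjustment described above, the following minimal Python sketch scales nit-domain luminance by the ratio of the panel peak to the preset brightness; treating pixel values as absolute nits and clipping at the panel peak are assumptions of this sketch, not details recited in the claims.

```python
import numpy as np

def adjust_to_panel(luminance_nit: np.ndarray,
                    preset_brightness_nit: float,
                    panel_peak_nit: float) -> np.ndarray:
    """Scale the first HDR video's luminance by the claimed brightness
    ratio (panel peak / preset brightness), then clip to the panel peak."""
    ratio = panel_peak_nit / preset_brightness_nit
    return np.clip(luminance_nit * ratio, 0.0, panel_peak_nit)

# Example: content mastered for 1000 nit shown on a 500-nit panel.
frame = np.array([100.0, 500.0, 1000.0])      # per-pixel luminance in nit
print(adjust_to_panel(frame, 1000.0, 500.0))  # -> [ 50. 250. 500.]
```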
  • the method further includes: the first device continues to acquire a second image sequence based on the camera; wherein the second image sequence corresponds to a second brightness scene, and the first brightness scene is different from the second brightness scene; the first device encoding the first HDR video based on the first image sequence and the first dynamic metadata corresponding to the first brightness scene includes: the first device performs encoding based on the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first brightness scene, and the second dynamic metadata corresponding to the second brightness scene to obtain the first HDR video. In this way, the first device can match corresponding dynamic metadata to different brightness scenes, and obtain an HDR video by encoding based on the different dynamic metadata.
  • the method further includes: the first device performs image preprocessing on the first image sequence to obtain an image-preprocessed first image sequence; the first device performs gamma correction processing on the image-preprocessed first image sequence to obtain a gamma-corrected first image sequence; the first device performs 3D lookup table processing on the gamma-corrected first image sequence to obtain a 3D-lookup-table-processed first image sequence; wherein the 3D-lookup-table-processed first image sequence includes first static metadata corresponding to the first image sequence; the first device encoding the first HDR video based on the first image sequence and the first dynamic metadata corresponding to the first brightness scene includes: the first device performs encoding based on the 3D-lookup-table-processed first image sequence and the first dynamic metadata corresponding to the first brightness scene to obtain the first HDR video. In this way, the first device can obtain an HDR video with both static and dynamic metadata.
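  • To make the order of the recording pipeline concrete (image preprocessing, then gamma correction, then the 3D lookup table), here is a self-contained sketch of just the gamma-correction stage; the 1/2.2 exponent and the normalized linear-RGB input are assumptions of this sketch, not values taken from the application.

```python
import numpy as np

def gamma_correct(rgb_linear: np.ndarray, gamma: float = 1.0 / 2.2) -> np.ndarray:
    """Apply a power-law gamma curve to a normalized linear-light RGB frame."""
    return np.clip(rgb_linear, 0.0, 1.0) ** gamma

# A 2x2 linear-light frame: mid-tones are lifted while 0 and 1 stay fixed.
frame = np.array([[[0.04, 0.04, 0.04], [0.25, 0.25, 0.25]],
                  [[0.50, 0.50, 0.50], [1.00, 1.00, 1.00]]])
print(gamma_correct(frame)[..., 0])
```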
  • the first HDR video includes: first static metadata and first dynamic metadata.
  • the method further includes: when the second device determines that it supports processing the first static metadata, the second device decodes the second HDR video into the first image sequence and the first static metadata; the second device encodes a third HDR video based on the first image sequence and the first static metadata; the second HDR video is different from the third HDR video.
  • the second device can be compatible with dynamic metadata and static metadata, and the second device that does not support dynamic metadata processing can generate HDR video based on static metadata.
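  • A hedged sketch of that compatibility decision: a receiving device that can process dynamic metadata takes the HDR10+ path, and one that only supports static metadata falls back to re-encoding with the embedded static metadata. The capability flags and return strings are illustrative placeholders, not interfaces from the application.

```python
def choose_playback_path(supports_dynamic_metadata: bool,
                         supports_static_metadata: bool) -> str:
    """Pick which embedded metadata the receiving device uses."""
    if supports_dynamic_metadata:
        return "play HDR10+ using per-scene dynamic metadata"
    if supports_static_metadata:
        return "re-encode to HDR10 using the embedded static metadata"
    return "fall back to SDR playback"

print(choose_playback_path(False, True))
```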
  • the type of the first HDR video is HDR10+ video
  • the type of the second HDR video is HDR10+ video
  • the type of the third HDR video is HDR10 video.
  • the first device receiving the operation of starting shooting in the movie mode includes: the first device receives an operation for turning on the movie mode; in response to the operation of turning on the movie mode, the first device displays a first interface; the first interface includes: a control for recording HDR video and a control for starting shooting; when the state of the control for recording HDR video is off, the first device receives an operation for turning on the control for recording HDR video; in response to the operation on the control for recording HDR video, the first device displays a second interface; the second interface includes: a prompt message indicating that the 4K HDR10+ mode has been turned on; when the state of the control for recording HDR video is on, the first device receives an operation on the control for starting shooting. In this way, the first device can determine whether to record 4K HDR video based on the user's flexible operation of the control for recording HDR video.
  • the method further includes: when the state of the control for recording HDR video is on, the first device receives an operation for turning off the control for recording HDR video; in response to the operation on the control for recording HDR video, the first device displays a third interface; the third interface includes: a prompt message indicating that the 4K HDR10+ mode has been turned off. In this way, the user can determine, according to the prompt information, whether the 4K HDR10+ mode is currently enabled, thereby improving the user experience of the video recording function.
  • the method further includes: the first device receives an operation of turning on the movie mode for the first time; in response to the first operation of turning on the movie mode, the first device displays a fourth interface; the fourth interface includes: a control for recording HDR video, and a prompt message indicating that 4K HDR10+ video will be recorded after the control for recording HDR video is turned on.
  • In this way, when the user turns on the movie mode for the first time, the user can determine how to turn on the 4K HDR10+ mode based on the guidance of the prompt information, thereby improving the user's experience with the video recording function.
  • the first device receiving the operation of starting shooting in the movie mode includes: the first device receives an operation for turning on the movie mode; in response to the operation of turning on the movie mode, the first device displays a fifth interface; the fifth interface includes: a control for viewing the setting items corresponding to the first application, and a control for starting shooting; the first device receives an operation on the control for viewing the setting items corresponding to the first application; in response to the operation on the control for viewing the setting items corresponding to the first application, the first device displays a sixth interface; the sixth interface includes: a first control for recording 10-bit HDR video in movie mode and switching the video to 4K; when the state of the first control is on, the first device receives an operation on the control for starting shooting. In this way, the user can flexibly control the movie HDR function control in the setting function according to the shooting needs, and thus record HDR10+ video.
  • the first application may be a camera application.
  • the method further includes: the first device receives an operation on a control for viewing the function details in the first application; in response to the operation on the control for viewing the function details in the first application, the first device displays a seventh interface; wherein the seventh interface includes the function details corresponding to the movie mode, and the function details of the movie mode are used to indicate that the movie mode can record 4K HDR10+ video.
  • the method further includes: the first device receives an operation for opening the second application; in response to the operation of opening the second application, the first device displays an eighth interface; wherein the eighth interface includes: the first HDR video and an identifier corresponding to the first HDR video; the identifier is used to indicate the type of the first HDR video; the first device receives an operation on the first HDR video; in response to the operation on the first HDR video, the first device displays a ninth interface; the ninth interface includes the identifier. In this way, the user can accurately find the HDR10+ video in the gallery application according to the identifier, increasing the convenience of viewing HDR10+ video.
  • the second application may be a gallery application program.
  • the method further includes: the second device displays a tenth interface; wherein the tenth interface includes: prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata, a control for allowing reception of the first HDR video, and a control for refusing to receive the first HDR video; the second device receives an operation on the control for allowing reception of the first HDR video; in response to the operation on the control for allowing reception of the first HDR video, the second device displays an eleventh interface; wherein the eleventh interface includes prompt information indicating that the first HDR video is played based on dynamic metadata. In this way, the second device can decode and play the HDR10+ video sent by the first device.
  • the embodiment of the present application provides a video processing method, which is applied to the first device, and the method includes: the first device receives an operation of starting shooting in the movie mode; the movie mode is a mode for recording high dynamic range (HDR) video; in response to the operation of starting shooting, the first device acquires a first image sequence based on the camera; the first image sequence corresponds to a first brightness scene; the first device performs encoding based on the first image sequence and first dynamic metadata corresponding to the first brightness scene to obtain a first HDR video; the first dynamic metadata includes a preset brightness; the first device sends the first HDR video to the second device.
  • the first device can match different dynamic metadata for scenes of different brightness corresponding to multiple frames of images acquired based on the camera, and use different dynamic metadata to adjust the multiple frames of images respectively to obtain HDR videos.
  • the method further includes: the first device continues to acquire a second image sequence based on the camera; wherein the second image sequence corresponds to a second brightness scene, and the first brightness scene is different from the second brightness scene; the first device encoding the first HDR video based on the first image sequence and the first dynamic metadata corresponding to the first brightness scene includes: the first device performs encoding based on the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first brightness scene, and the second dynamic metadata corresponding to the second brightness scene to obtain the first HDR video. In this way, the first device can match corresponding dynamic metadata to different brightness scenes, and obtain an HDR video by encoding based on the different dynamic metadata.
  • the method further includes: the first device performs image preprocessing on the first image sequence to obtain an image-preprocessed first image sequence; the first device performs gamma correction processing on the image-preprocessed first image sequence to obtain a gamma-corrected first image sequence; the first device performs 3D lookup table processing on the gamma-corrected first image sequence to obtain a 3D-lookup-table-processed first image sequence; wherein the 3D-lookup-table-processed first image sequence includes first static metadata corresponding to the first image sequence; the first device encoding the first HDR video based on the first image sequence and the first dynamic metadata corresponding to the first brightness scene includes: the first device performs encoding based on the 3D-lookup-table-processed first image sequence and the first dynamic metadata corresponding to the first brightness scene to obtain the first HDR video. In this way, the first device can obtain an HDR video with both static and dynamic metadata.
  • the first HDR video includes first static metadata and first dynamic metadata.
  • the embodiment of the present application provides a video processing method applied to the second device, and the method includes: the second device obtains the first HDR video from the first device; wherein the first HDR video includes first dynamic metadata and a first image sequence; the first dynamic metadata includes a preset brightness; the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain a second HDR video; the second device plays the second HDR video.
  • the second device can receive the HDR video from the first device, and perform brightness mapping on the HDR video based on the preset brightness of the HDR video, and then display video content with appropriate brightness.
  • the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain the second HDR video, which includes: the second device determines a brightness ratio; the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; the second device adjusts the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video.
  • the second device can also adjust the brightness of the first HDR video according to the peak brightness supported by the hardware of the second device, so that the adjusted second HDR video has a better playback effect.
  • the first HDR video includes first static metadata and first dynamic metadata.
  • the method further includes: when the second device determines that it supports processing the first static metadata, the second device decodes the second HDR video into the first image sequence and the first static metadata; the second device encodes a third HDR video based on the first image sequence and the first static metadata; the second HDR video is different from the third HDR video.
  • In this way, the second device can be compatible with both dynamic metadata and static metadata, and a second device that does not support dynamic metadata processing can generate HDR video based on the static metadata.
  • the type of the first HDR video is HDR10+ video
  • the type of the second HDR video is HDR10+ video
  • the type of the third HDR video is HDR10 video.
  • the embodiment of the present application provides a video processing device, and the device includes: the processing unit of the first device is configured to receive an operation of starting shooting in the movie mode; the movie mode is a mode for recording high dynamic range (HDR) video; in response to the operation of starting shooting, the processing unit of the first device is configured to acquire a first image sequence based on the camera; the first image sequence corresponds to a first brightness scene; the processing unit of the first device is configured to perform encoding based on the first image sequence and first dynamic metadata corresponding to the first brightness scene to obtain a first HDR video; the first dynamic metadata includes a preset brightness; the communication unit of the second device is configured to obtain the first HDR video from the first device; the processing unit of the second device is configured to adjust the brightness of the first HDR video based on the preset brightness to obtain a second HDR video; the processing unit of the second device is configured to play the second HDR video.
  • the processing unit of the second device is specifically configured to determine a brightness ratio; the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; the processing unit of the second device is specifically configured to adjust the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video.
  • the processing unit of the first device is further configured to continue to acquire a second image sequence based on the camera; wherein the second image sequence corresponds to a second brightness scene, and the first brightness scene is different from the second brightness scene; the processing unit of the first device is further configured to perform encoding based on the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first brightness scene, and the second dynamic metadata corresponding to the second brightness scene to obtain the first HDR video.
  • the processing unit of the first device is further configured to: perform image preprocessing on the first image sequence to obtain an image-preprocessed first image sequence; perform gamma correction processing on the image-preprocessed first image sequence to obtain a gamma-corrected first image sequence; perform 3D lookup table processing on the gamma-corrected first image sequence to obtain a 3D-lookup-table-processed first image sequence; wherein the 3D-lookup-table-processed first image sequence includes first static metadata corresponding to the first image sequence; and perform encoding based on the 3D-lookup-table-processed first image sequence and the first dynamic metadata corresponding to the first brightness scene to obtain the first HDR video.
  • the first HDR video includes: first static metadata and first dynamic metadata.
  • when the second device determines that it supports processing the first static metadata, the processing unit of the second device is further configured to decode the second HDR video into the first image sequence and the first static metadata; the processing unit of the second device is further configured to encode a third HDR video based on the first image sequence and the first static metadata; the second HDR video is different from the third HDR video.
  • the type of the first HDR video is HDR10+ video
  • the type of the second HDR video is HDR10+ video
  • the type of the third HDR video is HDR10 video.
  • the processing unit of the first device is specifically configured to receive an operation for turning on the movie mode; in response to the operation of turning on the movie mode, the display unit of the first device is specifically configured to display a first interface; the first interface includes: a control for recording HDR video and a control for starting shooting; when the state of the control for recording HDR video is off, the processing unit of the first device is further specifically configured to receive an operation for turning on the control for recording HDR video; in response to the operation on the control for recording HDR video, the display unit of the first device is further specifically configured to display a second interface; the second interface includes: a prompt message indicating that the 4K HDR10+ mode has been turned on; when the state of the control for recording HDR video is on, the processing unit of the first device is specifically configured to receive an operation on the control for starting shooting.
  • when the state of the control for recording HDR video is on, the processing unit of the first device is further configured to receive an operation for turning off the control for recording HDR video; in response to the operation on the control for recording HDR video, the display unit of the first device is further configured to display a third interface; the third interface includes: a prompt message indicating that the 4K HDR10+ mode has been turned off.
  • the processing unit of the first device is further configured to receive an operation of turning on the movie mode for the first time; in response to the operation of turning on the movie mode for the first time, the display unit of the first device is further configured to display a fourth interface; the fourth interface includes: a control for recording HDR video, and a prompt message indicating that 4K HDR10+ video will be recorded after the control for recording HDR video is turned on.
  • the processing unit of the first device is specifically configured to receive an operation for turning on the movie mode; in response to the operation of turning on the movie mode, the display unit of the first device is specifically configured to display a fifth interface; the fifth interface includes: a control for viewing the setting items corresponding to the first application, and a control for starting shooting; the processing unit of the first device is further specifically configured to receive an operation on the control for viewing the setting items corresponding to the first application; in response to the operation on the control for viewing the setting items corresponding to the first application, the display unit of the first device is further specifically configured to display a sixth interface; the sixth interface includes: a first control for recording 10-bit HDR video in movie mode and switching the video to 4K.
  • the processing unit of the first device is further configured to receive an operation on a control for viewing the function details in the first application; in response to the operation on the control for viewing the function details in the first application, the display unit of the first device is further configured to display a seventh interface; wherein the seventh interface includes the function details corresponding to the movie mode, and the function details of the movie mode are used to indicate that the movie mode can record 4K HDR10+ video.
  • the processing unit of the first device is further configured to receive an operation for opening the second application; in response to the operation of opening the second application, the display unit of the first device is further configured to display an eighth interface; wherein the eighth interface includes: the first HDR video and an identifier corresponding to the first HDR video; the identifier is used to indicate the type of the first HDR video; the processing unit of the first device is further configured to receive an operation on the first HDR video; in response to the operation on the first HDR video, the display unit of the first device is further configured to display a ninth interface; the ninth interface includes the identifier.
  • the display unit of the second device is further configured to display a tenth interface; wherein the tenth interface includes: prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata, a control for allowing reception of the first HDR video, and a control for refusing to receive the first HDR video; the processing unit of the second device is further configured to receive an operation on the control for allowing reception of the first HDR video; in response to the operation on the control for allowing reception of the first HDR video, the display unit of the second device is further configured to display an eleventh interface; wherein the eleventh interface includes prompt information indicating that the first HDR video is played based on dynamic metadata.
  • the embodiment of the present application provides a video processing device, including: a processing unit, configured to receive an operation of starting shooting in the movie mode; the movie mode is a mode for recording high dynamic range (HDR) video; in response to the operation of starting shooting, the processing unit is further configured to acquire a first image sequence based on the camera; the first image sequence corresponds to a first brightness scene; the processing unit is further configured to perform encoding based on the first image sequence and first dynamic metadata corresponding to the first brightness scene to obtain a first HDR video; the first dynamic metadata includes a preset brightness; a communication unit is configured to send the first HDR video to the second device.
  • the processing unit is further configured to continue to acquire a second image sequence based on the camera; wherein the second image sequence corresponds to a second brightness scene, and the first brightness scene is different from the second brightness scene; the processing unit is further configured to perform encoding based on the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first brightness scene, and the second dynamic metadata corresponding to the second brightness scene to obtain the first HDR video.
  • the processing unit is further configured to: perform image preprocessing on the first image sequence to obtain an image-preprocessed first image sequence; perform gamma correction processing on the image-preprocessed first image sequence to obtain a gamma-corrected first image sequence; perform 3D lookup table processing on the gamma-corrected first image sequence to obtain a 3D-lookup-table-processed first image sequence; wherein the 3D-lookup-table-processed first image sequence includes first static metadata corresponding to the first image sequence; and perform encoding based on the 3D-lookup-table-processed first image sequence and the first dynamic metadata corresponding to the first brightness scene to obtain the first HDR video.
  • the first HDR video includes first static metadata and first dynamic metadata.
  • the embodiment of the present application provides a video processing apparatus, including: a communication unit, configured to acquire a first HDR video from a first device; wherein the first HDR video includes first dynamic metadata and a first image sequence; the first dynamic metadata includes a preset brightness; a processing unit, configured to adjust the brightness of the first HDR video based on the preset brightness to obtain a second HDR video; the processing unit is further configured to play the second HDR video.
  • the processing unit is specifically configured to determine a brightness ratio of the second device; the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; the processing unit is further specifically configured to adjust the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video.
  • the first HDR video includes first static metadata and first dynamic metadata.
  • when the second device determines that it supports processing the first static metadata, the processing unit is further configured to decode the second HDR video into the first image sequence and the first static metadata; the processing unit is further configured to encode a third HDR video based on the first image sequence and the first static metadata; the second HDR video is different from the third HDR video.
  • the type of the first HDR video is HDR10+ video
  • the type of the second HDR video is HDR10+ video
  • the type of the third HDR video is HDR10 video.
  • the embodiment of the present application provides a video processing device, including a processor and a memory, where the memory is used to store code instructions, and the processor is used to run the code instructions, so that the terminal device executes the video processing method described in the first aspect or any implementation manner of the first aspect, or executes the video processing method described in the second aspect or any implementation manner of the second aspect, or executes the video processing method described in the third aspect or any implementation manner of the third aspect.
  • the embodiment of the present application provides a computer-readable storage medium storing instructions, and when the instructions are executed, the computer executes the video processing method described in the first aspect or any implementation manner of the first aspect, or executes the video processing method described in the second aspect or any implementation manner of the second aspect, or executes the video processing method described in the third aspect or any implementation manner of the third aspect.
  • a computer program product includes a computer program, and when the computer program is run, the computer executes the video processing method described in the first aspect or any implementation manner of the first aspect, or executes the video processing method described in the second aspect or any implementation manner of the second aspect, or executes the video processing method described in the third aspect or any implementation manner of the third aspect.
  • FIG. 1 is a schematic diagram of a binning and DCG principle provided by an embodiment of the present application.
  • FIG. 2 is a schematic diagram of a scene provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of a first device (or a second device) provided by an embodiment of the present application;
  • FIG. 4 is a software structural block diagram of a first device provided in an embodiment of the present application.
  • FIG. 5 is a schematic diagram of an interface for starting shooting in movie mode provided by an embodiment of the present application.
  • FIG. 6 is a schematic diagram of another interface for starting shooting in movie mode provided by the embodiment of the present application.
  • FIG. 7 is a schematic diagram of an interface for viewing function details provided by an embodiment of the present application.
  • FIG. 8 is a schematic flowchart of a video processing method provided in an embodiment of the present application.
  • FIG. 9 is a schematic diagram of an image sequence and brightness scene provided by an embodiment of the present application.
  • FIG. 10 is a schematic diagram of an interface for viewing HDR10+ video provided by the embodiment of the present application.
  • FIG. 11 is a schematic diagram of a device sharing interface provided by an embodiment of the present application.
  • FIG. 12 is a schematic diagram of an interface for displaying prompt information provided by an embodiment of the present application.
  • FIG. 13 is a schematic flowchart of playing HDR10+ video provided by the embodiment of the present application.
  • FIG. 14 is a schematic flowchart of another video processing method provided by the embodiment of the present application.
  • FIG. 15 is a schematic structural diagram of a video processing device provided by an embodiment of the present application.
  • FIG. 16 is a schematic diagram of a hardware structure of another terminal device provided in an embodiment of the present application.
  • FIG. 17 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • This application relates to the field of photography. In order to facilitate the understanding of the method provided in this application, some terms in the field of photography are introduced below.
  • Binning is an image readout mode in which the charges induced in adjacent pixels are added together and read out as a single pixel. For example, when an electronic device captures an image, light reflected by a target object is collected by the camera, so that the reflected light is transmitted to the image sensor.
  • the image sensor includes a plurality of photosensitive elements, and the charge collected by each photosensitive element is a pixel, and a binning operation is performed on the pixel information.
  • binning can combine n ⁇ n pixels into one pixel.
  • binning can combine adjacent 2 ⁇ 2 pixels into one pixel, that is, the colors of adjacent 2 ⁇ 2 pixels are presented in the form of one pixel.
  • FIG. 1 is a schematic diagram of a binning and DCG principle provided by an embodiment of the present application.
  • binning can combine adjacent 2×2 pixels into one pixel, so that the image sensor can merge a 4×4 image into a 2×2 image and output the 2×2 image as the binning-based output of the image sensor.
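  • A minimal numpy sketch of the 2x2 binning read-out described above, summing the charge of each adjacent 2x2 block so that a 4x4 read-out becomes a 2x2 image; for simplicity this toy example bins a single plane, whereas a real sensor bins within each color plane of the Bayer mosaic.

```python
import numpy as np

def bin_2x2(pixels: np.ndarray) -> np.ndarray:
    """Combine each adjacent 2x2 block into one pixel by summing the
    collected charge, halving both image dimensions."""
    h, w = pixels.shape
    return pixels.reshape(h // 2, 2, w // 2, 2).sum(axis=(1, 3))

raw = np.arange(16, dtype=np.float64).reshape(4, 4)  # a 4x4 sensor read-out
print(bin_2x2(raw))  # -> 2x2 image, the 4x4 -> 2x2 merge from FIG. 1
```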
  • In an image sensor with dual conversion gain (DCG) capability, one pixel has two potential wells; the two potential wells correspond to different full-well capacities and different conversion gains (CG): a large full-well capacity corresponds to low conversion gain (LCG) and low sensitivity, and a small full-well capacity corresponds to high conversion gain (HCG) and high sensitivity.
  • the sensor can use the two potential wells (two sensitivities) and the two conversion gains in the same scene, and acquire two images with one exposure: an image in high-sensitivity mode and an image in low-sensitivity mode. The electronic device then combines the two acquired images into one image, which is the HDR technique.
  • the image sensor can further use the two conversion gains, for example, obtain output image data under the two conversion gains based on HCG and LCG respectively, fuse the HCG-based image output data and the LCG-based image output data to obtain a fused image, and output the fused image as the DCG-based output of the image sensor.
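  • A toy sketch of that DCG fusion: keep the HCG read-out where it has not saturated, and substitute the LCG read-out, rescaled by the conversion-gain ratio, where it has. The saturation threshold, the gain ratio of 4, and the hard switch (rather than a smooth blend) are assumptions of this sketch.

```python
import numpy as np

def fuse_dcg(hcg: np.ndarray, lcg: np.ndarray,
             gain_ratio: float = 4.0, hcg_saturation: float = 1.0) -> np.ndarray:
    """Fuse one exposure read out at two conversion gains into an HDR frame.

    hcg: high-conversion-gain image (high sensitivity, clips early).
    lcg: low-conversion-gain image (low sensitivity, keeps highlights).
    """
    lcg_scaled = lcg * gain_ratio                 # bring LCG to the HCG scale
    return np.where(hcg < hcg_saturation, hcg, lcg_scaled)

hcg = np.array([0.1, 0.6, 1.0, 1.0])   # clipped in the two highlight pixels
lcg = np.array([0.025, 0.15, 0.35, 0.24])
print(fuse_dcg(hcg, lcg))              # -> [0.1  0.6  1.4  0.96]
```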
  • 3D LUT technology is a color correction tool used to restore the color of log video. Traditional filters adjust parameters such as exposure and color temperature, whereas a 3D LUT can map and transform the RGB colors of the original material, so that richer color tones can be achieved based on 3D LUT technology.
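  • For illustration, applying a 3D LUT to an RGB value is usually done by trilinear interpolation between the 8 surrounding lattice entries of the color cube, as in the sketch below; the 17-point identity cube is a placeholder standing in for a real color-grading LUT.

```python
import numpy as np

def apply_3d_lut(rgb: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map normalized RGB through an (n, n, n, 3) color cube with
    trilinear interpolation between the 8 surrounding lattice points."""
    n = lut.shape[0]
    pos = np.clip(rgb, 0.0, 1.0) * (n - 1)       # coordinates in lattice units
    lo = np.minimum(pos.astype(int), n - 2)      # lower lattice corner
    t = pos - lo                                 # fractional offsets per axis
    out = np.zeros_like(rgb, dtype=float)
    for dr in (0, 1):
        for dg in (0, 1):
            for db in (0, 1):
                w = ((t[..., 0] if dr else 1 - t[..., 0]) *
                     (t[..., 1] if dg else 1 - t[..., 1]) *
                     (t[..., 2] if db else 1 - t[..., 2]))
                corner = lut[lo[..., 0] + dr, lo[..., 1] + dg, lo[..., 2] + db]
                out += np.expand_dims(w, -1) * corner
    return out

# Identity 17-point cube: output equals input; a grading LUT would perturb it.
g = np.linspace(0.0, 1.0, 17)
lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
print(apply_3d_lut(np.array([0.2, 0.5, 0.8]), lut))  # -> [0.2 0.5 0.8]
```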
  • HDR10 video is configured according to static metadata; for example, the HDR10 conversion curve (the PQ curve) is fixedly mapped according to the reference display brightness of the display.
  • the bit depth of the HDR10 video is 10 bits; the static metadata can meet the definition in SMPTE ST 2086 or other standards.
  • HDR10+ continues to improve on the basis of HDR10.
  • HDR10+ supports dynamic metadata, that is, HDR10+ can adjust or enhance image brightness, contrast, and color saturation according to different scenes in the video, so that each frame of the HDR10+ video has an independently adjusted HDR effect.
  • the bit depth of the HDR10+ video is 12 bits; the dynamic metadata can meet the definition in SMPTE ST 2094 or other standards.
  • the brightness scene may be used to distinguish brightness corresponding to different image frames.
  • the brightness scene may include: a high brightness scene, a medium brightness scene, a dark light scene, and the like.
  • the brightness scenes may correspond to different brightness ranges, for example, the first device may distinguish different brightness scenes based on light intensity (or illuminance) and the like.
  • the brightness range corresponding to a high brightness scene may be greater than 50000 lux (lux)
  • the brightness range corresponding to a medium brightness scene may be 10 lux to 50000 lux
  • the brightness range corresponding to a dark light scene may be 0 lux to 10 lux.
  • the brightness scenes described in the embodiments of the present application may not be limited to the above three types; moreover, the brightness ranges corresponding to the three brightness scenes are only used as an example, and the brightness ranges corresponding to different brightness scenes may also take other values, which is not limited in this embodiment of the present application.
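  • Using the example thresholds above, the scene decision can be as simple as the following sketch; as noted, the scene names and lux boundaries are examples only, and the embodiment does not limit them to these values.

```python
def classify_brightness_scene(illuminance_lux: float) -> str:
    """Map a measured illuminance to one of the example brightness scenes."""
    if illuminance_lux > 50000:
        return "high brightness scene"
    if illuminance_lux > 10:
        return "medium brightness scene"
    return "dark light scene"

for lux in (80000, 300, 5):
    print(lux, "->", classify_brightness_scene(lux))
```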
  • words such as “first” and “second” are used to distinguish the same or similar items with basically the same function and effect.
  • the first value and the second value are only used to distinguish different values, and their sequence is not limited.
  • words such as "first" and "second" do not limit the quantity or execution order, and words such as "first" and "second" do not necessarily limit the objects to be different.
  • At least one means one or more, and “multiple” means two or more.
  • “And/or” describes the association relationship of associated objects, indicating that there may be three types of relationships, for example, A and/or B, which can mean: A exists alone, A and B exist at the same time, and B exists alone, where A, B can be singular or plural.
  • the character “/” generally indicates that the contextual objects are an “or” relationship.
  • “At least one of the following" or similar expressions refer to any combination of these items, including any combination of single or plural items.
  • At least one item (piece) of a, b, or c can represent: a, b, c, a and b, a and c, b and c, or a, b and c, wherein a, b, c can be single or multiple.
  • FIG. 2 is a schematic diagram of a scenario provided by an embodiment of the present application.
  • the scene may include a first device 201 and a second device 202 .
  • the first device 201 is a mobile phone and the second device 202 is a tablet as an example for illustration, and this example does not constitute a limitation to the embodiment of the present application.
  • the first device 201 can be used to record video with a camera, and send the video content to the second device 202, so that the second device 202 can be used to play the video using a display screen.
  • the first device 201 can use the magic-log technology to retain as much of the dynamic range information of the picture captured by the camera sensor as possible, and convert the dynamic range information into HDR video with different color styles through the 3D LUT technology.
  • the video may be an HDR10 video supporting BT.2020 wide color gamut.
  • the first device 201 may send the HDR10 video to the second device 202 .
  • the HDR10 conversion curve (the PQ curve) is fixedly mapped according to absolute brightness; for example, the absolute brightness may be the reference display brightness of the display of the second device 202, such as 1000 nit. Therefore, when the HDR10 video is displayed on a second device with a peak brightness of 1000 nit, the PQ curve can well present a normal brightness mapping within 1000 nit. Here, the peak brightness may be understood as the highest brightness supported by the hardware of the second device.
  • However, when the peak brightness that the hardware of the second device 202 can support does not reach 1000 nit, for example, when the peak brightness that the hardware of the second device 202 can support is 500 nit, the second device 202 cannot implement brightness mapping for highlight scenes with brightness above 500 nit and below 1000 nit in an HDR10 video whose reference display brightness is 1000 nit, resulting in the loss of highlight information in those highlight scenes.
  • In other words, the peak brightness supported by the hardware of the second device 202 affects the display of HDR10 video based on the PQ curve.
  • In view of this, an embodiment of the present application provides a video processing method, so that the first device can match different dynamic metadata to scenes of different brightness corresponding to the multi-frame images acquired by the camera, adjust the multi-frame images respectively by using the different dynamic metadata to obtain an HDR video, and send the HDR video to the second device, so that the second device can perform brightness mapping on the HDR video based on the HDR video and the peak brightness that the hardware of the second device can support, and display video content with appropriate brightness.
  • The first device (or the second device) may also be called a terminal, user equipment (UE), a mobile station (MS), a mobile terminal (MT), etc.
  • the first device (or the second device) can be a mobile phone, a smart TV, a wearable device, a tablet computer (Pad), a computer with a wireless transceiver function, a virtual reality (VR) terminal, an augmented reality (AR) terminal, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote surgery, a wireless terminal in smart grid, a wireless terminal in transportation safety, a wireless terminal in smart city, a wireless terminal in smart home, or the like that supports the video recording function (or the video playback function).
  • the embodiments of the present application do not limit the specific technology and specific device form adopted by the first device (or the second device).
  • FIG. 3 is a schematic structural diagram of a first device (or a second device) provided in an embodiment of the present application.
  • the first device may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (universal serial bus, USB) interface 130, a charging management module 140, a power management module 141, and an antenna 1 , antenna 2, mobile communication module 150, wireless communication module 160, audio module 170, speaker 170A, receiver 170B, microphone 170C, earphone jack 170D, sensor module 180, button 190, indicator 192, camera 193, and display screen 194, etc.
  • the structure shown in the embodiment of the present application does not constitute a specific limitation on the first device (or the second device).
  • the first device (or the second device) may include more or fewer components than those shown in the illustrations, or combine some components, or separate some components, or arrange different components.
  • the illustrated components can be implemented in hardware, software or a combination of software and hardware.
  • Processor 110 may include one or more processing units. Wherein, different processing units may be independent devices, or may be integrated in one or more processors.
  • a memory may also be provided in the processor 110 for storing instructions and data.
  • For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), etc.
  • the USB interface 130 is an interface conforming to the USB standard specification, specifically, it can be a Mini USB interface, a Micro USB interface, a USB Type C interface, and the like.
  • the USB interface 130 can be used to connect a charger to charge the first device (or the second device), and can also be used to transmit data between the first device (or the second device) and peripheral devices. It can also be used to connect headphones and play audio through them.
  • the interface can also be used to connect other first devices (or second devices), such as AR devices.
  • the charging management module 140 is configured to receive a charging input from a charger.
  • the charger may be a wireless charger or a wired charger.
  • the power management module 141 is used for connecting the charging management module 140 and the processor 110 .
  • the wireless communication function of the first device can be realized by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor and the baseband processor.
  • Antenna 1 and Antenna 2 are used to transmit and receive electromagnetic wave signals.
  • Antennas in the first device (or second device) may be used to cover single or multiple communication frequency bands. Different antennas can also be multiplexed to improve the utilization of the antennas.
  • the mobile communication module 150 may provide wireless communication solutions including 2G/3G/4G/5G applied on the first device (or the second device).
  • the mobile communication module 150 may include at least one filter, switch, power amplifier, low noise amplifier (low noise amplifier, LNA) and the like.
  • the mobile communication module 150 can receive electromagnetic waves through the antenna 1, filter and amplify the received electromagnetic waves, and send them to the modem processor for demodulation.
  • the wireless communication module 160 can provide wireless communication solutions applied on the first device (or the second device), including wireless local area network (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), and the like.
  • the first device realizes the display function through the GPU, the display screen 194, and the application processor.
  • the GPU is a microprocessor for image processing, and is connected to the display screen 194 and the application processor. GPUs are used to perform mathematical and geometric calculations for graphics rendering.
  • the display screen 194 is used to display images, videos and the like.
  • the display screen 194 includes a display panel.
  • the first device (or the second device) may include 1 or N display screens 194, where N is a positive integer greater than 1.
  • the first device (or the second device) can realize the shooting function through ISP, camera 193 , video codec, GPU, display screen 194 and application processor.
  • the ISP is used for processing the data fed back by the camera 193 .
  • the light is transmitted to the photosensitive element of the camera through the lens, and the optical signal is converted into an electrical signal, and the photosensitive element of the camera transmits the electrical signal to the ISP for processing, and converts it into an image visible to the naked eye.
  • ISP can also perform algorithm optimization on image noise, brightness, and skin color.
  • ISP can also optimize the exposure, color temperature and other parameters of the shooting scene.
  • the ISP may be located in the camera 193 .
  • Camera 193 is used to capture still images or video.
  • the object generates an optical image through the lens and projects it to the photosensitive element.
  • the photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor.
  • the photosensitive element converts the light signal into an electrical signal, and then transmits the electrical signal to the ISP to convert it into a digital image signal.
  • the ISP outputs the digital image signal to the DSP for processing.
  • DSP converts digital image signals into standard RGB, YUV and other image signals.
  • Digital signal processors are used to process digital signals. In addition to digital image signals, they can also process other digital signals. For example, when the first device (or the second device) selects a frequency point, the digital signal processor is used to perform Fourier transform on the energy of the frequency point.
  • Video codecs are used to compress or decompress digital video.
  • the first device may support one or more video codecs.
  • the first device (or the second device) may include 1 or N cameras 193, where N is a positive integer greater than 1.
  • the external memory interface 120 may be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the first device (or the second device).
• the external memory card communicates with the processor 110 through the external memory interface 120 to implement a data storage function, such as saving music, video, and other files in the external memory card.
  • the internal memory 121 may be used to store computer-executable program codes including instructions.
  • the internal memory 121 may include an area for storing programs and an area for storing data.
  • the first device can implement audio functions through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the earphone interface 170D, and the application processor. Such as music playback, recording, etc.
  • the audio module 170 is used to convert digital audio information into analog audio signal output, and is also used to convert analog audio input into digital audio signal.
• Speaker 170A, also referred to as a "horn", is used to convert audio electrical signals into sound signals.
  • the first device (or the second device) may listen to music through speaker 170A, or listen to a hands-free call.
• Receiver 170B, also called the "earpiece", is used to convert audio electrical signals into sound signals. When the first device (or the second device) answers a phone call or a voice message, it can listen to the voice by placing the receiver 170B close to the human ear.
  • the earphone interface 170D is used for connecting wired earphones.
• the microphone 170C, also called a "mike" or "mic", is used to convert sound signals into electrical signals.
  • the sensor module 180 may include one or more of the following sensors, for example: pressure sensor, gyroscope sensor, air pressure sensor, magnetic sensor, acceleration sensor, distance sensor, proximity light sensor, fingerprint sensor, temperature sensor, touch sensor, ambient light sensors, or bone conduction sensors, etc. (not shown in Figure 3).
  • the keys 190 include a power key, a volume key and the like.
• the key 190 may be a mechanical key or a touch key.
  • the first device (or the second device) can receive key input, and generate key signal input related to user settings and function control of the first device (or second device).
  • the indicator 192 can be an indicator light, and can be used to indicate charging status, power change, and can also be used to indicate messages, missed calls, notifications, and the like.
  • the software system of the first device may adopt a layered architecture, an event-driven architecture, a micro-kernel architecture, a micro-service architecture, or a cloud architecture, etc., which will not be repeated here.
  • FIG. 4 is a software structural block diagram of a first device provided in an embodiment of the present application.
  • the layered architecture divides the software into several layers, and each layer has a clear role and division of labor. Layers communicate through software interfaces.
  • the Android system is divided into four layers, which are respectively an application program layer, an application program framework layer, a hardware abstraction layer (hardware abstraction layer, HAL) layer, and a kernel layer from top to bottom.
  • the application layer can consist of a series of application packages. As shown in FIG. 4 , the application package may include one or more of the following, for example: application programs such as camera, setting, map, or music.
  • the application framework layer provides an application programming interface (application programming interface, API) and a programming framework for applications in the application layer.
  • the application framework layer includes some predefined functions.
  • the application framework layer may include: a media framework module, a window manager, and the like.
• the media framework module is used to encode the multi-frame images obtained based on the camera driver to obtain a video; or, the media framework module can also be used to decode a received video to obtain multi-frame images and the metadata corresponding to the multi-frame images, such as dynamic metadata or static metadata.
  • a window manager is used to manage window programs.
  • the window manager can obtain the size of the display screen, determine whether there is a status bar, lock the screen, touch the screen, drag the screen, capture the screen, etc.
  • the application framework layer may further include: a notification manager, a content provider, a resource manager, and a view system.
  • the notification manager enables the application to display notification information in the status bar, which can be used to convey notification-type messages, and can automatically disappear after a short stay without user interaction.
  • the notification manager is used to notify the download completion, message reminder, etc.
• the notification manager can also display notifications in the top status bar of the system in the form of a chart or scroll-bar text, such as a notification of an application running in the background, or a notification that appears on the screen in the form of a dialog window; for example, text information is prompted in the status bar, a sound is made, the device vibrates, or the indicator light flashes.
  • Content providers are used to store and retrieve data and make it accessible to applications.
  • Data can include videos, images, audio, calls made and received, browsing history and bookmarks, phonebook, etc.
  • the resource manager provides various resources for the application, such as localized strings, icons, pictures, layout files, video files, and so on.
  • the view system includes visual controls, such as controls for displaying text, controls for displaying pictures, and so on.
  • the view system can be used to build applications.
  • a display interface can consist of one or more views.
  • a display interface including a text message notification icon may include a view for displaying text and a view for displaying pictures.
• the purpose of the hardware abstraction layer is to abstract the hardware; it can provide upper-layer applications with a unified interface for querying hardware devices, such as interfaces that follow the HAL interface definition language (HIDL) protocol.
  • the hardware abstraction layer may include: a frame-by-frame statistics module, a codec, and so on.
• the frame-by-frame statistics module is used to perform frame-by-frame statistics on the multi-frame images obtained from the camera driver, determine the brightness scenes corresponding to the multi-frame images, and match the corresponding tone mapping curves to obtain the dynamic metadata corresponding to the multi-frame images.
  • the codec is used to store the result of encoding or decoding via the media framework module. For example, when the codec receives video sent via the media framework module, the codec can save the video as required.
• the hardware abstraction layer may also include: an audio interface, a video interface, a call interface, and a global positioning system (GPS) interface (not shown in FIG. 4), which is not limited in the embodiments of the present application.
  • the kernel layer is the layer between hardware and software.
  • the kernel layer includes at least a display driver, a camera driver, an audio driver, and a sensor driver.
• when the touch sensor receives the user's touch operation for the movie mode in the camera application, a corresponding hardware interrupt is sent to the kernel layer; the kernel layer processes the touch operation into an original input event (including touch coordinates, the timestamp of the touch operation, etc.), and the original input event is stored in the kernel layer.
  • the application framework layer obtains the original input event from the kernel layer, and identifies the control corresponding to the input event.
  • the camera application calls the interface of the application framework layer to start the camera application.
• the camera application sends an instruction for encoding the image sequence to the camera driver in the kernel layer through the media framework module in the application framework layer and the frame-by-frame statistics module in the hardware abstraction layer, and the camera driver captures an image sequence through the camera.
• the camera driver sends the acquired image sequence to the frame-by-frame statistics module, so that the frame-by-frame statistics module can perform statistics on the acquired image sequence, determine the brightness scenes respectively corresponding to the multi-frame images, and match the corresponding tone mapping curves, to obtain the dynamic metadata respectively corresponding to the multi-frame images.
  • the frame-by-frame statistics module may send multiple frames of images and dynamic metadata corresponding to the multiple frames of images to the media framework module.
  • the media framework module performs encoding based on the multi-frame images and the dynamic metadata corresponding to the multi-frame images respectively, to obtain the HDR video.
  • the media framework module may send the HDR video to the codec in the hardware abstraction layer for storage, so that the first device may implement processing and recording of the HDR video.
  • the first device may acquire HDR10+ video in two user triggering manners.
• in one manner, the first device can enable the user to turn on the 4K HDR function in the movie mode of the camera application, and then, when the first device receives the user's operation to start recording, the first device can record an HDR10+ video based on the 4K HDR function (as in the embodiment corresponding to FIG. 5); in the other manner, the first device can enable the user to turn on movie HDR in the setting interface of the camera application, and then, when the first device receives the user's operation to start recording, the first device can record an HDR10+ video based on the 4K HDR function (as in the embodiment corresponding to FIG. 6).
  • the first device can enable the user to enable the 4K HDR function in the movie mode of the camera application, and then when the first device receives the user's operation to enable recording, the first device can record HDR10+ video based on the 4K HDR function .
  • FIG. 5 is a schematic diagram of an interface for starting shooting in movie mode according to an embodiment of the present application.
• when the first device receives the user's operation to start the camera application, the first device can display the interface shown in a in FIG. 5, which can be the main interface of the camera application (or understood as the interface corresponding to the photo-taking mode).
  • the interface may include one or more of the following, for example: a camera control corresponding to the camera function, a preview image, a control for enabling an artificial intelligence (artificial intelligence, AI) camera function, Controls for turning the flash on or off, settings controls for configuring the camera app, controls for adjusting the zoom factor, controls for flipping the camera, controls for opening the gallery, and more.
  • the interface shown in a in Figure 5 may also include multiple functional controls in the first-level menu of the camera application, for example: a control for turning on the night scene mode, a control for turning on the portrait mode, and a control for turning on the photo mode , the control for enabling video recording mode, and the control 501 for enabling movie mode, etc.
  • the control for opening the gallery can be used to open the gallery application.
  • the gallery application program is an application program for picture management on electronic devices such as smart phones and tablet computers, and may also be called "album".
  • the name of the application program is not limited in this embodiment.
  • the gallery application program can support the user to perform various operations on the videos stored on the first device, such as browsing, editing, deleting, selecting and other operations.
  • the camera application can be an application supported by the system of the first device, or the camera application can also be an application with a video recording function, etc.;
  • the movie mode can be a shooting mode for obtaining HDR video;
• the operation of starting shooting may be a voice operation, or may be a tap operation or a slide operation on the control for starting shooting in movie mode, or the like.
• when the first device receives the user's operation triggering the control 501 for starting the movie mode, the first device may display the interface shown in b in FIG. 5.
• the interface shown in b in FIG. 5 can be the interface corresponding to the movie mode, and the interface can include one or more of the following, for example: a slow motion function control, a 4K HDR function control 502, a control for turning the flashlight on or off, a LUT function control, a setting control for setting the camera application, and a control 503 for starting shooting in movie mode, etc.
• the interface shown in b in FIG. 5 may also include multiple function controls in the first-level menu of the camera application, for example: a control for enabling professional mode, a control for enabling more functions, etc.; for the other content in the interface, please refer to the interface shown in a in FIG. 5, which will not be repeated here.
• the interface shown in b in FIG. 5 may include: prompt information 504 corresponding to the 4K HDR function control 502 (the control within the range of the dotted-line box); the prompt information 504 is used to indicate that, when the user turns on the 4K HDR function control 502, the first device will record 4K HDR10+ video.
• the 4K HDR function control 502 may be in a default off state; when the first device receives the user's operation triggering the 4K HDR function control 502, the first device may display the interface shown in c in FIG. 5; the interface may include prompt information 505, and the prompt information 505 is used to indicate that the 4K HDR10+ mode has been turned on.
• the prompt information 505 may disappear after being displayed for 2 seconds or another period of time.
• when the 4K HDR function control 502 is in an on state and the first device receives the user's operation triggering the control 503 for starting shooting in the interface shown in c in FIG. 5, the first device may acquire an image sequence based on the camera, and obtain the first HDR10+ video by processing the image sequence.
  • other content displayed in the interface shown in c in FIG. 5 is similar to the interface shown in b in FIG. 5 , and will not be repeated here.
• when the 4K HDR function is turned off, the interface shown in d in FIG. 5 may be displayed; the interface may include prompt information 506, and the prompt information 506 is used to indicate that the 4K HDR10+ mode is turned off.
• the prompt information 506 may disappear after being displayed for 2 seconds or another period of time.
  • 4K refers to the resolution of the screen, and the resolution of 4K is 4096 ⁇ 2160.
• HDR refers to a rendering technology for the picture; compared with ordinary images, the 4K HDR function can provide a greater dynamic range and more image detail, and can better reflect the visual effects of the real environment. This mode can make the video recorded by the electronic device 100 have a resolution of 4K and a frame rate of 30 fps.
• the 4K HDR function is initially turned off; at this time, the 4K HDR function control 502 in the interface shown in b in FIG. 5 is displayed with a slash indicating that the switch is off.
• when the first device detects the user's trigger operation on the 4K HDR function control 502, the first device turns on the 4K HDR function, and the slash indicating that the switch is off disappears from the 4K HDR function control 502 in the interface shown in c in FIG. 5; further, when the first device detects that the user triggers the 4K HDR function control 502 in the interface shown in c in FIG. 5, the first device turns off the 4K HDR function, and in the interface shown in d in FIG. 5, the 4K HDR function control 502 is again displayed with the slash indicating that the switch is off.
• when the 4K HDR function is turned off, the resolution of the preview screen is lower than that of the preview screen when the 4K HDR function is enabled.
• when the 4K HDR function is enabled, HDR10 video can be displayed in the preview screen.
  • users can flexibly control the 4K HDR controls in movie mode according to shooting needs, and then realize the recording of HDR10+ videos.
  • the first device can enable the user to enable movie HDR in the setting interface of the camera application, and then when the first device receives the user's operation of enabling recording, the first device can record HDR10+ video based on the 4K HDR function .
  • FIG. 6 is another schematic diagram of an interface for starting shooting in a movie mode provided by an embodiment of the present application.
• the first device may display the interface shown in a in FIG. 6, which may include: a setting control 601, a control 602 for starting shooting in the movie mode, and the like.
  • other content displayed in the interface is similar to the interface shown in b in FIG. 5 , and will not be repeated here.
  • the first device may display the interface shown in b in FIG. 6, which may be a setting interface of the camera application.
• the interface shown in b in FIG. 6 can include function controls corresponding to taking photos, for example: a photo-taking ratio function control (such as supporting a 4:3 photo-taking ratio), a voice-activated photo-taking function control, a gesture photo-taking function control, a smile capture function control, etc.
• the gesture photo-taking function can only be supported with the front camera, and is triggered by a gesture made facing the mobile phone.
• the smile capture function can automatically shoot when a smile is detected; the interface can also include function controls corresponding to the video function, for example: a video resolution function control, a video frame rate function control, a high-efficiency video format function control, a movie HDR function control 603, and an AI movie tone function control; the high-efficiency video format function can save 35% of space, but users may not be able to play videos in this format on other devices; the movie HDR function can use 10-bit HDR to record video, and the video will automatically switch to 4K; the AI movie tone function can intelligently identify the shooting content to match a LUT tone, and is only supported in non-4K HDR.
• when the first device receives the user's operation to enable the movie HDR function control 603, the first device may display the interface shown in c in FIG. 6, in which the movie HDR function control 603 is on.
• other content displayed in the interface shown in c in FIG. 6 is similar to the interface shown in b in FIG. 6, and will not be repeated here.
• when the movie HDR function control 603 in the interface shown in c in FIG. 6 is turned on, and the first device receives the user's operation of exiting the setting interface followed by the operation triggering the control 602 for starting shooting in the displayed interface, the first device may acquire an image sequence based on the camera, and obtain the first HDR10+ video by processing the image sequence. It can be understood that, when the movie HDR function control is turned on, HDR10 video can be displayed in the preview screen.
  • the user can flexibly control the movie HDR function control in the setting function according to the shooting needs, and then realize the recording of HDR10+ video.
• the first device may display the function introduction corresponding to the movie mode.
  • FIG. 7 is a schematic diagram of an interface for viewing function details provided by an embodiment of the present application.
• the interface shown in a in FIG. 7 may include: a control 701 for opening more functions; other content displayed in the interface is similar to the interface shown in b in FIG. 5, which will not be repeated here.
• when the first device receives the user's operation on the control 701 for enabling more functions, the first device may display the interface shown in b in FIG. 7.
• the interface shown in b in FIG. 7 can include: an HDR function control, a slow motion function control, a micro movie function control, a time-lapse photography function control, a dynamic photo function control, a download control for downloading more functions, an editing control for adjusting the position of each function among the more controls, and a detail control 702 for viewing detailed information of each function in the camera application, and the like.
• when the first device receives the user's operation on the detail control 702, the first device may display the interface shown in c in FIG. 7.
  • the interface shown in c in Figure 7 can display detailed descriptions corresponding to each function in the camera application.
• the interface can include: a detailed description corresponding to the HDR function; a detailed description corresponding to the slow motion function, for example, slow motion and super slow motion support automatically or manually shooting ultra-high-speed short videos, and the shooting effect is better in bright environments; a detailed description corresponding to the time-lapse photography function, for example, synthesizing images recorded over a long time into a short video to reproduce the process of scene changes in a short time; and a detailed description 703 corresponding to the movie mode, for example, recording 4K HDR10+ video and providing professional video solutions, etc.
  • the user can understand the function of each function in the camera application through the details page as shown in c in FIG. 7 , thereby improving the user's experience of using the camera application.
  • the first device may process the image sequence acquired based on the camera to obtain a preview image sequence corresponding to the preview stream.
  • the first device may also process the image sequence acquired based on the camera to obtain the HDR10+ video corresponding to the video stream.
  • the processing process may include image pre-processing and image post-processing.
  • FIG. 8 is a schematic flowchart of a video processing method provided by an embodiment of the present application.
  • the camera of the first device may include an image sensor for supporting the HDR function, for example, the image sensor may implement image output based on DCG mode and image output based on binning mode.
• the output data supported by the DCG mode may include: a frame rate of 30 fps, support for 12-bit data storage, and an output format of RAW12; the output data supported by the binning mode may include: a frame rate of 30 fps, support for 12-bit data storage, and an output format of RAW12. It is understandable that binning only has data in the upper 10 bits, so binning needs to pad the lower two bits to ensure 12-bit data storage.
  • the image sensor may output image data based on binning, or may also output image data based on DCG.
• binning can output an image sequence based on combining n (for example, n may be 4) pixels into one pixel; DCG, after combining n pixels into one pixel, can fuse the image data output based on HCG (high conversion gain) with the image data output based on LCG (low conversion gain) to output an image sequence (see the sketch below).
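• The following is an illustrative sketch (in Python with numpy) of the two sensor output modes just described: binning averages n = 4 neighboring pixels into one, and DCG fuses the HCG and LCG readouts of the same frame. The 2x2 block layout and the saturation threshold are assumptions for illustration, not the patent's implementation.

```python
import numpy as np

def binning_4to1(raw: np.ndarray) -> np.ndarray:
    """Combine each 2x2 block of raw pixels into one pixel (n = 4)."""
    h, w = raw.shape
    blocks = raw.reshape(h // 2, 2, w // 2, 2)
    binned = blocks.mean(axis=(1, 3)).astype(np.uint16)
    # binning only carries 10 significant bits, so pad the lower two
    # bits to store the result as 12-bit data (RAW12)
    return binned << 2

def dcg_fusion(hcg: np.ndarray, lcg: np.ndarray, sat: int = 3000) -> np.ndarray:
    """Fuse a high-conversion-gain readout (clean dark detail) with a
    low-conversion-gain readout (preserved highlights) of the same
    binned frame into a single 12-bit frame."""
    hcg = hcg.astype(np.uint16)
    lcg = lcg.astype(np.uint16)
    # keep HCG where it is below the assumed saturation level,
    # fall back to LCG in the highlights
    return np.where(hcg < sat, hcg, lcg)
```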
  • the first device may perform image pre-processing 801 on the image sequence to obtain an image sequence after the image pre-processing 801 .
  • the image pre-processing 801 (or referred to as image signal processor front-end processing) is used to process the image in RAW format acquired based on the camera into an image in YUV (or understood as brightness and chrominance) format.
• the image pre-processing 801 process may include one or more of the following: dead pixel correction processing, RAW domain noise reduction processing, black level correction processing, optical shading correction processing, automatic white balance processing, color interpolation processing, color correction processing, tone mapping processing, or image conversion processing, etc.; the image pre-processing 801 process is not limited in this embodiment of the present application.
  • the first device may use the image sequence after image pre-processing as a preview stream and a video stream.
• the first device may perform gamma (Gamma) correction processing 802 and 3D LUT processing 803 on the pre-processed image sequence corresponding to the preview stream to obtain a preview image sequence; in the video stream, the first device may perform Gamma correction processing 802 and 3D LUT processing 803 on the pre-processed image sequence corresponding to the video stream to obtain a video image sequence.
  • the recorded image sequence may include the first image sequence and the second image sequence described in the embodiments of the present application.
• the Gamma correction processing 802 is used to adjust the brightness of the image, so that more details of the bright and dark parts can be retained, the contrast can be compressed, and more color information can be retained.
• the first device can apply a log curve to perform Gamma correction processing on the image sequence after image pre-processing corresponding to the preview stream and on the image sequence after image pre-processing corresponding to the video stream, to obtain the gamma-corrected image sequences corresponding to the two streams (a minimal sketch of such a log curve is given below).
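• As a minimal sketch of the log-curve Gamma correction described above, the function below maps normalized linear values through a log curve that lifts shadows and compresses highlights; the exact curve and constants used by the device are not specified here, so the values are illustrative only.

```python
import numpy as np

def log_gamma(y: np.ndarray, max_in: float = 4095.0) -> np.ndarray:
    """Map linear 12-bit pixel values to a log-domain signal in [0, 1],
    retaining more detail in both dark and bright parts."""
    y = np.clip(y / max_in, 0.0, 1.0)             # normalize the input
    return np.log1p(255.0 * y) / np.log1p(255.0)  # illustrative log curve
```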
  • the 3D LUT processing 803 is used to map the color space in the image, so that the data through the 3D LUT can produce different color styles.
• the first device respectively performs 3D LUT color mapping on the gamma-corrected image sequence corresponding to the preview stream and on the gamma-corrected image sequence corresponding to the video stream, to obtain the preview image sequence corresponding to the preview stream and the video image sequence corresponding to the video stream (a simplified sketch of 3D LUT mapping follows).
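• A simplified sketch of 3D LUT color mapping: each RGB triple indexes into an N×N×N lattice and is replaced by the color stored there. A real pipeline would typically interpolate (for example trilinearly) between lattice points; nearest-neighbor lookup is used here only to keep the sketch short.

```python
import numpy as np

def apply_3d_lut(rgb: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """rgb: HxWx3 floats in [0, 1]; lut: NxNxNx3 lattice of output colors."""
    n = lut.shape[0]
    idx = np.clip((rgb * (n - 1)).round().astype(int), 0, n - 1)
    # look up the output color for every pixel's (r, g, b) lattice index
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]
```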
  • the images in the preview image sequence and the recorded image sequence may all be images satisfying the PQ curve of the BT.2020 color gamut.
• the reference brightness that can be supported in the PQ curve is 1000 nit; the PQ curve can be stored in the first device as static metadata; the format of the static metadata can meet SMPTE ST 2086 or another custom format, etc.; the specific format of the static metadata is not specifically limited in the embodiments of the application.
  • the Gamma correction processing 802 and the 3D LUT processing 803 may be part of image post-processing (or called image processor back-end processing).
  • the image post-processing may further include: other processing steps such as anti-shake processing, noise processing, and image scaling processing, which are not limited in this embodiment of the present application.
• the first device performs frame-by-frame statistical processing 804 on the images in the preview image sequence, determines the tone mapping curves corresponding to the multi-frame images in the video image sequence, and generates dynamic metadata, so that the first device can encode the video image sequence, together with the dynamic metadata, into an HDR10+ video.
• the first device may encode the HDR10+ video based on the recorded image sequence and the dynamic metadata when it receives the user's operation of ending video recording in movie mode (a sketch of pairing frames with their metadata for encoding is given below).
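• A minimal sketch of that encode step, assuming frames and dynamic metadata are keyed by timestamp as described above; `encode_hdr10_plus` is a hypothetical stand-in for the platform's codec interface, not a real API.

```python
def build_encoder_input(frames: dict, metadata: dict) -> list:
    """Pair each recorded frame with the dynamic metadata generated for it.
    frames: timestamp -> image; metadata: timestamp -> dynamic metadata."""
    pairs = []
    for ts in sorted(frames):
        if ts in metadata:            # the frame and its metadata form a pair
            pairs.append((frames[ts], metadata[ts]))
    return pairs

# hypothetical encode call:
# first_hdr10_plus_video = encode_hdr10_plus(build_encoder_input(frames, metadata))
```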
  • FIG. 9 is a schematic diagram of an image sequence and brightness scene provided by an embodiment of the present application.
  • the brightness scene of multiple frames of images generated within 1 second (s) is schematically described by taking a frame rate of 30 fps as an example.
• the first device can acquire an image 901 at about 33 milliseconds (ms), an image 902 at about 66 ms, an image 903 at about 99 ms, an image 904 at about 132 ms, ..., an image 905 at about 233 ms, an image 906 at about 266 ms, an image 907 at about 299 ms, an image 908 at about 332 ms, and so on.
• for example, the user can obtain image 901, image 902, image 903, image 904, ..., and image 905 outdoors; when the user moves from outdoors to indoors, the user can obtain image 906, image 907, image 908, and the like.
• the brightness scenes of image 901 and image 902 may be the same, for example, image 901 and image 902 may both belong to a high-brightness scene; or the brightness scenes of image 901 and image 902 can also be different, for example, image 901 can belong to a high-brightness scene and image 902 can belong to a medium-brightness scene, etc.
• the first image sequence described in the embodiment of the present application may be an image frame at a certain moment, for example, the first image sequence may be image 901; or the first image sequence may also be a collective term for the image frames of a certain time period, for example, the first image sequence may include: image 901, image 902, image 903, image 904, ..., and image 905.
• similarly, the second image sequence described in the embodiment of the present application may be an image frame at a certain moment, for example, the second image sequence may be image 906; or the second image sequence may also be a collective term for the image frames of a certain time period, for example, the second image sequence may include: image 906, image 907, and image 908.
  • the brightness scene corresponding to the first image sequence is different from the brightness scene corresponding to the second image sequence.
• the tone mapping curve can adjust the brightness of areas in the image based on the reference brightness, so that the highlight areas and the dark areas in the image can be protected, for example, by enhancing the dark areas in the image and suppressing the highlight areas in the image.
  • the reference brightness of the tone mapping curve may be preset, for example, the preset reference brightness may be set to 400nit or other values.
• the process for the first device to determine the tone mapping curves corresponding to the multi-frame images in the video image sequence may be as follows: the first device may determine the brightness scenes corresponding to the multi-frame images in the video image sequence, and then, based on the correspondence between brightness scenes and tone mapping curves, determine the tone mapping curve corresponding to each brightness scene.
• the brightness scene may include: a high-brightness scene, a medium-brightness scene, a dark-light scene, and the like; the brightness scenes are not limited to the above three types, and may also be four types, five types, or six types, etc.; the names and number of the brightness scenes are not limited in the embodiments of the present application.
  • the first device may determine, based on the grayscale histogram of the preview image, the average brightness value of the preview image, etc., the brightness scenes corresponding to the multiple frames of images in the recorded image sequence.
• in one implementation, the first device may store grayscale histograms corresponding to typical brightness scenes, so the first device may separately count the grayscale histograms corresponding to the multiple frames of images in the recorded image sequence; if the grayscale histogram of a preview image matches the grayscale histogram of a typical brightness scene, the first device may determine the brightness scene corresponding to the preview image.
  • the grayscale histogram is used to represent the brightness distribution of pixels in the preview image, and the brightness can be understood as the value corresponding to the Y channel (or Y component) when the image is in YUV format.
• in another implementation, the first device can separately count the average brightness value of the pixels in the multi-frame images in the recorded image sequence, and if the average brightness value is greater than the brightness threshold corresponding to a brightness scene, the first device can determine the brightness scene corresponding to the preview image.
• it can be understood that the method for the first device to determine the brightness scenes corresponding to the multiple frames of images in the recorded image sequence may not be limited to the above two methods, which are not limited in this embodiment of the present application (a sketch of both strategies is given below).
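• A hedged sketch of the two scene-determination strategies above: classifying by the average Y (luma) value against per-scene thresholds, and matching the frame's grayscale histogram against stored histograms of typical brightness scenes. The threshold values and the three scene names are assumptions for illustration.

```python
import numpy as np

DARK_MAX, MEDIUM_MAX = 60, 160  # assumed Y-channel thresholds (8-bit scale)

def scene_from_mean(y_plane: np.ndarray) -> str:
    """Classify a frame by the average value of its Y (luma) plane."""
    mean_y = float(y_plane.mean())
    if mean_y <= DARK_MAX:
        return "dark"
    return "medium" if mean_y <= MEDIUM_MAX else "bright"

def scene_from_histogram(y_plane: np.ndarray, templates: dict) -> str:
    """Classify a frame by comparing its grayscale histogram against stored
    histograms of typical brightness scenes (templates: name -> histogram)."""
    hist, _ = np.histogram(y_plane, bins=256, range=(0, 256), density=True)
    # pick the stored scene whose histogram is closest (L1 distance)
    return min(templates, key=lambda name: np.abs(templates[name] - hist).sum())
```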
• the first device may determine, based on the correspondence between brightness scenes and tone mapping curves, the tone mapping curve corresponding to each brightness scene, and generate dynamic metadata.
  • the first device may store the corresponding relationship between the brightness scene and the tone mapping curve, so the first device may match the corresponding relationship to obtain the tone mapping curve corresponding to the current brightness scene, and obtain dynamic metadata.
  • the first device may determine a corresponding tone mapping curve in real time according to a brightness scene, and generate dynamic metadata.
  • the dynamic metadata may include: a reference brightness value of the tone mapping curve, for example, 400 nit.
• the tone mapping curve can be stored in the first device in the form of dynamic metadata, and the format of the dynamic metadata can differ according to the protocol; for example, the format of the dynamic metadata can meet SMPTE ST 2094 (which supports application1, application2, application3, or application4), or another custom format, etc.; the specific format of the dynamic metadata is not specifically limited in this embodiment of the application.
• the dynamic metadata specified in SMPTE ST 2094-application4 may include one or more of the following: information about windows in an image (a window may be a rectangular area set in an image), the size and position of a window, the RGB value of the brightest pixel in the window, the maximum of the averages of the R, G, and B pixel values in the window, the percentage level of bright brightness in the window, the percentile level of bright brightness in the window, and the maximum brightness value in the scene (a rough sketch of such a per-frame record follows).
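• As a rough sketch, a per-frame record carrying such dynamic metadata might look like the structure below; the field names paraphrase the description above and are not the normative SMPTE ST 2094-40 syntax.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DynamicMetadata:
    reference_brightness_nit: float = 400.0                 # preset brightness
    window_rect: Tuple[int, int, int, int] = (0, 0, 0, 0)   # x, y, width, height
    brightest_pixel_rgb: Tuple[int, int, int] = (0, 0, 0)
    max_rgb_component_average: float = 0.0
    bright_percentile_levels: List[float] = field(default_factory=list)
    scene_max_brightness_nit: float = 0.0
    # anchor points of the tone mapping curve matched to the brightness scene
    tone_curve_anchors: List[Tuple[float, float]] = field(default_factory=list)
```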
• it can be understood that the preset brightness in the dynamic metadata is the same for the different brightness scenes; for example, the preset brightness included in the first dynamic metadata is the same as the preset brightness included in the second dynamic metadata.
  • the first device uses the preview image sequence to display the HDR10 video, and uses the recorded image sequence and dynamic metadata to encode the first HDR10+ video.
• the HDR10 video can be used for preview display on the first device, for example, the HDR10 video can be displayed on the display screen of the first device; the first HDR10+ video can be used for video recording on the first device, for example, the first device can send the recorded image sequence and the dynamic metadata to the video encoder according to the timestamp (or according to an identifier used to indicate that the recorded image sequence and the dynamic metadata belong to a pair of data), and encode them to obtain the first HDR10+ video; the first HDR10+ video can be saved in the first device, and then the first HDR10+ video can also be displayed on the first device (or the second device) based on the user's playback operation.
  • the first device can match different dynamic metadata for scenes of different brightness corresponding to the multi-frame images acquired by the camera, and use the different dynamic metadata to adjust the multi-frame images respectively to obtain HDR10+ video.
  • the first device can save the HDR10+ video in the gallery application.
  • FIG. 10 is a schematic diagram of an interface for viewing an HDR10+ video provided in an embodiment of the present application.
• when the first device receives the user's operation of opening the gallery application, the first device may display the interface shown in a in FIG. 10, which may include the video 1001 and pictures such as picture 1003. Wherein, a logo 1004 for indicating that the video 1001 is an HDR10+ video may be displayed around the video 1001, and the logo 1004 may be displayed as HDR.
• when the first device receives the user's operation triggering the video 1001, the first device may display the interface shown in b in FIG. 10.
• the interface shown in b in FIG. 10 may include: an identifier 1005 for indicating that the video 1001 is an HDR10+ video, a control for viewing more information about the video, a control for sharing the video, a control for favoriting the video, a control for editing the video, a control for deleting the video, a control for viewing more features, and the like.
• when the first device receives the user's operation of sharing the HDR10+ video, the first device can share the HDR10+ video to the second device.
• the user's operation to share the HDR10+ video can be: a sharing operation on the HDR10+ video through Bluetooth, or a sharing operation on the HDR10+ video through a network such as WLAN, or the user can also share the HDR10+ video to other devices through device sharing; the sharing operation is not specifically limited in this embodiment of the application.
• the sharing operation may be "Honor Share", that is, a sharing method in which device scanning is performed via Bluetooth and data transmission is performed using WLAN.
  • FIG. 11 is a schematic diagram of a device sharing interface provided by an embodiment of the present application.
• when the first device receives the user's operation to share the HDR10+ video, for example, the user's operation on the control for sharing the video in the interface shown in b in FIG. 10, the first device may display the interface shown in a in FIG. 11; the interface may include prompt information 1101, and the prompt information 1101 may include a control 1102 for sharing the HDR10+ video to the second device. It can be understood that the prompt information 1101 may also include content such as controls for sharing the HDR10+ video to other applications.
  • a logo 1104 for indicating the HDR10+ video may be displayed around the HDR10+ video, for example, the logo may be HDR.
• when the first device receives the user's operation on the control 1102, the first device can share the HDR10+ video to the second device, and the second device can display the interface shown in b in FIG. 11.
  • prompt information 1103 may be displayed on the interface of the second device, and the prompt information 1103 is used to indicate that the received HDR10+ video is generated based on dynamic metadata.
• for example, the prompt information 1103 may be displayed as: the first device wants to share with you an HDR10+ video (1.65 GB) containing dynamic metadata, whether to accept it; the prompt information may include: a rejection control, an acceptance control, and the like.
  • the interface shown in b in FIG. 11 may also include file management application controls, email application controls, music application controls, and computer application controls.
  • a logo 1105 for indicating the HDR10+ video may be displayed around the HDR10+ video, for example, the logo may be HDR.
• when the second device receives the user's operation on the acceptance control, the second device can save the HDR10+ video.
  • the first device can share the HDR10+ video to the second device through device sharing, so that the second device can play the HDR10+ video on this device.
• when the second device receives the user's operation to play the HDR10+ video, the second device may display prompt information, and the prompt information is used to indicate that the second device will play the HDR10+ video based on dynamic metadata.
  • FIG. 12 is a schematic diagram of an interface for displaying prompt information provided by an embodiment of the present application.
• the second device may display the interface shown in FIG. 12, which may include: prompt information 1201, a confirmation control 1202, a control for ending the playing of the HDR10+ video, a control for opening the gallery application, a control for viewing more features, and so on.
  • the prompt information 1201 is used to indicate the playing form of the current video, for example, the prompt information 1201 may display that the current device will perform HDR video playback based on dynamic metadata.
  • a logo 1203 for indicating the HDR10+ video may be displayed around the HDR10+ video in the interface shown in FIG. 12 , for example, the logo may be HDR.
  • the second device may also perform video playback based on static metadata, and at this time, the prompt information 1201 may not be displayed on the second device.
• when the second device receives the user's operation on the confirmation control 1202, the second device can analyze the HDR10+ video and play it.
  • the second device may analyze and play the HDR10+ video based on the video processing flow in the embodiment corresponding to FIG. 13 .
  • FIG. 13 is a schematic flowchart of playing an HDR10+ video provided by an embodiment of the present application.
  • the process of playing the HDR10+ video may include: video decoding processing 1301 .
  • the second device may decode the HDR10+ video into dynamic metadata and a third image sequence based on the video standard of SMPTE ST 2094-application4.
• when the second device receives the first HDR10+ video sent by the first device, the second device can determine the video standard of the first HDR10+ video, for example, whether the first HDR10+ video supports SMPTE ST 2094-application4 or SMPTE ST 2086, etc.; further, when the second device supports the SMPTE ST 2094-application4 video standard, the second device can obtain the dynamic metadata and the third image sequence by decoding the first HDR10+ video (the third image sequence can be an HDR still image); or, when the second device supports the SMPTE ST 2086 video standard, the second device can obtain the static metadata and the third image sequence by decoding the first HDR10+ video.
• for the second device, the process of playing the HDR video may include steps such as: tone mapping processing 1302 based on dynamic metadata, user interface (user interface, UI) tone mapping processing 1303, superimposition processing 1304 of the dynamic HDR image and the HDR UI, and display-screen-based tone mapping processing 1305.
• the second device may perform tone mapping on each frame of image in the third image sequence according to its corresponding dynamic metadata based on SMPTE ST 2094-application4, to obtain a tone-mapped image sequence; further, the second device may also adjust the image brightness in the tone-mapped image sequence based on the peak brightness that the hardware of the second device can support, for example, based on the proportional relationship between the reference brightness of the dynamic metadata (for example, 400 nit) and the peak brightness that the hardware of the second device can support (for example, 500 nit), to obtain a dynamic HDR image sequence (see the sketch below).
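• An illustrative sketch of those two playback steps: per-frame tone mapping driven by the frame's dynamic metadata, followed by scaling with the ratio between the panel's peak brightness and the metadata's reference brightness (for example 500 nit / 400 nit). The tone curve here is a simple placeholder, not the curve carried in real metadata.

```python
import numpy as np

def tone_map_frame(frame_nits: np.ndarray, reference_nit: float) -> np.ndarray:
    """Compress frame luminance so it fits the metadata's reference brightness."""
    peak = max(float(frame_nits.max()), 1e-6)
    return reference_nit * (frame_nits / peak) ** 0.7   # placeholder curve

def scale_to_panel(frame_nits: np.ndarray, reference_nit: float,
                   panel_peak_nit: float) -> np.ndarray:
    """Apply the reference-to-panel ratio, e.g. 500 / 400 = 1.25."""
    return frame_nits * (panel_peak_nit / reference_nit)
```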
  • the second device may adjust the tone of a standard dynamic range (standard dynamic range, SDR) UI icon based on a preset tone mapping rule to obtain an HDR UI.
  • the second device may respectively superimpose each frame of the dynamic HDR image sequence with the HDR UI to obtain a mixed HDR image sequence.
  • the second device may process the images in the mixed HDR image sequence into an image sequence in a display color space based on display screen tone mapping, thereby obtaining an HDR10+ video.
  • the second device may use the third image sequence and the static metadata to generate an HDR10 video. It can be understood that the second device may also obtain the HDR10 video based on the static metadata and the third image sequence based on the embodiment corresponding to FIG. 13 , which will not be repeated here.
  • the second device can decode and play the HDR10+ video sent by the first device.
  • FIG. 14 is a schematic flowchart of another video processing method provided in the embodiments of the present application.
  • the video processing method may include the following steps:
  • the first device receives an operation of starting shooting in a movie mode.
  • the movie mode is a mode for recording high dynamic range HDR video;
• the operation of starting shooting may be the operation on the control 503 for starting shooting in the embodiment corresponding to FIG. 5, or the operation of starting shooting may be the operation on the control 602 for starting shooting in the embodiment corresponding to FIG. 6.
  • the first device acquires a first image sequence based on the camera.
  • the first image sequence corresponds to the first brightness scene
  • the method for determining the first brightness scene may refer to the description in the embodiment corresponding to FIG. 8 , which will not be repeated here.
  • the first device encodes the first HDR video based on the first image sequence and the first dynamic metadata corresponding to the first brightness scene.
  • the first HDR video may be the first HDR10+ video described in the embodiment of the present application; the first dynamic metadata includes preset brightness.
  • the second device acquires the first HDR video from the first device.
  • the second device may acquire the first HDR video from the first device based on the embodiment corresponding to FIG. 11 .
• the first HDR video can be compatible with both dynamic metadata and static metadata, so that a second device that supports dynamic metadata can use the dynamic metadata to play the content in the first HDR video, and a second device that supports static metadata can also use the static metadata to play the content in the first HDR video.
  • the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain the second HDR video.
  • the second HDR video may be the second HDR10+ video described in the embodiment of this application.
• for example, when the preset brightness is 400 nit, the second device may adjust the images in the first HDR video based on the 400 nit, so that the brightness of the images in the first HDR video is maintained at a maximum of 400 nit.
• for example, when the dynamic metadata corresponding to the first HDR video indicates that the preset brightness of the first HDR video is 400 nit, and the peak brightness of the second device is 700 nit, the second device can determine the proportional relationship between 400 nit and 700 nit, and adaptively increase the brightness of the images in the first HDR video according to this proportional relationship, so that the 400-nit images in the first HDR video can be properly displayed on the 700-nit display screen of the second device.
  • the second device plays the second HDR video.
  • the second device may play the second HDR video based on the embodiment corresponding to FIG. 12 .
  • the first device can match the dynamic metadata for the first image sequence acquired based on the camera and the brightness scene corresponding to the first image sequence, and use the dynamic metadata to adjust the first image sequence to obtain the first HDR video , and send the first HDR video to the second device, so that the second device can perform brightness mapping on the first HDR video based on the preset brightness indicated in the dynamic metadata, and display video content with appropriate brightness.
• in a possible implementation, S1406, in which the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain the second HDR video, includes: the second device determines a brightness ratio, where the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; and the second device adjusts the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video (a minimal sketch follows).
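• A minimal sketch of that implementation of S1406, assuming luminance is represented in nits: the brightness ratio is the second device's peak brightness divided by the preset brightness from the dynamic metadata, and the frame is scaled by that ratio.

```python
def adjust_brightness(frame_nits, peak_nit: float, preset_nit: float):
    """Scale a frame by the brightness ratio, e.g. 700 / 400 = 1.75."""
    ratio = peak_nit / preset_nit
    return frame_nits * ratio
```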
• the first device continues to acquire a second image sequence based on the camera; wherein, the second image sequence corresponds to a second brightness scene, and the first brightness scene is different from the second brightness scene; the first device encoding the first HDR video based on the first image sequence and the first dynamic metadata corresponding to the first brightness scene includes: the first device encodes, based on the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first brightness scene, and the second dynamic metadata corresponding to the second brightness scene, to obtain the first HDR video.
• the method further includes: the first device performs image pre-processing on the first image sequence to obtain the first image sequence after image pre-processing; the first device performs gamma correction processing on the first image sequence after image pre-processing to obtain the first image sequence after gamma correction processing; the first device performs 3D lookup table processing on the first image sequence after gamma correction processing to obtain the first image sequence after 3D lookup table processing; wherein, the first image sequence after 3D lookup table processing includes the first static metadata corresponding to the first image sequence; the first device encoding the first HDR video based on the first image sequence and the first dynamic metadata corresponding to the first brightness scene includes: the first device encodes the first image sequence after 3D lookup table processing and the first dynamic metadata corresponding to the first brightness scene to obtain the first HDR video.
  • the first HDR video includes first static metadata and first dynamic metadata.
• the method further includes: when the second device determines that it supports processing the first static metadata, the second device decodes the second HDR video into the first image sequence and the first static metadata; the second device encodes a third HDR video based on the first image sequence and the first static metadata; the second HDR video is different from the third HDR video.
  • the type of the first HDR video is HDR10+ video
  • the type of the second HDR video is HDR10+ video
  • the type of the third HDR video is HDR10 video.
• S1401 includes: the first device receives an operation for turning on the movie mode; in response to the operation for turning on the movie mode, the first device displays a first interface; the first interface includes: a control for recording the HDR video, and a control for starting shooting; when the state of the control for recording the HDR video is off, the first device receives an operation for turning on the control for recording the HDR video; in response to the operation for turning on the control for recording the HDR video, the first device displays a second interface; the second interface includes: prompt information indicating that the 4K HDR10+ mode has been turned on; when the state of the control for recording the HDR video is on, the first device receives an operation on the control for starting shooting.
• the operation for turning on the movie mode may be the operation on the control 501 for starting the movie mode in the interface shown in a in FIG. 5;
• the first interface may be the interface shown in b in FIG. 5; the control for recording the HDR video can be the 4K HDR function control 502 shown in b in FIG. 5, and the control for starting shooting can be the control 503 for starting shooting shown in b in FIG. 5; the second interface can be the interface shown in c in FIG. 5; the prompt information used to indicate that the 4K HDR10+ mode has been turned on may be the prompt information 505 shown in c in FIG. 5.
• the method further includes: when the state of the control for recording the HDR video is on, the first device receives an operation for turning off the control for recording the HDR video; in response to the operation for turning off the control for recording the HDR video, the first device displays a third interface; the third interface includes: prompt information indicating that the 4K HDR10+ mode is turned off.
  • the third interface may be the interface shown in d in FIG. 5
  • the prompt information for indicating that the 4K HDR10+ mode is closed may be the prompt information 506 shown in d in FIG. 5 .
• the method further includes: the first device receives an operation of turning on the movie mode for the first time; in response to the operation of turning on the movie mode for the first time, the first device displays a fourth interface; the fourth interface includes: a control for recording the HDR video, and prompt information indicating that 4K HDR10+ video will be recorded after the control for recording the HDR video is turned on.
  • the fourth interface may be the interface shown in b in FIG. 5
• the prompt information indicating that 4K HDR10+ video will be recorded may be the prompt information 504 shown in b in FIG. 5.
• the first device receiving the operation of starting shooting in the movie mode includes: the first device receives the operation for turning on the movie mode; in response to the operation of turning on the movie mode, the first device displays a fifth interface; the fifth interface includes: a control for viewing the setting items corresponding to the first application, and a control for starting shooting; the first device receives an operation on the control for viewing the setting items corresponding to the first application; in response to the operation on the control for viewing the setting items corresponding to the first application, the first device displays a sixth interface; the sixth interface includes: a first control for recording video with 10-bit HDR in movie mode and switching the video to 4K; when the state of the first control is on, the first device receives an operation on the control for starting shooting.
• the fifth interface may be the interface shown in a in FIG. 6; the control for viewing the setting items corresponding to the first application may be the setting control 601 shown in a in FIG. 6, and the control for starting shooting can be the control 602 for starting shooting shown in a in FIG. 6; the sixth interface can be the interface shown in b in FIG. 6, and the first control can be the movie HDR function control 603 shown in b in FIG. 6.
  • the method further includes: the first device receives an operation on a control for viewing the function details in the first application; in response to the operation on the control for viewing the function details in the first application, the first device displays a seventh interface, which includes the function details corresponding to the movie mode, where the function details of the movie mode indicate that 4K HDR10+ video can be recorded in movie mode.
  • the seventh interface may be the interface shown in c in FIG. 7, and the function details corresponding to the movie mode may be the detailed description 703 corresponding to the movie mode shown in c in FIG. 7.
  • the method further includes: the first device receives an operation for opening a second application; in response to the operation of opening the second application, the first device displays an eighth interface, which includes the first HDR video and an identifier corresponding to the first HDR video, where the identifier indicates the type of the first HDR video; the first device receives an operation on the first HDR video; and in response to the operation on the first HDR video, the first device displays a ninth interface, which includes the identifier.
  • the second application may be the gallery application in the embodiments of this application;
  • the eighth interface may be the interface shown in a in FIG. 10, and the identifier may be the identifier 1004 shown in a in FIG. 10;
  • the ninth interface may be the interface shown in b in FIG. 10, and the identifier may be the identifier 1005 shown in b in FIG. 10.
  • the method further includes: the second device displays a tenth interface, which includes prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata, a control for allowing reception of the first HDR video, and a control for refusing reception of the first HDR video; the second device receives an operation on the control for allowing reception of the first HDR video; and in response to the operation on the control for allowing reception of the first HDR video, the second device displays an eleventh interface, which includes prompt information indicating that the first HDR video is played based on dynamic metadata.
  • the tenth interface may be the interface shown in b in FIG. 11, and the prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata may be the prompt information 1103 shown in b in FIG. 11;
  • the eleventh interface may be the interface shown in FIG. 12, and the prompt information indicating that the first HDR video is played based on dynamic metadata may be the prompt information 1201 in FIG. 12.
  • Figure 15 is a schematic structural diagram of a video processing apparatus according to an embodiment of this application.
  • the video processing apparatus may be the terminal device in the embodiments of this application, or a chip or chip system within the terminal device.
  • the video processing apparatus may be an apparatus in the first device, or an apparatus in the second device.
  • the video processing apparatus 150 may be used in a communication device, a circuit, a hardware component, or a chip; the video processing apparatus includes: a display unit 1501, a processing unit 1502, a communication unit 1503, and the like.
  • the display unit 1501 is configured to support the display steps performed in the video processing method;
  • the processing unit 1502 is configured to support the information processing steps performed by the video processing apparatus.
  • the processing unit 1502 and the display unit 1501 may be integrated, and the processing unit 1502 and the display unit 1501 may communicate.
  • the video processing apparatus may further include: a storage unit 1504 .
  • the storage unit 1504 may include one or more memories, and the memories may be devices used to store programs or data in one or more devices and circuits.
  • the storage unit 1504 may exist independently, and is connected to the processing unit 1502 through a communication bus.
  • the storage unit 1504 can also be integrated with the processing unit 1502 .
  • the storage unit 1504 can store the computer-executable instructions of the method of the terminal device, so that the processing unit 1502 executes the method of the terminal device in the above embodiments.
  • the storage unit 1504 may be a register, a cache, or a random access memory (random access memory, RAM), etc., and the storage unit 1504 may be integrated with the processing unit 1502.
  • the storage unit 1504 may be a read-only memory (read-only memory, ROM) or other types of static storage devices that can store static information and instructions, and the storage unit 1504 may be independent from the processing unit 1502.
  • the video processing apparatus may further include: a communication unit 1503.
  • the communication unit 1503 is configured to support interaction between the video processing apparatus and other devices.
  • when the video processing apparatus is a terminal device, the communication unit 1503 may be a communication interface or an interface circuit.
  • when the video processing apparatus is a chip or chip system within a terminal device, the communication unit 1503 may be a communication interface.
  • the communication interface may be an input/output interface, a pin, or a circuit.
  • the apparatus of this embodiment can correspondingly perform the steps of the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
  • FIG. 16 is a schematic diagram of the hardware structure of another terminal device according to an embodiment of this application. As shown in FIG. 16, the terminal device includes a processor 1601, communication lines 1604, and at least one communication interface (communication interface 1603 is used as an example in FIG. 16 for illustration).
  • the processor 1601 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solution of this application.
  • Communication lines 1604 may include circuitry that communicates information between the components described above.
  • the communication interface 1603 uses any device such as a transceiver for communicating with other devices or communication networks, such as Ethernet, wireless local area networks (wireless local area networks, WLAN) and so on.
  • the terminal device may also include a memory 1602.
  • the memory 1602 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, or an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto.
  • the memory may exist independently and be connected to the processor through the communication lines 1604, or may be integrated with the processor.
  • the memory 1602 is used to store computer-executed instructions for implementing the solution of the present application, and the execution is controlled by the processor 1601 .
  • the processor 1601 is configured to execute computer-executed instructions stored in the memory 1602, so as to implement the method provided in the embodiment of the present application.
  • the computer-executed instructions in the embodiments of the present application may also be referred to as application program codes, which is not specifically limited in the embodiments of the present application.
  • the processor 1601 may include one or more CPUs, for example, CPU0 and CPU1 in FIG. 16 .
  • a terminal device may include multiple processors, for example, processor 1601 and processor 1605 in FIG. 16.
  • each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor.
  • a processor herein may refer to one or more devices, circuits, and/or processing cores for processing data (eg, computer program instructions).
  • FIG. 17 is a schematic structural diagram of a chip provided by an embodiment of the present application.
  • the chip 170 includes one or more (including two) processors 1720 and a communication interface 1730.
  • in some implementations, the memory 1740 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
  • the memory 1740 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1720 .
  • a part of the memory 1740 may also include a non-volatile random access memory (non-volatile random access memory, NVRAM).
  • the memory 1740 , the communication interface 1730 and the processor 1720 are coupled together through the bus system 1710 .
  • the bus system 1710 may include not only a data bus, but also a power bus, a control bus, and a status signal bus.
  • the various buses are labeled bus system 1710 in FIG. 17 .
  • the methods described in the foregoing embodiments of the present application may be applied to the processor 1720 or implemented by the processor 1720 .
  • the processor 1720 may be an integrated circuit chip with signal processing capability.
  • each step of the above method may be implemented by an integrated logic circuit of hardware in the processor 1720 or instructions in the form of software.
  • the above-mentioned processor 1720 may be a general-purpose processor (for example, a microprocessor or a conventional processor), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate, a transistor logic device, or a discrete hardware component; the processor 1720 can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention.
  • the steps of the method disclosed in connection with the embodiments of the present application may be directly implemented by a hardware decoding processor, or implemented by a combination of hardware and software modules in the decoding processor.
  • the software module may be located in a mature storage medium in the field such as random access memory, read-only memory, programmable read-only memory, or electrically erasable programmable read only memory (EEPROM).
  • the storage medium is located in the memory 1740, and the processor 1720 reads the information in the memory 1740, and completes the steps of the above method in combination with its hardware.
  • the instructions stored in the memory for execution by the processor may be implemented in the form of computer program products.
  • the computer program product may be written in the memory in advance, or may be downloaded and installed in the memory in the form of software.
  • a computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on the computer, the processes or functions according to the embodiments of the present application will be generated in whole or in part.
  • the computer can be a general purpose computer, special purpose computer, computer network, or other programmable apparatus.
  • computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (e.g., coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (e.g., infrared, radio, microwave) means.
  • a computer-readable storage medium may be any available medium that a computer can store, or a data storage device such as a server or data center integrating one or more available media.
  • available media may include magnetic media (e.g., floppy disks, hard disks, or tapes), optical media (e.g., digital versatile discs (DVD)), or semiconductor media (e.g., solid state disks (SSD)), etc.
  • Computer-readable media may include computer storage media and communication media, and may include any medium that can transfer a computer program from one place to another.
  • a storage media may be any target media that can be accessed by a computer.
  • the computer-readable medium may include compact disc read-only memory (compact disc read-only memory, CD-ROM), RAM, ROM, EEPROM or other optical disc storage; the computer-readable medium may include a magnetic disk memory or other disk storage devices.
  • any connection line may also properly be termed a computer-readable medium. For example, if software is transmitted from a website, server, or other remote source using coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave, then the coaxial cable, fiber-optic cable, twisted pair, DSL, or those wireless technologies are included in the definition of medium.
  • disk and disc, as used herein, include compact disc (CD), laser disc, optical disc, digital versatile disc (DVD), floppy disk, and Blu-ray disc, where disks usually reproduce data magnetically, while discs reproduce data optically with lasers.


Abstract

Embodiments of the present application provide a video processing method and apparatus. The method includes: a first device receives an operation for starting shooting in movie mode; in response to the operation for starting shooting, the first device obtains a first image sequence by means of a camera; the first device encodes the first image sequence and first dynamic metadata corresponding to a first luminance scene to obtain a first HDR video; a second device obtains the first HDR video from the first device; the second device adjusts the brightness of the first HDR video based on a preset brightness to obtain a second HDR video; and the second device plays the second HDR video. In this way, the first device can match dynamic metadata to the luminance scene and encode an HDR video, and send the HDR video to the second device, so that the second device can perform brightness mapping on the HDR video based on the preset brightness and display video content with suitable brightness.

Description

Video processing method and apparatus
This application claims priority to Chinese Patent Application No. 202210193750.X, filed with the China National Intellectual Property Administration on February 28, 2022 and entitled "Video processing method and apparatus", which is incorporated herein by reference in its entirety.
Technical Field
This application relates to the field of terminal technologies, and in particular, to a video processing method and apparatus.
Background
With the popularization and development of the Internet, people's demands on the functionality of terminal devices are becoming increasingly diverse. For example, a user can use the movie mode in the camera application of a terminal device to shoot high dynamic range (HDR) video.
Typically, a terminal device can obtain an HDR video by processing multiple image frames captured by its camera. Such an HDR video may be configured according to static metadata; for example, the HDR transfer curve, the perceptual quantization (PQ) curve, is fixedly mapped according to an absolute luminance, where the absolute luminance may be the reference display brightness of the terminal device's display, for example 1000 nits (nit).
However, when such an HDR video is displayed on a device whose peak brightness cannot reach 1000 nit, highlight information is lost, which affects the display effect of the HDR video.
Summary
Embodiments of the present application provide a video processing method and apparatus, so that a first device can match different dynamic metadata to the different luminance scenes respectively corresponding to multiple frames captured by a camera, adjust the frames with the respective dynamic metadata to obtain an HDR video, and send the HDR video to a second device, so that the second device can perform brightness mapping on the HDR video based on a preset brightness of the HDR video and display video content with suitable brightness.
According to a first aspect, an embodiment of this application provides a video processing method, applied to a video processing system. The video processing system includes a first device and a second device. The method includes: the first device receives an operation for starting shooting in movie mode, where the movie mode is a mode for recording high dynamic range HDR video; in response to the operation for starting shooting, the first device obtains a first image sequence by means of a camera, where the first image sequence corresponds to a first luminance scene; the first device encodes the first image sequence and first dynamic metadata corresponding to the first luminance scene to obtain a first HDR video, where the first dynamic metadata includes a preset brightness; the second device obtains the first HDR video from the first device; the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain a second HDR video; and the second device plays the second HDR video. In this way, the first device can match different dynamic metadata for the different luminance scenes respectively corresponding to the multiple frames captured by the camera, adjust the frames with the respective dynamic metadata to obtain an HDR video, and send the HDR video to the second device, so that the second device can perform brightness mapping on the HDR video based on the preset brightness of the HDR video and display video content with suitable brightness.
In a possible implementation, the second device adjusting the brightness of the first HDR video based on the preset brightness to obtain the second HDR video includes: the second device determines a brightness ratio, where the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; and the second device adjusts the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video. In this way, the second device can also adjust the brightness of the first HDR video according to the peak brightness its hardware can support, so that the adjusted second HDR video has a better playback effect.
In a possible implementation, the method further includes: the first device continues to obtain a second image sequence by means of the camera, where the second image sequence corresponds to a second luminance scene, and the first luminance scene differs from the second luminance scene. The first device encoding the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video includes: the first device encodes the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first luminance scene, and second dynamic metadata corresponding to the second luminance scene to obtain the first HDR video. In this way, the first device can match corresponding dynamic metadata for different luminance scenes and encode an HDR video based on the different dynamic metadata.
In a possible implementation, before the first device encodes the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video, the method further includes: the first device performs image pre-processing on the first image sequence to obtain a pre-processed first image sequence; the first device performs gamma correction on the pre-processed first image sequence to obtain a gamma-corrected first image sequence; and the first device performs 3D look-up table processing on the gamma-corrected first image sequence to obtain a 3D-look-up-table-processed first image sequence, where the 3D-look-up-table-processed first image sequence includes first static metadata corresponding to the first image sequence. The first device encoding the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video includes: the first device encodes the 3D-look-up-table-processed first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video. In this way, the first device can obtain an HDR video with a good picture effect based on the image pre-processing and image post-processing of the first image sequence.
In a possible implementation, the first HDR video includes the first static metadata and the first dynamic metadata.
In a possible implementation, the method further includes: when the second device determines that it supports processing the first static metadata, the second device decodes the second HDR video into the first image sequence and the first static metadata; and the second device encodes the first image sequence and the first static metadata to obtain a third HDR video, where the second HDR video differs from the third HDR video. In this way, the second device is compatible with both dynamic metadata and static metadata, and a second device that does not support processing dynamic metadata can generate an HDR video based on the static metadata.
In a possible implementation, the type of the first HDR video is HDR10+ video, the type of the second HDR video is HDR10+ video, and the type of the third HDR video is HDR10 video.
In a possible implementation, the first device receiving the operation for starting shooting in movie mode includes: the first device receives an operation for opening the movie mode; in response to the operation for opening the movie mode, the first device displays a first interface, where the first interface includes a control for recording HDR video and a control for starting shooting; when the state of the control for recording HDR video is off, the first device receives an operation for turning on the control for recording HDR video; in response to the operation on the control for recording HDR video, the first device displays a second interface, where the second interface includes prompt information indicating that the 4K HDR10+ mode has been turned on; and when the state of the control for recording HDR video is on, the first device receives an operation on the control for starting shooting. In this way, the first device can determine, based on the user's flexible operation of the control for recording HDR video, whether 4K HDR video needs to be shot.
In a possible implementation, the method further includes: when the state of the control for recording HDR video is on, the first device receives an operation for turning off the control for recording HDR video; in response to the operation on the control for recording HDR video, the first device displays a third interface, where the third interface includes prompt information indicating that the 4K HDR10+ mode has been turned off. In this way, the first device can determine from the prompt information whether the 4K HDR10+ mode is currently on, improving the user's experience of the video recording function.
In a possible implementation, the method further includes: the first device receives an operation of opening the movie mode for the first time; in response to the operation of opening the movie mode for the first time, the first device displays a fourth interface, where the fourth interface includes the control for recording HDR video and prompt information indicating that 4K HDR10+ video will be recorded after the control for recording HDR video is turned on. In this way, the first time the user opens the movie mode, the user can determine how to turn on the 4K HDR10+ mode under the guidance of the prompt information, improving the user's experience of the video recording function.
In a possible implementation, the first device receiving the operation for starting shooting in movie mode includes: the first device receives an operation for opening the movie mode; in response to the operation for opening the movie mode, the first device displays a fifth interface, where the fifth interface includes a control for viewing setting items corresponding to a first application and a control for starting shooting; the first device receives an operation on the control for viewing the setting items corresponding to the first application; in response to the operation on the control for viewing the setting items corresponding to the first application, the first device displays a sixth interface, where the sixth interface includes a first control for recording video with 10-bit HDR in movie mode and switching the video to 4K; and when the state of the first control is on, the first device receives an operation on the control for starting shooting. In this way, the user can flexibly control the movie HDR function control in the settings according to shooting needs, thereby recording HDR10+ video. The first application may be a camera application.
In a possible implementation, the method further includes: the first device receives an operation on a control for viewing function details in the first application; in response to the operation on the control for viewing function details in the first application, the first device displays a seventh interface, where the seventh interface includes function details corresponding to the movie mode, and the function details of the movie mode indicate that 4K HDR10+ video can be recorded in movie mode. In this way, the user can learn about the various functions in the camera application from the function details corresponding to the movie mode, improving the user's experience with the camera application.
In a possible implementation, the method further includes: the first device receives an operation for opening a second application; in response to the operation for opening the second application, the first device displays an eighth interface, where the eighth interface includes the first HDR video and an identifier corresponding to the first HDR video, and the identifier indicates the type of the first HDR video; the first device receives an operation on the first HDR video; and in response to the operation on the first HDR video, the first device displays a ninth interface, where the ninth interface includes the identifier. In this way, the user can accurately find the HDR10+ video in the gallery application based on the identifier, making it more convenient to view HDR10+ video. The second application may be a gallery application.
In a possible implementation, after the second device obtains the first HDR video from the first device, the method further includes: the second device displays a tenth interface, where the tenth interface includes prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata, a control for allowing reception of the first HDR video, and a control for refusing reception of the first HDR video; the second device receives an operation on the control for allowing reception of the first HDR video; and in response to the operation on the control for allowing reception of the first HDR video, the second device displays an eleventh interface, where the eleventh interface includes prompt information indicating that the first HDR video is played based on dynamic metadata. In this way, the second device can decode and play the HDR10+ video sent by the first device.
According to a second aspect, an embodiment of this application provides a video processing method, applied to a first device. The method includes: the first device receives an operation for starting shooting in movie mode, where the movie mode is a mode for recording high dynamic range HDR video; in response to the operation for starting shooting, the first device obtains a first image sequence by means of a camera, where the first image sequence corresponds to a first luminance scene; the first device encodes the first image sequence and first dynamic metadata corresponding to the first luminance scene to obtain a first HDR video, where the first dynamic metadata includes a preset brightness; and the first device sends the first HDR video to a second device. In this way, the first device can match different dynamic metadata for the different luminance scenes respectively corresponding to the multiple frames captured by the camera and adjust the frames with the respective dynamic metadata to obtain an HDR video.
In a possible implementation, the method further includes: the first device continues to obtain a second image sequence by means of the camera, where the second image sequence corresponds to a second luminance scene, and the first luminance scene differs from the second luminance scene. The first device encoding the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video includes: the first device encodes the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first luminance scene, and the second dynamic metadata corresponding to the second luminance scene to obtain the first HDR video. In this way, the first device can match corresponding dynamic metadata for different luminance scenes and encode an HDR video based on the different dynamic metadata.
In a possible implementation, before the first device encodes the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video, the method further includes: the first device performs image pre-processing on the first image sequence to obtain a pre-processed first image sequence; the first device performs gamma correction on the pre-processed first image sequence to obtain a gamma-corrected first image sequence; and the first device performs 3D look-up table processing on the gamma-corrected first image sequence to obtain a 3D-look-up-table-processed first image sequence, where the 3D-look-up-table-processed first image sequence includes first static metadata corresponding to the first image sequence. The first device encoding the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video includes: the first device encodes the 3D-look-up-table-processed first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video. In this way, the first device can obtain an HDR video with a good picture effect based on the image pre-processing and image post-processing of the first image sequence.
In a possible implementation, the first HDR video includes the first static metadata and the first dynamic metadata.
According to a third aspect, an embodiment of this application provides a video processing method, applied to a second device. The method includes: the second device obtains a first HDR video from a first device, where the first HDR video includes first dynamic metadata and a first image sequence, and the first dynamic metadata includes a preset brightness; the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain a second HDR video; and the second device plays the second HDR video. In this way, the second device can receive the HDR video from the first device, perform brightness mapping on the HDR video based on its preset brightness, and display video content with suitable brightness.
In a possible implementation, the second device adjusting the brightness of the first HDR video based on the preset brightness to obtain the second HDR video includes: the second device determines a brightness ratio, where the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; and the second device adjusts the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video. In this way, the second device can also adjust the brightness of the first HDR video according to the peak brightness its hardware can support, so that the adjusted second HDR video has a better playback effect.
In a possible implementation, the first HDR video includes first static metadata and the first dynamic metadata.
In a possible implementation, the method further includes: when the second device determines that it supports processing the first static metadata, the second device decodes the second HDR video into the first image sequence and the first static metadata; and the second device encodes the first image sequence and the first static metadata to obtain a third HDR video, where the second HDR video differs from the third HDR video.
In a possible implementation, the type of the first HDR video is HDR10+ video, the type of the second HDR video is HDR10+ video, and the type of the third HDR video is HDR10 video.
According to a fourth aspect, an embodiment of this application provides a video processing apparatus, including: a processing unit of a first device, configured to receive an operation for starting shooting in movie mode, where the movie mode is a mode for recording high dynamic range HDR video; in response to the operation for starting shooting, the processing unit of the first device is configured to obtain a first image sequence by means of a camera, where the first image sequence corresponds to a first luminance scene; the processing unit of the first device is configured to encode the first image sequence and first dynamic metadata corresponding to the first luminance scene to obtain a first HDR video, where the first dynamic metadata includes a preset brightness; a communication unit of a second device is configured to obtain the first HDR video from the first device; the processing unit of the second device is configured to adjust the brightness of the first HDR video based on the preset brightness to obtain a second HDR video; and the processing unit of the second device is configured to play the second HDR video.
In a possible implementation, the processing unit of the second device is specifically configured to determine a brightness ratio, where the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; and the processing unit of the second device is specifically configured to adjust the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video.
In a possible implementation, the processing unit of the first device is further configured to continue obtaining a second image sequence by means of the camera, where the second image sequence corresponds to a second luminance scene, and the first luminance scene differs from the second luminance scene; the processing unit of the first device is further configured to encode the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first luminance scene, and the second dynamic metadata corresponding to the second luminance scene to obtain the first HDR video.
In a possible implementation, the processing unit of the first device is further configured to: perform image pre-processing on the first image sequence to obtain a pre-processed first image sequence; perform gamma correction on the pre-processed first image sequence to obtain a gamma-corrected first image sequence; perform 3D look-up table processing on the gamma-corrected first image sequence to obtain a 3D-look-up-table-processed first image sequence, where the 3D-look-up-table-processed first image sequence includes the first static metadata corresponding to the first image sequence; and encode the 3D-look-up-table-processed first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video.
In a possible implementation, the first HDR video includes the first static metadata and the first dynamic metadata.
In a possible implementation, when the second device determines that it supports processing the first static metadata, the processing unit of the second device is further configured to decode the second HDR video into the first image sequence and the first static metadata; and the processing unit of the second device is further configured to encode the first image sequence and the first static metadata to obtain a third HDR video, where the second HDR video differs from the third HDR video.
In a possible implementation, the type of the first HDR video is HDR10+ video, the type of the second HDR video is HDR10+ video, and the type of the third HDR video is HDR10 video.
In a possible implementation, the processing unit of the first device is specifically configured to receive an operation for opening the movie mode; in response to the operation for opening the movie mode, the display unit of the first device is specifically configured to display a first interface, where the first interface includes a control for recording HDR video and a control for starting shooting; when the state of the control for recording HDR video is off, the processing unit of the first device is further specifically configured to receive an operation for turning on the control for recording HDR video; in response to the operation on the control for recording HDR video, the display unit of the first device is further specifically configured to display a second interface, where the second interface includes prompt information indicating that the 4K HDR10+ mode has been turned on; and when the state of the control for recording HDR video is on, the processing unit of the first device is specifically configured to receive an operation on the control for starting shooting.
In a possible implementation, when the state of the control for recording HDR video is on, the processing unit of the first device is further configured to receive an operation for turning off the control for recording HDR video; in response to the operation on the control for recording HDR video, the display unit of the first device is further configured to display a third interface, where the third interface includes prompt information indicating that the 4K HDR10+ mode has been turned off.
In a possible implementation, the processing unit of the first device is further configured to receive an operation of opening the movie mode for the first time; in response to the operation of opening the movie mode for the first time, the display unit of the first device is further configured to display a fourth interface, where the fourth interface includes the control for recording HDR video and prompt information indicating that 4K HDR10+ video will be recorded after the control for recording HDR video is turned on.
In a possible implementation, the processing unit of the first device is specifically configured to receive an operation for opening the movie mode; in response to the operation for opening the movie mode, the display unit of the first device is specifically configured to display a fifth interface, where the fifth interface includes a control for viewing setting items corresponding to a first application and a control for starting shooting; the processing unit of the first device is further specifically configured to receive an operation on the control for viewing the setting items corresponding to the first application; in response to the operation on the control for viewing the setting items corresponding to the first application, the display unit of the first device is further specifically configured to display a sixth interface, where the sixth interface includes a first control for recording video with 10-bit HDR in movie mode and switching the video to 4K; and when the state of the first control is on, the processing unit of the first device is further specifically configured to receive an operation on the control for starting shooting.
In a possible implementation, the processing unit of the first device is further configured to receive an operation on a control for viewing function details in the first application; in response to the operation on the control for viewing function details in the first application, the display unit of the first device is further configured to display a seventh interface, where the seventh interface includes function details corresponding to the movie mode, and the function details of the movie mode indicate that 4K HDR10+ video can be recorded in movie mode.
In a possible implementation, the processing unit of the first device is further configured to receive an operation for opening a second application; in response to the operation for opening the second application, the display unit of the first device is further configured to display an eighth interface, where the eighth interface includes the first HDR video and an identifier corresponding to the first HDR video, and the identifier indicates the type of the first HDR video; the processing unit of the first device is further configured to receive an operation on the first HDR video; and in response to the operation on the first HDR video, the display unit of the first device is further configured to display a ninth interface, where the ninth interface includes the identifier.
In a possible implementation, the display unit of the second device is further configured to display a tenth interface, where the tenth interface includes prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata, a control for allowing reception of the first HDR video, and a control for refusing reception of the first HDR video; the processing unit of the second device is further configured to receive an operation on the control for allowing reception of the first HDR video; and in response to the operation on the control for allowing reception of the first HDR video, the processing unit of the second device is further configured to display an eleventh interface, where the eleventh interface includes prompt information indicating that the first HDR video is played based on dynamic metadata.
According to a fifth aspect, an embodiment of this application provides a video processing apparatus, including: a processing unit, configured to receive an operation for starting shooting in movie mode, where the movie mode is a mode for recording high dynamic range HDR video; in response to the operation for starting shooting, the processing unit is further configured to obtain a first image sequence by means of a camera, where the first image sequence corresponds to a first luminance scene; the processing unit is further configured to encode the first image sequence and first dynamic metadata corresponding to the first luminance scene to obtain a first HDR video, where the first dynamic metadata includes a preset brightness; and a communication unit, further configured to send the first HDR video to a second device.
In a possible implementation, the processing unit is further configured to continue obtaining a second image sequence by means of the camera, where the second image sequence corresponds to a second luminance scene, and the first luminance scene differs from the second luminance scene; the processing unit is further configured to encode the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first luminance scene, and the second dynamic metadata corresponding to the second luminance scene to obtain the first HDR video.
In a possible implementation, the processing unit is further specifically configured to: perform image pre-processing on the first image sequence to obtain a pre-processed first image sequence; perform gamma correction on the pre-processed first image sequence to obtain a gamma-corrected first image sequence; perform 3D look-up table processing on the gamma-corrected first image sequence to obtain a 3D-look-up-table-processed first image sequence, where the 3D-look-up-table-processed first image sequence includes the first static metadata corresponding to the first image sequence; and encode the 3D-look-up-table-processed first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video.
In a possible implementation, the first HDR video includes the first static metadata and the first dynamic metadata.
According to a sixth aspect, an embodiment of this application provides a video processing apparatus, including: a communication unit, configured to obtain a first HDR video from a first device, where the first HDR video includes first dynamic metadata and a first image sequence, and the first dynamic metadata includes a preset brightness; a processing unit, configured to adjust the brightness of the first HDR video based on the preset brightness to obtain a second HDR video; and the processing unit is further configured to play the second HDR video.
In a possible implementation, the processing unit is specifically configured to determine a brightness ratio, where the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; and the processing unit is further specifically configured to adjust the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video.
In a possible implementation, the first HDR video includes first static metadata and the first dynamic metadata.
In a possible implementation, when the second device determines that it supports processing the first static metadata, the processing unit is further configured to decode the second HDR video into the first image sequence and the first static metadata; and the processing unit is further configured to encode the first image sequence and the first static metadata to obtain a third HDR video, where the second HDR video differs from the third HDR video.
In a possible implementation, the type of the first HDR video is HDR10+ video, the type of the second HDR video is HDR10+ video, and the type of the third HDR video is HDR10 video.
According to a seventh aspect, an embodiment of this application provides a video processing apparatus, including a processor and a memory, where the memory is configured to store code instructions and the processor is configured to run the code instructions, so that a terminal device performs the video processing method described in the first aspect or any implementation of the first aspect, or the video processing method described in the second aspect or any implementation of the second aspect, or the video processing method described in the third aspect or any implementation of the third aspect.
According to an eighth aspect, an embodiment of this application provides a computer-readable storage medium storing instructions that, when executed, cause a computer to perform the video processing method described in the first aspect or any implementation of the first aspect, or the video processing method described in the second aspect or any implementation of the second aspect, or the video processing method described in the third aspect or any implementation of the third aspect.
According to a ninth aspect, a computer program product includes a computer program that, when run, causes a computer to perform the video processing method described in the first aspect or any implementation of the first aspect, or the video processing method described in the second aspect or any implementation of the second aspect, or the video processing method described in the third aspect or any implementation of the third aspect.
It should be understood that the fourth to ninth aspects of this application correspond to the technical solutions of the first to third aspects of this application, and the beneficial effects obtained by each aspect and its corresponding feasible implementations are similar and will not be repeated here.
Brief Description of the Drawings
FIG. 1 is a schematic diagram of the principles of binning and DCG according to an embodiment of this application;
FIG. 2 is a schematic diagram of a scenario according to an embodiment of this application;
FIG. 3 is a schematic structural diagram of a first device (or second device) according to an embodiment of this application;
FIG. 4 is a block diagram of the software structure of a first device according to an embodiment of this application;
FIG. 5 is a schematic diagram of an interface for starting shooting in movie mode according to an embodiment of this application;
FIG. 6 is a schematic diagram of another interface for starting shooting in movie mode according to an embodiment of this application;
FIG. 7 is a schematic diagram of an interface for viewing function details according to an embodiment of this application;
FIG. 8 is a schematic flowchart of a video processing method according to an embodiment of this application;
FIG. 9 is a schematic diagram of image sequences and luminance scenes according to an embodiment of this application;
FIG. 10 is a schematic diagram of an interface for viewing an HDR10+ video according to an embodiment of this application;
FIG. 11 is a schematic diagram of a device-sharing interface according to an embodiment of this application;
FIG. 12 is a schematic diagram of an interface displaying prompt information according to an embodiment of this application;
FIG. 13 is a schematic flowchart of playing an HDR10+ video according to an embodiment of this application;
FIG. 14 is a schematic flowchart of another video processing method according to an embodiment of this application;
FIG. 15 is a schematic structural diagram of a video processing apparatus according to an embodiment of this application;
FIG. 16 is a schematic diagram of the hardware structure of another terminal device according to an embodiment of this application;
FIG. 17 is a schematic structural diagram of a chip according to an embodiment of this application.
Detailed Description of Embodiments
This application relates to the field of photography. To facilitate understanding of the methods provided in this application, some terms of the photography field are introduced below.
1. Binning
Binning is an image readout mode in which the charges induced in adjacent pixels are summed and read out as one pixel. For example, in the process of shooting an image with an electronic device, light reflected by a target object is collected by the camera, so that the reflected light is transmitted to the image sensor. The image sensor includes multiple photosensitive elements; the charge collected by each photosensitive element is one pixel, and a binning operation is performed on the pixel information. Specifically, binning merges n×n pixels into one pixel. For example, binning can merge adjacent 2×2 pixels into one pixel; that is, the colors of adjacent 2×2 pixels are presented in the form of a single pixel.
For example, FIG. 1 is a schematic diagram of the principles of binning and DCG according to an embodiment of this application. As shown in FIG. 1, when the image is 4×4 pixels, binning can merge adjacent 2×2 pixels into one pixel, so that the image sensor can merge the 4×4 image into a 2×2 image and output the 2×2 image as the sensor's binning-based image.
2. Dual conversion gain (DCG)
In an image sensor with dual conversion gain (DCG) capability, one pixel has two potential wells. The two wells correspond to different full-well capacities and different conversion gains (CG): the large full-well capacity corresponds to low conversion gain (LCG) and low sensitivity, while the small full-well capacity corresponds to high conversion gain (HCG) and high sensitivity. In this way, the sensor can use the two wells (two sensitivities) and two conversion gains in the same scene and acquire two images in one exposure: an image in high-sensitivity mode and an image in low-sensitivity mode. The electronic device then combines the two acquired images into one image, which is the HDR technique.
For example, as shown in FIG. 1, after merging adjacent n×n pixels into one pixel, the image sensor can further use the two conversion gains, for example obtaining output data under HCG and under LCG respectively, fusing the HCG-based output data with the LCG-based output data to obtain a fused image, and outputting the fused image as the sensor's DCG-based image.
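To make the two readout modes concrete, the following is a minimal Python/NumPy sketch of 2×2 binning and a DCG-style fusion of an HCG and an LCG read. The mean-based merge and the saturation-weighted blend are illustrative assumptions; the embodiments do not specify the sensor's exact arithmetic.

```python
import numpy as np

def binning_2x2(raw: np.ndarray) -> np.ndarray:
    """Merge adjacent 2x2 pixels into one pixel (illustrative: mean of the four charges)."""
    h, w = raw.shape
    return raw.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

def dcg_fuse(hcg: np.ndarray, lcg: np.ndarray, sat: float = 0.9) -> np.ndarray:
    """Fuse one exposure read out at high and low conversion gain.

    Illustrative rule: trust the high-gain (high-sensitivity) read in dark
    regions and fall back to the low-gain read where HCG nears saturation.
    """
    w_hcg = np.clip((sat - hcg) / sat, 0.0, 1.0)  # weight fades near saturation
    return w_hcg * hcg + (1.0 - w_hcg) * lcg

# Example: a 4x4 "sensor" becomes a 2x2 binned image, then the two gains are fused.
raw = np.random.rand(4, 4)
binned = binning_2x2(raw)
fused = dcg_fuse(hcg=np.clip(binned * 4.0, 0.0, 1.0), lcg=binned)
```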
3. Magic-log technology
Films have stylized tones. Footage for professional film shooting is generally a low-saturation, low-contrast "flat" picture; such flat footage retains more highlight and shadow detail and leaves great room for post-production. This is Log video. Magic-log technology exploits the fact that the human eye is more sensitive to changes in dark-region brightness: it applies a Log function curve fitted to human perception, avoiding over-exposure and under-exposure while preserving a wide range of highlights, shadows, and color gamut.
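As an illustration of why log footage keeps shadow detail, the sketch below applies a generic log encoding to linear scene light. The actual magic-log curve is not disclosed here, so the curve shape and the constant are placeholders.

```python
import numpy as np

def log_encode(linear: np.ndarray, a: float = 5.0) -> np.ndarray:
    """Generic log curve: allocates more code values to shadows, mimicking the
    eye's greater sensitivity to dark-region changes (the constant a is a placeholder)."""
    return np.log1p(a * np.clip(linear, 0.0, 1.0)) / np.log1p(a)

# Two nearby shadow values stay well separated after 10-bit quantization.
print(np.round(log_encode(np.array([0.01, 0.02])) * 1023))  # approx. [28, 54]
```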
4. 3D look-up table (LUT) technology
3D LUT technology is a color-grading tool for restoring color to log video. Traditional filters adjust parameters such as exposure and color temperature; a 3D LUT maps and transforms the RGB colors of the original footage, so richer tones can be produced based on 3D LUT technology.
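A 3D LUT is simply a cube of output colors indexed by input RGB. The sketch below applies a LUT by nearest lattice node for brevity; real color pipelines interpolate (trilinearly or tetrahedrally), and the 17-node cube size is a common but assumed choice.

```python
import numpy as np

def apply_3d_lut(img: np.ndarray, lut: np.ndarray) -> np.ndarray:
    """Map each RGB pixel (values in [0, 1]) through an (N, N, N, 3) LUT cube,
    using the nearest lattice node (illustrative; production code interpolates)."""
    n = lut.shape[0]
    idx = np.clip(np.rint(img * (n - 1)).astype(int), 0, n - 1)
    return lut[idx[..., 0], idx[..., 1], idx[..., 2]]

# An identity LUT with 17 nodes per axis snaps colors to the lattice unchanged.
g = np.linspace(0.0, 1.0, 17)
identity_lut = np.stack(np.meshgrid(g, g, g, indexing="ij"), axis=-1)
pixels = np.random.rand(8, 8, 3)
assert np.allclose(apply_3d_lut(pixels, identity_lut), np.rint(pixels * 16) / 16)
```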
5. HDR10 video
HDR10 video is configured according to static metadata; for example, the HDR10 transfer curve, the PQ curve, is fixedly mapped according to the reference display brightness of the display. The bit depth of HDR10 video is 10 bits, and the static metadata may conform to the definition in SMPTE ST 2086 or other standards.
6. HDR10+ video
HDR10+ is a further improvement built on HDR10. HDR10+ supports dynamic metadata; that is, HDR10+ can adjust or enhance image brightness, contrast, color saturation, and so on according to the different scenes in a video, so that every frame in an HDR10+ video has an independently adjusted HDR effect. The bit depth of HDR10+ video is 12 bits, and the dynamic metadata may conform to the definition in SMPTE ST 2094 or other standards.
7. Luminance scene
A luminance scene may also be called a luminance level. In the embodiments of this application, the luminance scene is used to distinguish the luminance corresponding to different image frames. Luminance scenes may include a highlight scene, a medium-luminance scene, a dark-light scene, and so on.
For example, luminance scenes may correspond to different luminance ranges; the first device may distinguish luminance scenes based on light intensity (or illuminance). For example, the luminance range corresponding to a highlight scene may be greater than 50000 lux, the range corresponding to a medium-luminance scene may be 50000 lux to 10 lux, and the range corresponding to a dark-light scene may be 10 lux to 0 lux.
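With ranges like these, classifying a frame's luminance scene from measured illuminance reduces to simple thresholding. A minimal sketch using the example thresholds above (they are illustrative, not normative):

```python
def classify_luminance_scene(lux: float) -> str:
    """Classify a luminance scene from illuminance, using the example ranges above."""
    if lux > 50000:
        return "highlight"   # high-brightness scene
    if lux > 10:
        return "medium"      # medium-brightness scene
    return "dark"            # dark-light scene

assert classify_luminance_scene(80000) == "highlight"
assert classify_luminance_scene(300) == "medium"
assert classify_luminance_scene(2) == "dark"
```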
It can be understood that the luminance scenes described in the embodiments of this application are not limited to the above three; moreover, the luminance ranges corresponding to the three scenes are merely examples, and the luminance ranges corresponding to different luminance scenes may take other values, which is not limited in the embodiments of this application.
To describe the technical solutions of the embodiments of this application clearly, words such as "first" and "second" are used in the embodiments of this application to distinguish identical or similar items with basically the same functions and effects. For example, a first value and a second value are merely intended to distinguish different values, without limiting their order. A person skilled in the art can understand that the words "first", "second", and the like do not limit quantity or execution order, and items so labeled are not necessarily different.
It should be noted that in this application, words such as "exemplary" or "for example" are used to indicate an example, illustration, or explanation. Any embodiment or design described as "exemplary" or "for example" in this application should not be construed as more preferred or advantageous than other embodiments or designs. Rather, the words "exemplary" and "for example" are intended to present the related concept in a concrete manner.
In this application, "at least one" means one or more, and "multiple" means two or more. "And/or" describes an association relationship between associated objects and indicates that three relationships may exist; for example, A and/or B may indicate: A alone, both A and B, and B alone, where A and B may be singular or plural. The character "/" generally indicates an "or" relationship between the associated objects. "At least one of the following items" or a similar expression means any combination of these items, including a single item or any combination of multiple items. For example, at least one of a, b, or c may indicate: a; b; c; a and b; a and c; b and c; or a, b, and c, where a, b, and c may each be single or multiple.
For example, FIG. 2 is a schematic diagram of a scenario according to an embodiment of this application. As shown in FIG. 2, the scenario may include a first device 201 and a second device 202. In the embodiment corresponding to FIG. 2, the first device 201 being a mobile phone and the second device 202 being a tablet is used as an example for illustration; this example does not constitute a limitation on the embodiments of this application.
In a possible implementation, the first device 201 may record video by means of a camera and send the video content to the second device 202, so that the second device 202 can play the video on its display.
While the first device 201 records video with its camera, the first device 201 can use magic-log technology to maximally preserve the dynamic range information of the picture captured by the camera sensor, and use 3D LUT technology to convert that dynamic range information into HDR video of different color styles. The video may be an HDR10 video supporting the BT.2020 wide color gamut. Further, the first device 201 may send the HDR10 video to the second device 202.
When the second device 202 plays the HDR10 video on its display, since the HDR10 video is configured according to static metadata (for example, the HDR10 transfer curve, the PQ curve, is fixedly mapped according to an absolute luminance, where the absolute luminance may be the reference display brightness of the display of the second device 202, e.g. 1000 nit), when the HDR10 video is displayed on a second device whose peak brightness reaches 1000 nit, the PQ curve can present normal luminance mapping within 1000 nit well. The peak brightness can be understood as the highest brightness that the hardware of the second device can support.
However, when the peak brightness that the hardware of the second device 202 can support does not reach 1000 nit, for example when it is 500 nit, and an HDR10 video with a reference display brightness of 1000 nit is displayed on that second device 202, the second device 202 cannot perform luminance mapping for highlight content above 500 nit and below 1000 nit, causing loss of highlight information in highlight scenes.
Therefore, when the first device 201 records video with its camera and sends the video content to the second device 202 for playback on the display of the second device 202, the peak brightness supported by the hardware of the second device 202 can affect the display of an HDR10 video obtained based on the PQ curve.
In view of this, an embodiment of this application provides a video processing method, so that the first device can match different dynamic metadata to the different luminance scenes respectively corresponding to multiple frames captured by the camera, adjust the frames with the respective dynamic metadata to obtain an HDR video, and send the HDR video to the second device, so that the second device can perform luminance mapping on the HDR video based on the HDR video and the peak brightness supported by the hardware of the second device, and display video content with suitable brightness.
It can be understood that the first device (or second device) may also be called a terminal, user equipment (UE), mobile station (MS), mobile terminal (MT), and so on. The first device (or second device) may be a mobile phone supporting a video recording function (or video playback function), a smart TV, a wearable device, a tablet (Pad), a computer with wireless transceiver capability, a virtual reality (VR) device, an augmented reality (AR) device, a wireless terminal in industrial control, a wireless terminal in self-driving, a wireless terminal in remote medical surgery, a wireless terminal in a smart grid, a wireless terminal in transportation safety, a wireless terminal in a smart city, a wireless terminal in a smart home, and so on. The embodiments of this application do not limit the specific technology and specific device form adopted by the first device (or second device).
Therefore, to better understand the embodiments of this application, the structure of the first device (or second device) of the embodiments of this application is introduced below. For example, FIG. 3 is a schematic structural diagram of a first device (or second device) according to an embodiment of this application.
The first device (or second device) may include a processor 110, an external memory interface 120, an internal memory 121, a universal serial bus (USB) interface 130, a charging management module 140, a power management module 141, an antenna 1, an antenna 2, a mobile communication module 150, a wireless communication module 160, an audio module 170, a speaker 170A, a receiver 170B, a microphone 170C, a headset jack 170D, a sensor module 180, a button 190, an indicator 192, a camera 193, a display 194, and the like.
It can be understood that the structure illustrated in this embodiment of this application does not constitute a specific limitation on the first device (or second device). In some other embodiments of this application, the first device (or second device) may include more or fewer components than shown, or combine some components, or split some components, or have a different component arrangement. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
The processor 110 may include one or more processing units. Different processing units may be independent devices or may be integrated in one or more processors. A memory may also be provided in the processor 110 for storing instructions and data.
The processor 110 may include one or more processing units. For example, the processor 110 may include an application processor (AP), a modem processor, a graphics processing unit (GPU), an image signal processor (ISP), a controller, a video codec, a digital signal processor (DSP), a baseband processor, and/or a neural-network processing unit (NPU), and so on. Different processing units may be independent devices or may be integrated in one or more processors.
The USB interface 130 is an interface conforming to the USB standard specification, and may specifically be a Mini USB interface, a Micro USB interface, a USB Type-C interface, or the like. The USB interface 130 may be used to connect a charger to charge the first device (or second device), to transfer data between the first device (or second device) and peripheral devices, or to connect headphones and play audio through them. The interface may also be used to connect other first devices (or second devices), such as AR devices.
The charging management module 140 is configured to receive charging input from a charger, where the charger may be a wireless charger or a wired charger. The power management module 141 is configured to connect the charging management module 140 and the processor 110.
The wireless communication function of the first device (or second device) may be implemented by the antenna 1, the antenna 2, the mobile communication module 150, the wireless communication module 160, the modem processor, the baseband processor, and so on.
The antenna 1 and the antenna 2 are configured to transmit and receive electromagnetic wave signals. An antenna in the first device (or second device) may be used to cover one or more communication frequency bands. Different antennas may also be multiplexed to improve antenna utilization.
The mobile communication module 150 can provide solutions for wireless communication including 2G/3G/4G/5G applied to the first device (or second device). The mobile communication module 150 may include at least one filter, a switch, a power amplifier, a low noise amplifier (LNA), and so on. The mobile communication module 150 may receive electromagnetic waves via the antenna 1, perform filtering, amplification, and other processing on the received electromagnetic waves, and transmit them to the modem processor for demodulation.
The wireless communication module 160 can provide solutions for wireless communication applied to the first device (or second device), including wireless local area networks (WLAN) (such as a wireless fidelity (Wi-Fi) network), Bluetooth (BT), global navigation satellite system (GNSS), frequency modulation (FM), and so on.
The first device (or second device) implements the display function through the GPU, the display 194, the application processor, and so on. The GPU is a microprocessor for image processing and connects the display 194 and the application processor. The GPU performs mathematical and geometric computation for graphics rendering.
The display 194 is used to display images, videos, and the like. The display 194 includes a display panel. In some embodiments, the first device (or second device) may include one or N displays 194, where N is a positive integer greater than 1.
The first device (or second device) can implement the shooting function through the ISP, the camera 193, the video codec, the GPU, the display 194, the application processor, and so on.
The ISP is used to process data fed back by the camera 193. For example, when taking a photo, the shutter opens, light is transmitted through the lens to the camera's photosensitive element, the light signal is converted into an electrical signal, and the camera's photosensitive element passes the electrical signal to the ISP for processing and conversion into an image visible to the naked eye. The ISP can also algorithmically optimize the noise, brightness, and skin tone of the image, and can optimize parameters such as exposure and color temperature of the shooting scene. In some embodiments, the ISP may be disposed in the camera 193.
The camera 193 is used to capture still images or video. An optical image of an object is generated through the lens and projected onto the photosensitive element. The photosensitive element may be a charge coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) phototransistor. The photosensitive element converts the light signal into an electrical signal and then passes the electrical signal to the ISP to be converted into a digital image signal. The ISP outputs the digital image signal to the DSP for processing. The DSP converts the digital image signal into an image signal in a standard format such as RGB or YUV.
The digital signal processor is used to process digital signals; in addition to digital image signals, it can process other digital signals. For example, when the first device (or second device) selects a frequency point, the digital signal processor is used to perform a Fourier transform or the like on the frequency point energy.
The video codec is used to compress or decompress digital video. The first device (or second device) may support one or more video codecs, so the first device (or second device) can play or record videos in multiple encoding formats, such as moving picture experts group (MPEG) 1, MPEG2, MPEG3, and MPEG4.
The camera 193 is used to capture still images or video. In some embodiments, the first device (or second device) may include one or N cameras 193, where N is a positive integer greater than 1.
The external memory interface 120 can be used to connect an external memory card, such as a Micro SD card, to expand the storage capacity of the first device (or second device). The external memory card communicates with the processor 110 through the external memory interface 120 to implement the data storage function, for example saving files such as music and videos on the external memory card.
The internal memory 121 may be used to store computer-executable program code, where the executable program code includes instructions. The internal memory 121 may include a program storage area and a data storage area.
The first device (or second device) can implement audio functions such as music playback and recording through the audio module 170, the speaker 170A, the receiver 170B, the microphone 170C, the headset jack 170D, the application processor, and so on.
The audio module 170 is used to convert digital audio information into an analog audio signal output, and also to convert an analog audio input into a digital audio signal. The speaker 170A, also called the "horn", is used to convert an audio electrical signal into a sound signal. The first device (or second device) can listen to music or hands-free calls through the speaker 170A. The receiver 170B, also called the "earpiece", is used to convert an audio electrical signal into a sound signal. When the first device (or second device) answers a call or a voice message, the voice can be heard by placing the receiver 170B close to the ear. The headset jack 170D is used to connect wired headphones.
The microphone 170C, also called the "mic" or "mouthpiece", is used to convert a sound signal into an electrical signal.
The sensor module 180 may include one or more of the following sensors, for example: a pressure sensor, a gyroscope sensor, a barometric pressure sensor, a magnetic sensor, an acceleration sensor, a distance sensor, a proximity light sensor, a fingerprint sensor, a temperature sensor, a touch sensor, an ambient light sensor, or a bone conduction sensor (not shown in FIG. 3).
The buttons 190 include a power button, volume buttons, and so on. The buttons 190 may be mechanical buttons or touch buttons. The first device (or second device) can receive button input and generate key signal input related to user settings and function control of the first device (or second device). The indicator 192 may be an indicator light and may be used to indicate charging status and battery changes, and to indicate messages, missed calls, notifications, and so on.
The software system of the first device (or second device) may adopt a layered architecture, an event-driven architecture, a microkernel architecture, a microservice architecture, a cloud architecture, or the like, which will not be detailed here.
For example, FIG. 4 is a block diagram of the software structure of a first device according to an embodiment of this application.
The layered architecture divides the software into several layers, each with a clear role and division of labor. The layers communicate through software interfaces. In some embodiments, the Android system is divided into four layers: from top to bottom, the application layer, the application framework layer, the hardware abstraction layer (HAL), and the kernel layer.
The application layer may include a series of application packages. As shown in FIG. 4, the application packages may include one or more applications such as Camera, Settings, Maps, or Music.
The application framework layer provides an application programming interface (API) and a programming framework for the applications in the application layer. The application framework layer includes some predefined functions.
As shown in FIG. 4, the application framework layer may include a media framework module, a window manager, and so on.
The media framework module is used to encode the multiple frames obtained via the camera driver into a video; or the media framework module may also be used to decode a received video into multiple frames and the metadata corresponding to those frames, such as dynamic metadata or static metadata.
The window manager is used to manage window programs. The window manager can obtain the display size, determine whether there is a status bar, lock the screen, touch the screen, drag the screen, capture the screen, and so on.
In a possible implementation, the application framework layer may further include a notification manager, a content provider, a resource manager, a view system, and so on.
The notification manager allows applications to display notification information in the status bar; it can be used to convey notification-type messages and can disappear automatically after a short stay without user interaction. For example, the notification manager is used to give notice of download completion, message reminders, and so on. The notification manager may also present notifications in the status bar at the top of the system in the form of a chart or scrollbar text, such as notifications of applications running in the background, or notifications appearing on the screen in the form of a dialog window, for example text prompts in the status bar, prompt sounds, device vibration, or blinking indicator lights.
The content provider is used to store and retrieve data and make it accessible to applications. The data may include videos, images, audio, calls made and received, browsing history and bookmarks, phone books, and so on.
The resource manager provides applications with various resources, such as localized strings, icons, pictures, layout files, and video files.
The view system includes visual controls, such as controls that display text and controls that display pictures. The view system can be used to build applications. A display interface may be composed of one or more views. For example, a display interface including an SMS notification icon may include a view displaying text and a view displaying pictures.
The purpose of the hardware abstraction layer is to abstract the hardware. It can provide upper-layer applications with a unified interface for querying hardware devices, for example an interface conforming to the HAL interface definition language (HIDL) protocol.
The hardware abstraction layer may include a frame-by-frame statistics module, a codec, and so on.
The frame-by-frame statistics module is used to perform frame-by-frame statistics on the multiple frames obtained via the camera driver, determine the luminance scene corresponding to each of those frames, match the corresponding tone-mapping curve, and obtain the dynamic metadata corresponding to each of those frames.
The codec is used to store the results of encoding or decoding performed by the media framework module. For example, when the codec receives a video sent by the media framework module, the codec can save the video as required.
The hardware abstraction layer may further include an audio interface, a video interface, a call interface, a global positioning system (GPS) interface, and so on (not shown in FIG. 4), which is not limited in the embodiments of this application.
The kernel layer is the layer between hardware and software. The kernel layer contains at least a display driver, a camera driver, an audio driver, and a sensor driver.
The workflow of the software and hardware of the first device is exemplarily described below with reference to a video generation scenario and the embodiment corresponding to FIG. 3.
S401. When the touch sensor receives the user's touch operation on the movie mode in the camera application, a corresponding hardware interrupt is sent to the kernel layer, and the kernel layer processes the touch operation into a raw input event (including information such as the touch coordinates and the timestamp of the touch operation). The raw input event is stored at the kernel layer. The application framework layer obtains the raw input event from the kernel layer and identifies the control corresponding to the input event; the camera application then calls the interface of the application framework layer to start the camera application. S402. The camera application sends an instruction for encoding the image sequence to the camera driver at the kernel layer through the media framework module in the application framework layer and the frame-by-frame statistics module in the hardware abstraction layer, and the camera driver captures the image sequence through the camera. S403. The camera driver sends the captured image sequence to the frame-by-frame statistics module, so that the frame-by-frame statistics module can compute statistics on the captured image sequence, determine the luminance scene corresponding to each frame, match the corresponding tone-mapping curve, and obtain the dynamic metadata corresponding to each frame. Further, the frame-by-frame statistics module can send the frames and the dynamic metadata respectively corresponding to the frames to the media framework module. S404. The media framework module encodes the frames together with the dynamic metadata respectively corresponding to the frames to obtain an HDR video. S405. The media framework module can send the HDR video to the codec in the hardware abstraction layer for storage, so that the first device can process and record the HDR video.
The technical solutions of this application and how they solve the above technical problems are described in detail below with specific embodiments. The following specific embodiments may be implemented independently or combined with one another, and identical or similar concepts or processes may not be repeated in some embodiments.
In the embodiments of this application, the first device may obtain HDR10+ video via two user-triggered approaches. For example, the first device may allow the user to turn on the 4K HDR function in the movie mode of the camera application, so that when the first device receives the user's operation to start recording, the first device records HDR10+ video based on the 4K HDR function (as in the embodiment corresponding to FIG. 5); or the first device may allow the user to turn on movie HDR in the settings interface of the camera application, so that when the first device receives the user's operation to start recording, the first device records HDR10+ video based on the 4K HDR function (as in the embodiment corresponding to FIG. 6).
In one implementation, the first device allows the user to turn on the 4K HDR function in the movie mode of the camera application, so that when the first device receives the user's operation to start recording, the first device records HDR10+ video based on the 4K HDR function.
For example, FIG. 5 is a schematic diagram of an interface for starting shooting in movie mode according to an embodiment of this application.
As shown in FIG. 5, when the first device receives the user's operation to open the camera application, the first device may display the interface shown in a in FIG. 5, which may be the main interface of the camera application (or understood as the interface corresponding to photo mode). As shown in a in FIG. 5, the interface may include one or more of the following: a photo control corresponding to the photo function, a preview image, a control for enabling the artificial intelligence (AI) photography function, a control for turning the flash on or off, a settings control for configuring the camera application, a control for adjusting the shooting magnification, a control for flipping the camera, and a control for opening the gallery. The interface shown in a in FIG. 5 may also include multiple function controls in the first-level menu of the camera application, for example: a control for enabling night mode, a control for enabling portrait mode, a control for enabling photo mode, a control for enabling video mode, and a control 501 for enabling movie mode. The control for opening the gallery can be used to open the gallery application. The gallery application is a picture-management application on electronic devices such as smartphones and tablets, and may also be called an "album"; this embodiment does not limit the name of the application. The gallery application can support the user in performing various operations on videos stored on the first device, such as browsing, editing, deleting, and selecting.
The camera application may be an application supported by the system of the first device, or an application with a video recording function, and so on; the movie mode may be a shooting mode for obtaining HDR video; the operation for starting shooting may be a voice operation, or a click or slide operation on the control for starting shooting in movie mode, and so on.
In the interface shown in a in FIG. 5, when the first device receives the user's operation triggering the control 501 for enabling movie mode, the first device may display the interface shown in b in FIG. 5. The interface shown in b in FIG. 5 may be the interface corresponding to movie mode and may include one or more of the following: a slow-motion function control, a 4K HDR function control 502, a control for turning the flash on or off, a LUT function control, a settings control for configuring the camera application, and a control 503 for starting shooting in movie mode. The interface shown in b in FIG. 5 may also include multiple function controls in the first-level menu of the camera application, for example a control for enabling professional mode and a control for opening more functions; for the other content displayed in this interface, refer to the interface shown in a in FIG. 5, which will not be repeated here.
In a possible implementation, when the first device receives the user's operation triggering the control 501 for enabling movie mode for the first time, the interface shown in b in FIG. 5 may include prompt information 504 corresponding to the 4K HDR function control 502 (the control within the dashed box), where the prompt information 504 indicates that after the user turns on the 4K HDR function control 502, the first device will record 4K HDR10+ video.
In the interface shown in b in FIG. 5, the 4K HDR function control 502 may be off by default. When the first device receives the user's operation triggering the 4K HDR function control 502, the first device may display the interface shown in c in FIG. 5, which may include prompt information 505 indicating that the 4K HDR10+ mode has been turned on. The prompt information 505 may disappear after being displayed for 2 seconds or another duration.
Further, with the 4K HDR function control 502 in the on state, when the first device receives the user's operation triggering the control 503 for starting shooting in the interface shown in c in FIG. 5, the first device may capture an image sequence with the camera and obtain the first HDR10+ video by processing the image sequence. The other content displayed in the interface shown in c in FIG. 5 is similar to the interface shown in b in FIG. 5 and will not be repeated here.
In a possible implementation, with the 4K HDR function control 502 in the on state as shown in c in FIG. 5, when the first device receives the user's operation on the 4K HDR function control 502, the first device may display the interface shown in d in FIG. 5, which may include prompt information 506 indicating that the 4K HDR10+ mode has been turned off. The prompt information 506 may disappear after being displayed for 2 seconds or another duration.
It can be understood that 4K refers to the screen resolution, namely 4096×2160, and HDR refers to a screen rendering technique. Compared with ordinary images, the 4K HDR function provides more dynamic range and image detail and better reflects the visual effect of a real environment; this mode enables the electronic device 100 to record video at 4K, 30 fps. By default, the 4K HDR function is initially off, and the 4K HDR function control 502 in the interface shown in b in FIG. 5 bears a slash indicating that the switch is off. When the first device detects the user's trigger operation on the 4K HDR function control 502, the first device turns on the 4K HDR function, and the slash indicating the off state on the 4K HDR function control 502 in the interface shown in c in FIG. 5 disappears; further, when the first device detects the user's trigger operation on the 4K HDR function control 502 in the interface shown in c in FIG. 5, the first device turns off the 4K HDR function, and the slash indicating the off state is displayed on the 4K HDR function control 502 in the interface shown in d in FIG. 5.
It can be understood that when the 4K HDR function is off, the resolution of the preview picture is lower than when the 4K HDR function is on; when the 4K HDR function is on, HDR10 video can be displayed in the preview picture.
On this basis, the user can flexibly control the 4K HDR control in movie mode according to shooting needs, thereby recording HDR10+ video.
In another implementation, the first device allows the user to turn on movie HDR in the settings interface of the camera application, so that when the first device receives the user's operation to start recording, the first device records HDR10+ video based on the 4K HDR function.
It can be understood that, compared with the embodiment corresponding to FIG. 5, the interface where movie mode is located in the implementation corresponding to FIG. 6 has no 4K HDR function control, so the first device cannot record HDR10+ video based on the user triggering that control.
For example, FIG. 6 is a schematic diagram of another interface for starting shooting in movie mode according to an embodiment of this application.
When the first device receives the user's operation to open movie mode, the first device may display the interface shown in a in FIG. 6, which may include a settings control 601, a control 602 for starting shooting in movie mode, and so on. The other content displayed in this interface is similar to the interface shown in b in FIG. 5 and will not be repeated here. Compared with the interface shown in b in FIG. 5, the interface shown in a in FIG. 6 has no 4K HDR function control.
When the first device receives the user's operation triggering the settings control 601, the first device may display the interface shown in b in FIG. 6, which may be the settings interface of the camera application. As shown in b in FIG. 6, the interface may include function controls corresponding to photography, for example: a photo aspect ratio control (e.g. supporting a 4:3 ratio), a voice-controlled photography control, a gesture photography control, and a smile capture control, where the gesture photography function may support the front camera only and is triggered by a gesture toward the phone, and the smile capture function can shoot automatically when a smile is detected. The interface may also include function controls corresponding to the video function, for example: a video resolution control, a video frame rate control, a high-efficiency video format control, a movie HDR function control 603, and an AI movie tone control, where the high-efficiency video format function can save 35% of space although the user may be unable to play videos in this format on other devices; the movie HDR function records video with 10-bit HDR, with the video automatically switching to 4K; and the AI movie tone function can intelligently identify the shooting content and match a LUT tone, and is supported only in non-4K HDR.
In the interface shown in b in FIG. 6, when the first device receives the user's operation turning on the movie HDR function control 603, the first device may display the interface shown in c in FIG. 6, where the movie HDR function control 603 is in the on state. The other content displayed in the interface shown in c in FIG. 6 is similar to the interface shown in b in FIG. 6 and will not be repeated here.
Further, with the movie HDR function control 603 in the interface shown in c in FIG. 6 in the on state, when the first device receives the user exiting the settings interface and then receives the user's operation triggering the control 602 for starting shooting in the interface shown in a in FIG. 6, the first device may capture an image sequence with the camera and obtain the first HDR10+ video by processing the image sequence. It can be understood that when the movie HDR function control is in the on state, HDR10 video can be displayed in the preview picture.
On this basis, the user can flexibly control the movie HDR function control in the settings according to shooting needs, thereby recording HDR10+ video.
In a possible implementation, on the basis of the embodiment corresponding to FIG. 5 (or FIG. 6), when the first device receives the user's operation to view the function details of the camera, the first device may display the mode introduction corresponding to movie mode.
For example, FIG. 7 is a schematic diagram of an interface for viewing function details according to an embodiment of this application. The interface shown in a in FIG. 7 may include a control 701 for opening more functions; the other content displayed in this interface is similar to the interface shown in b in FIG. 5 and will not be repeated here.
In the interface shown in a in FIG. 7, when the first device receives the user's operation on the control 701 for opening more functions, the first device may display the interface shown in b in FIG. 7, which may include: an HDR function control, a slow-motion function control, a micro-movie function control, a time-lapse function control, a moving-picture function control, a download control for downloading more functions, an edit control for adjusting the positions of the functions among the additional controls, and a details control 702 for viewing detailed information about the functions in the camera application.
In the interface shown in b in FIG. 7, when the first device receives the user's operation on the details control 702, the first device may display the interface shown in c in FIG. 7, which may display detailed descriptions of the functions in the camera application. For example, the interface may include: a detailed description of the HDR function, e.g. in scenes with strong light-dark contrast, multiple photos are taken with different intelligent exposure parameters and merged into one, preserving both highlight and shadow detail; a detailed description of the slow-motion function, e.g. slow motion can shoot slow-playback videos of unlimited duration, and super slow motion supports automatic or manual shooting of ultra-high-speed short videos, with better results in bright environments; a detailed description of the time-lapse function, e.g. footage recorded over a long time can be synthesized into a short video, reproducing the process of scene changes in a short time; and a detailed description 703 corresponding to movie mode, e.g. 4K HDR10+ video can be recorded, providing a professional imaging solution.
It can be understood that the detailed description corresponding to movie mode may also be other content, which is not limited in the embodiments of this application.
On this basis, the user can learn the role of each function in the camera application through the details page shown in c in FIG. 7, improving the user's experience with the camera application.
On the basis of the embodiments corresponding to FIG. 5 or FIG. 6, the first device may process the image sequence captured by the camera to obtain the preview image sequence corresponding to the preview stream, and may also process the image sequence captured by the camera to obtain the HDR10+ video corresponding to the recording stream. The processing may include image pre-processing and image post-processing.
For example, FIG. 8 is a schematic flowchart of a video processing method according to an embodiment of this application. As shown in FIG. 8, the camera of the first device may include an image sensor supporting the HDR function; for example, the image sensor may output images based on the DCG mode and based on the binning mode.
The output data supported by the DCG mode may include: a frame rate of 30 fps, 12-bit data storage, and the RAW12 output format; the output data supported by the binning mode may include: a frame rate of 30 fps, 12-bit data storage, and the RAW12 output format. It can be understood that binning carries data only in the high 10 bits, so binning requires padding of the low two bits to guarantee 12-bit data storage.
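The low-two-bit padding mentioned above can be expressed as a left shift into the 12-bit container; a sketch, assuming zero bits are shifted in (the embodiments do not say whether the pad bits are zeros or replicated bits):

```python
import numpy as np

def pad_10bit_to_12bit(samples_10bit: np.ndarray) -> np.ndarray:
    """Pad 10-bit binning samples to 12-bit storage by shifting in two zero bits."""
    return samples_10bit.astype(np.uint16) << 2

assert int(pad_10bit_to_12bit(np.array([1023]))[0]) == 4092  # 10-bit max fills the 12-bit range
```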
As shown in FIG. 8, the image sensor can output image data based on binning, or based on DCG. Binning can output an image sequence by merging n (for example, n may be 4) pixels into one pixel; DCG, after merging n pixels into one pixel, can output an image sequence through image fusion of the HCG-based output data and the LCG-based output data.
Further, in the image signal processor shown in FIG. 8, the first device may perform image pre-processing 801 on the image sequence to obtain an image sequence after image pre-processing 801.
In this embodiment of this application, image pre-processing 801 (also called image signal processor front-end processing) is used to process the RAW-format image captured by the camera into a YUV (or understood as luminance and chrominance) format image.
It can be understood that the image pre-processing 801 may include one or more of the following: defective pixel correction, RAW-domain noise reduction, black level correction, lens shading correction, automatic white balance, color interpolation, color correction, tone mapping, or image conversion, and the image pre-processing 801 is not limited in the embodiments of this application.
Further, the first device may use the pre-processed image sequence as the preview stream and the recording stream. In the preview stream, the first device may perform gamma correction 802 and 3D LUT processing 803 on the pre-processed image sequence corresponding to the preview stream to obtain the preview image sequence; in the recording stream, the first device may perform gamma correction 802 and 3D LUT processing 803 on the pre-processed image sequence corresponding to the recording stream to obtain the recording image sequence. The recording image sequence may include the first image sequence and the second image sequence described in the embodiments of this application.
In gamma correction 802, gamma correction is used to adjust the brightness of the image so that it can retain more highlight and shadow detail, compress the contrast, and retain more color information. As shown in FIG. 8, the first device may apply a log curve to perform gamma correction on the pre-processed image sequence corresponding to the preview stream and the pre-processed image sequence corresponding to the recording stream respectively, thereby obtaining the gamma-corrected image sequence corresponding to the preview stream and the gamma-corrected image sequence corresponding to the recording stream.
In 3D LUT processing 803, the 3D LUT is used to map the color space of the image so that data passing through the 3D LUT can produce different color styles. As shown in FIG. 8, the first device performs 3D LUT color mapping on the gamma-corrected image sequence corresponding to the preview stream and the gamma-corrected image sequence corresponding to the recording stream respectively, obtaining the preview image sequence corresponding to the preview stream and the recording image sequence corresponding to the recording stream.
It can be understood that the images in the preview image sequence and the recording image sequence may all be images on the PQ curve satisfying the BT.2020 color gamut. The reference brightness supported in the PQ curve may be 1000 nit, and the PQ curve may be stored in the first device as static metadata; the format of the static metadata may satisfy SMPTE ST 2086 or another custom format, and the specific format of the static metadata is not limited in the embodiments of this application.
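For reference, the PQ transfer function mentioned here is standardized in SMPTE ST 2084. Below is a sketch of the inverse EOTF (luminance to code value) with the standard's published constants, normalizing luminance to the 10000-nit PQ ceiling:

```python
import numpy as np

# SMPTE ST 2084 (PQ) constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_encode(nits: np.ndarray) -> np.ndarray:
    """Inverse EOTF: absolute luminance in nits -> PQ code value in [0, 1]."""
    y = np.clip(nits / 10000.0, 0.0, 1.0) ** M1
    return ((C1 + C2 * y) / (1.0 + C3 * y)) ** M2

# A 1000-nit reference white lands at roughly 0.75 on the PQ signal scale,
# which is why content mastered to 1000 nit clips on dimmer panels.
print(round(float(pq_encode(np.array(1000.0))), 3))
```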
It can be understood that gamma correction 802, 3D LUT processing 803, and so on may be part of image post-processing (also called image processor back-end processing).
In a possible implementation, the image post-processing may further include other processing steps such as image stabilization, noise processing, and image scaling, which are not limited in the embodiments of this application.
Further, the first device performs frame-by-frame statistics processing 804 on the images in the preview image sequence, determines the tone-mapping curves respectively corresponding to the multiple frames in the recording image sequence, and generates the dynamic metadata, so that the first device can encode the recording image sequence and the dynamic metadata into an HDR10+ video. The first device may encode the recording image sequence and the dynamic metadata into the HDR10+ video upon receiving the user's operation to end video recording in movie mode.
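The frame-by-frame statistics step can be pictured as the loop below. All names here (the metadata record, the curve table, the classifier callback) are hypothetical placeholders, not the module's actual interface.

```python
from dataclasses import dataclass

@dataclass
class FrameMetadata:
    scene: str               # luminance scene label for this frame
    reference_nits: float    # reference brightness of the matched tone-mapping curve
    curve_params: tuple      # parameters of the matched tone-mapping curve

# Hypothetical scene -> tone-mapping-curve parameter table.
CURVE_TABLE = {"highlight": (0.6, 2.2), "medium": (0.8, 2.0), "dark": (1.2, 1.8)}

def per_frame_statistics(frames, classify) -> list:
    """Sketch of processing 804: classify each recorded frame's luminance scene
    and attach the matched tone-mapping curve as per-frame dynamic metadata."""
    metadata = []
    for frame in frames:
        scene = classify(frame)
        metadata.append(FrameMetadata(scene, reference_nits=400.0,
                                      curve_params=CURVE_TABLE[scene]))
    return metadata  # the encoder then pairs frames and metadata by timestamp
```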
For example, FIG. 9 is a schematic diagram of image sequences and luminance scenes according to an embodiment of this application. In the embodiment corresponding to FIG. 9, a frame rate of 30 fps is used as an example to illustrate the luminance scenes of the multiple frames generated within one second (s).
As shown in FIG. 9, the first device may capture image 901 at around 33 milliseconds (ms), image 902 at around 66 ms, image 903 at around 99 ms, image 904 at around 132 ms, ..., image 905 at around 233 ms, image 906 at around 266 ms, image 907 at around 299 ms, image 908 at around 332 ms, and so on.
For example, the user may capture image 901, image 902, image 903, image 904, ..., and image 905 outdoors; when the user moves from outdoors to indoors, the user may capture image 906, image 907, image 908, and so on indoors.
It can be understood that, among the frames captured while the user is outdoors, image 901 and image 902 may be in the same luminance scene, for example both may belong to a highlight scene; or image 901 and image 902 may be in different luminance scenes, for example image 901 may belong to a highlight scene and image 902 to a medium-luminance scene. Therefore, the first image sequence described in the embodiments of this application may be the image frame at a certain moment, for example image 901; or the first image sequence may be a collective term for the image frames of a certain period, for example the first image sequence may include image 901, image 902, image 903, image 904, ..., and image 905. Similarly, the second image sequence described in the embodiments of this application may also be the image frame at a certain moment, for example image 906; or the second image sequence may be a collective term for the image frames of a certain period, for example the second image sequence may include image 906, image 907, and image 908. The luminance scene corresponding to the first image sequence differs from the luminance scene corresponding to the second image sequence.
In the embodiments of this application, the tone-mapping curve can adjust the brightness of regions in the image based on a reference brightness so as to protect the highlight regions and dark regions in the image, for example lifting the dark regions of the image and suppressing the highlight regions. The reference brightness of the tone-mapping curve may be preset; for example, the preset reference brightness may be set to 400 nit or another value.
In frame-by-frame statistics processing 804, the process by which the first device determines the tone-mapping curves respectively corresponding to the multiple frames in the recording image sequence may be: the first device may determine the luminance scenes respectively corresponding to the multiple frames in the recording image sequence, and then determine the tone-mapping curve corresponding to each luminance scene based on the correspondence between luminance scenes and tone-mapping curves.
It can be understood that the luminance scenes may include a highlight scene, a medium-luminance scene, a dark-light scene, and so on; the luminance scenes are not limited to the above three kinds and may also be four, five, or six kinds, and the names and number of the scenes included in the above luminance scenes are not limited in the embodiments of this application.
Specifically, the first device may determine the luminance scenes respectively corresponding to the multiple frames in the recording image sequence based on the grayscale histogram of the preview image, the average luminance value of the preview image, and so on.
In one implementation, the first device may store the grayscale histograms corresponding to typical luminance scenes, so the first device can compute the grayscale histograms respectively corresponding to the multiple frames in the recording image sequence; if the similarity between the grayscale histogram of the preview image and the grayscale histogram corresponding to a typical luminance scene exceeds a certain threshold, the first device can determine the luminance scene corresponding to the preview image. The grayscale histogram represents the luminance distribution of the pixels in the preview image, where luminance can be understood as the value of the Y channel (or Y component) when the image is in YUV format.
In another implementation, the first device may compute the average luminance value of the pixels in each of the multiple frames in the recording image sequence; if the average luminance value exceeds the luminance threshold corresponding to a luminance scene, the first device can determine the luminance scene corresponding to the preview image.
It can be understood that the methods by which the first device determines the luminance scenes respectively corresponding to the multiple frames in the recording image sequence are not limited to the above two, which is not limited in the embodiments of this application.
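The second approach (average luminance against per-scene thresholds) can be sketched as follows; the 8-bit Y-channel thresholds are illustrative values, not taken from the embodiments.

```python
import numpy as np

def classify_by_average_y(yuv_frame: np.ndarray,
                          hi_thresh: float = 170.0,
                          lo_thresh: float = 60.0) -> str:
    """Classify a YUV frame's luminance scene from the mean of its Y channel
    (the Y component carries the luminance distribution discussed above)."""
    mean_y = float(yuv_frame[..., 0].mean())
    if mean_y > hi_thresh:
        return "highlight"
    if mean_y > lo_thresh:
        return "medium"
    return "dark"
```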
Further, in frame-by-frame statistics processing 804, once the first device has determined the luminance scenes respectively corresponding to the multiple frames in the recording image sequence, the first device can determine the tone-mapping curve corresponding to each luminance scene based on the correspondence between luminance scenes and tone-mapping curves, and generate the dynamic metadata.
Specifically, the first device may store the correspondence between luminance scenes and tone-mapping curves, so the first device can match the tone-mapping curve corresponding to the current luminance scene from the correspondence and obtain the dynamic metadata. Alternatively, the first device may determine the corresponding tone-mapping curve in real time according to the luminance scene and generate the dynamic metadata. The dynamic metadata may include the reference brightness value of the tone-mapping curve, for example 400 nit.
It can be understood that the tone-mapping curve may be saved in the first device in the form of dynamic metadata, whose format may differ depending on the protocol; for example, the format of the dynamic metadata may satisfy SMPTE ST 2094 (supporting application1, application2, application3, or application4) or another custom format, and the specific format of the dynamic metadata is not limited in the embodiments of this application. For example, the dynamic metadata specified in SMPTE ST 2094-application4 may include one or more of the following: information about windows in the image (a window may be a rectangular region set in the image), the size and position of a window, the RGB values of the brightest pixel in a window, the largest average of R, G, and B of the pixels in a window, the percentage level of bright luminance in a window, the level (percentile) of bright luminance in a window, the degree of the maximum luminance value in the scene, the luminance value of the knee point (the knee point can be understood as the point where luminance loses linearity), samples whose luminance exceeds the knee point, RGB values used to correct changes when luminance compression is performed on the target display, the brightness of the target display (which may also be called the preset brightness described in the embodiments of this application), and the local display brightness. It can be understood that the preset brightness in the dynamic metadata is the same; for example, when the dynamic metadata includes first dynamic metadata and second dynamic metadata, the preset brightness in the first dynamic metadata is the same as the preset brightness in the second dynamic metadata.
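As a rough picture of what such a per-frame record carries, here is a sketch holding a subset of the fields listed above; the field names are descriptive stand-ins, not the standard's syntax element names.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class DynamicMetadataRecord:
    """Subset of per-frame HDR10+ (ST 2094 application 4 style) fields."""
    window: Tuple[int, int, int, int]              # rectangular region: x, y, width, height
    max_rgb: Tuple[float, float, float]            # RGB of the brightest pixel in the window
    average_maxrgb: float                          # largest average of R, G, B in the window
    bright_percentiles: List[float] = field(default_factory=list)  # bright-luminance levels
    knee_point: Tuple[float, float] = (0.0, 0.0)   # where luminance loses linearity
    targeted_display_nits: float = 400.0           # target display brightness ("preset brightness")
```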
Further, in encoding 805, the first device displays HDR10 video using the preview image sequence, and encodes the recording image sequence and the dynamic metadata into the first HDR10+ video.
In this embodiment of this application, the HDR10 video can be used for the preview display of the first device; for example, the HDR10 video can be displayed on the display of the first device. The first HDR10+ video can be used for the video recording of the first device; for example, the first device may feed the recording image sequence and the dynamic metadata into the video encoder according to timestamps (or according to an identifier indicating that a recording image sequence and its dynamic metadata belong to one pair of data) and encode them into the first HDR10+ video. The first HDR10+ video can be saved in the first device, and can further be displayed on the first device (or second device) upon the user's playback operation.
On this basis, the first device can match different dynamic metadata to the different luminance scenes respectively corresponding to the multiple frames captured by the camera, and adjust the frames with the respective dynamic metadata to obtain HDR10+ video.
On the basis of the first device encoding the HDR10+ video in the embodiment corresponding to FIG. 8, the first device may save the HDR10+ video in the gallery application.
For example, FIG. 10 is a schematic diagram of an interface for viewing an HDR10+ video according to an embodiment of this application.
When the first device receives the user's operation to open the gallery application, the first device may display the interface shown in a in FIG. 10, which may display a video 1001 shot today, a video 1002 shot yesterday, and a picture 1003. An identifier 1004 indicating that the video 1001 is an HDR10+ video may be displayed around the video 1001, and the identifier 1004 may be displayed as HDR.
Further, in the interface shown in a in FIG. 10, when the first device receives the user's operation triggering the video 1001, the first device may display the interface shown in b in FIG. 10, which may include: an identifier 1005 indicating that the video 1001 is an HDR10+ video, a control for viewing more information about the video, a control for sharing the video, a control for favoriting the video, a control for editing the video, a control for deleting the video, and a control for viewing more functions.
On this basis, the user can accurately find the HDR10+ video in the gallery application based on the identifier, making it more convenient to view HDR10+ video.
On the basis of the embodiment corresponding to FIG. 10, when the first device receives the user's operation to share the HDR10+ video, the first device can share the HDR10+ video to the second device.
In the embodiments of this application, the user's operation to share the HDR10+ video may be sharing the HDR10+ video via Bluetooth, or sharing the HDR10+ video via a network such as WLAN, or the user may share the HDR10+ video to other devices via device sharing; the sharing operation is not specifically limited in the embodiments of this application.
For example, the sharing operation may be "Honor Share", a sharing method that scans for devices via Bluetooth and transfers data using WLAN. FIG. 11 is a schematic diagram of a device-sharing interface according to an embodiment of this application.
When the first device receives the user's operation to share the HDR10+ video, for example receiving the user's operation on the control for sharing the video in the interface shown in b in FIG. 10, the first device may display the interface shown in a in FIG. 11, which may include prompt information 1101, and the prompt information 1101 may include a control 1102 for sharing the HDR10+ video to the second device. It can be understood that the prompt information 1101 may also include controls for sharing the HDR10+ video to other applications, and so on.
In a possible implementation, an identifier 1104 indicating the HDR10+ video may also be displayed around the HDR10+ video as shown in a in FIG. 11; for example, the identifier may be HDR.
In the interface shown in a in FIG. 11, when the first device receives the user's operation on the control 1102, the first device can share the HDR10+ video to the second device, and the second device may display the interface shown in b in FIG. 11. As shown in b in FIG. 11, the interface of the second device may display prompt information 1103 indicating that the received HDR10+ video is generated based on dynamic metadata; for example, the prompt information 1103 may be displayed as: the first device wants to share with you an HDR10+ video containing dynamic metadata (1.65 GB); accept? The prompt information may include a reject control, an accept control, and so on. The interface shown in b in FIG. 11 may also include a file manager application control, an email application control, a music application control, a computer application control, and so on.
In a possible implementation, an identifier 1105 indicating the HDR10+ video may also be displayed around the HDR10+ video as shown in b in FIG. 11; for example, the identifier may be HDR.
In the interface shown in b in FIG. 11, when the second device receives the user's operation on the accept control, the second device can save the HDR10+ video.
On this basis, the first device can share the HDR10+ video to the second device via device sharing, so that the second device can play the HDR10+ video on its own device.
On the basis of the embodiment corresponding to FIG. 11, in a possible implementation, when the second device receives the user's operation to play the HDR10+ video, the second device may display prompt information indicating that the second device will play the HDR10+ video based on dynamic metadata.
For example, FIG. 12 is a schematic diagram of an interface displaying prompt information according to an embodiment of this application. When the second device receives the user's operation to play the HDR10+ video, the second device may display the interface shown in FIG. 12, which may include: prompt information 1201, a confirmation control 1202, a control for ending playback of the HDR10+ video, a control for opening the gallery application, and a control for viewing more functions. The prompt information 1201 indicates the current playback form of the video; for example, the prompt information 1201 may be displayed as: the current device will play HDR video based on dynamic metadata.
In a possible implementation, an identifier 1203 indicating the HDR10+ video may also be displayed around the HDR10+ video in the interface shown in FIG. 12; for example, the identifier may be HDR.
In a possible implementation, when the second device does not support dynamic metadata, the second device may also play the video based on static metadata; in this case, the prompt information 1201 may not be displayed on the second device.
Further, in the interface shown in FIG. 12, when the second device receives the user's operation on the confirmation control 1202, the second device can parse the HDR10+ video and play it. For example, the second device may parse and play the HDR10+ video based on the video processing flow in the embodiment corresponding to FIG. 13.
For example, FIG. 13 is a schematic flowchart of playing an HDR10+ video according to an embodiment of this application. As shown in FIG. 13, the process of playing the HDR10+ video may include video decoding 1301. In video decoding 1301, the second device may decode the HDR10+ video into dynamic metadata and a third image sequence based on the SMPTE ST 2094-application4 video standard.
In the embodiments of this application, when the second device receives the first HDR10+ video sent by the first device, the second device may determine the video standard of the first HDR10+ video, for example whether the first HDR10+ video supports SMPTE ST 2094-application4 or SMPTE ST 2086. Further, when the second device supports the SMPTE ST 2094-application4 video standard, the second device can obtain the dynamic metadata and the third image sequence (the third image sequence may be HDR static images) by decoding the first HDR10+ video; or, when the second device supports the SMPTE ST 2086 video standard, the second device can obtain the static metadata and the third image sequence by decoding the first HDR10+ video.
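That standard check amounts to a branch on which metadata the stream carries and which standard the device supports. A sketch with hypothetical `stream`/`device` helper interfaces (no real decoder API is implied):

```python
def decode_hdr_video(stream, device):
    """Sketch of video decoding 1301: choose dynamic or static metadata
    according to device support; `stream` and `device` are hypothetical."""
    frames = stream.decode_frames()
    if device.supports("SMPTE ST 2094-40") and stream.has_dynamic_metadata():
        return frames, stream.dynamic_metadata()   # HDR10+ path
    if device.supports("SMPTE ST 2086") and stream.has_static_metadata():
        return frames, stream.static_metadata()    # HDR10 fallback
    raise ValueError("no supported HDR metadata standard")
```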
As shown in FIG. 13, when the second device supports dynamic metadata, the process of playing the HDR video may include the following steps: tone mapping 1302 based on dynamic metadata, user interface (UI) tone mapping 1303, overlay processing 1304 of the dynamic HDR images and the HDR UI, and display-based tone mapping 1305.
In tone mapping 1302 based on dynamic metadata, the second device may, based on SMPTE ST 2094-application4, perform tone mapping on each frame of the third image sequence according to its corresponding dynamic metadata to obtain a tone-mapped image sequence. Further, the second device may also adjust the image brightness in the tone-mapped image sequence based on the peak brightness that the hardware of the second device can support; for example, the brightness of the images in the tone-mapped image sequence may be proportionally adjusted based on the ratio between the reference brightness of the dynamic metadata (e.g. 400 nit) and the peak brightness that the hardware of the second device can support (e.g. 500 nit), obtaining a dynamic HDR image sequence.
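The proportional adjustment described here is a per-pixel scale by the panel-peak/reference ratio; a minimal sketch, assuming straight linear scaling (a real pipeline would typically fold this into the tone-mapping curve):

```python
import numpy as np

def scale_to_panel(frame_nits: np.ndarray,
                   reference_nits: float = 400.0,
                   panel_peak_nits: float = 500.0) -> np.ndarray:
    """Rescale a tone-mapped frame from the metadata's reference brightness
    to the panel's supported peak, clipping to what the hardware can emit."""
    ratio = panel_peak_nits / reference_nits   # e.g. 500 / 400 = 1.25
    return np.clip(frame_nits * ratio, 0.0, panel_peak_nits)
```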
In UI tone mapping 1303, the second device may adjust the tone of standard dynamic range (SDR) UI icons based on a preset tone-mapping rule to obtain the HDR UI.
In overlay processing 1304 of the dynamic HDR images and the HDR UI, the second device may overlay each frame of the dynamic HDR image sequence with the HDR UI to obtain a blended HDR image sequence.
In display-based tone mapping 1305, the second device may, based on the display's tone mapping, process the images of the blended HDR image sequence into an image sequence in the display color space, thereby obtaining the HDR10+ video.
In a possible implementation, when the second device determines that it supports static metadata, the second device may generate an HDR10 video using the third image sequence and the static metadata. It can be understood that the second device can also obtain the HDR10 video based on the static metadata and the third image sequence following the embodiment corresponding to FIG. 13, which will not be repeated here.
On this basis, the second device can decode and play the HDR10+ video sent by the first device.
It can be understood that the interfaces provided in the embodiments of this application are merely examples and do not constitute a further limitation on the embodiments of this application.
To explain the content of the above embodiments more clearly, for example, FIG. 14 is a schematic flowchart of another video processing method according to an embodiment of this application.
As shown in FIG. 14, the video processing method may include the following steps:
S1401. The first device receives an operation for starting shooting in movie mode.
In the embodiments of this application, movie mode is a mode for recording high dynamic range HDR video. The operation for starting shooting may be the operation on the control 503 for starting shooting in the embodiment corresponding to FIG. 5, or the operation on the control 602 for starting shooting in the embodiment corresponding to FIG. 6.
S1402. In response to the operation for starting shooting, the first device obtains a first image sequence by means of the camera.
The first image sequence corresponds to a first luminance scene; for the method of determining the first luminance scene, refer to the description in the embodiment corresponding to FIG. 8, which will not be repeated here.
S1403. The first device encodes the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain a first HDR video.
The first HDR video may be the first HDR10+ video described in the embodiments of this application; the first dynamic metadata includes a preset brightness.
S1404. The second device obtains the first HDR video from the first device.
For example, the second device may obtain the first HDR video from the first device based on the embodiment corresponding to FIG. 11.
In a possible implementation, the first HDR video may be compatible with both dynamic metadata and static metadata, so that a second device supporting dynamic metadata can play the content of the first HDR video using the dynamic metadata, while a second device supporting static metadata can play the content of the first HDR video using the static metadata.
S1405. The second device adjusts the brightness of the first HDR video based on the preset brightness to obtain a second HDR video.
The second HDR video may be the second HDR10+ video described in the embodiments of this application. For example, when the dynamic metadata corresponding to the first HDR video indicates that the preset brightness of the first HDR video is 400 nit, the second device may adjust the images in the first HDR video based on the 400 nit, so that the brightness of the images in the first HDR video is kept at most 400 nit.
In a possible implementation, when the dynamic metadata corresponding to the first HDR video indicates that the preset brightness of the first HDR video is 400 nit and the peak brightness of the second device is 700 nit, the second device may, based on the proportional relationship between 400 nit and 700 nit, adaptively increase the brightness of the images in the first HDR video according to that ratio, so that the 400 nit images in the first HDR video can all be displayed on the 700 nit display of the second device.
S1406. The second device plays the second HDR video.
The second device may play the second HDR video based on the embodiment corresponding to FIG. 12.
On this basis, the first device can, for the first image sequence captured by the camera, match dynamic metadata for the luminance scene corresponding to the first image sequence, adjust the first image sequence with the dynamic metadata to obtain the first HDR video, and send the first HDR video to the second device, so that the second device can perform luminance mapping on the first HDR video based on the preset brightness indicated in the dynamic metadata and display video content with suitable brightness.
In a possible implementation, the second device adjusting the brightness of the first HDR video based on the preset brightness to obtain the second HDR video in S1405 includes: the second device determines a brightness ratio, where the brightness ratio is the ratio between the peak brightness of the second device and the preset brightness; and the second device adjusts the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video.
In a possible implementation, the method further includes: the first device continues to obtain a second image sequence by means of the camera, where the second image sequence corresponds to a second luminance scene, and the first luminance scene differs from the second luminance scene. The first device encoding the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video includes: the first device encodes the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first luminance scene, and the second dynamic metadata corresponding to the second luminance scene to obtain the first HDR video.
In a possible implementation, before S1403, the method further includes: the first device performs image pre-processing on the first image sequence to obtain a pre-processed first image sequence; the first device performs gamma correction on the pre-processed first image sequence to obtain a gamma-corrected first image sequence; and the first device performs 3D look-up table processing on the gamma-corrected first image sequence to obtain a 3D-look-up-table-processed first image sequence, where the 3D-look-up-table-processed first image sequence includes the first static metadata corresponding to the first image sequence. The first device encoding the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video includes: the first device encodes the 3D-look-up-table-processed first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video.
For the image pre-processing, gamma correction, and 3D look-up table processing steps, refer to the description in the embodiment corresponding to FIG. 8, which will not be repeated here.
In a possible implementation, the first HDR video includes the first static metadata and the first dynamic metadata.
In a possible implementation, the method further includes: when the second device determines that it supports processing the first static metadata, the second device decodes the second HDR video into the first image sequence and the first static metadata; and the second device encodes the first image sequence and the first static metadata to obtain a third HDR video, where the second HDR video differs from the third HDR video.
In a possible implementation, the type of the first HDR video is HDR10+ video, the type of the second HDR video is HDR10+ video, and the type of the third HDR video is HDR10 video.
In a possible implementation, S1401 includes: the first device receives an operation for opening the movie mode; in response to the operation for opening the movie mode, the first device displays a first interface, where the first interface includes a control for recording HDR video and a control for starting shooting; when the state of the control for recording HDR video is off, the first device receives an operation for turning on the control for recording HDR video; in response to the operation on the control for recording HDR video, the first device displays a second interface, where the second interface includes prompt information indicating that the 4K HDR10+ mode has been turned on; and when the state of the control for recording HDR video is on, the first device receives an operation on the control for starting shooting.
The operation for opening the movie mode may be the operation on the control 501 for enabling movie mode in the interface shown in a in FIG. 5; the first interface may be the interface shown in b in FIG. 5, the control for recording HDR video may be the 4K HDR function control 502 shown in b in FIG. 5, and the control for starting shooting may be the control 503 for starting shooting shown in b in FIG. 5; the second interface may be the interface shown in c in FIG. 5, and the prompt information indicating that the 4K HDR10+ mode has been turned on may be the prompt information 505 shown in c in FIG. 5.
In a possible implementation, the method further includes: when the state of the control for recording HDR video is on, the first device receives an operation for turning off the control for recording HDR video; in response to the operation on the control for recording HDR video, the first device displays a third interface, where the third interface includes prompt information indicating that the 4K HDR10+ mode has been turned off.
The third interface may be the interface shown in d in FIG. 5, and the prompt information indicating that the 4K HDR10+ mode has been turned off may be the prompt information 506 shown in d in FIG. 5.
In a possible implementation, the method further includes: the first device receives an operation of opening the movie mode for the first time; in response to the operation of opening the movie mode for the first time, the first device displays a fourth interface, where the fourth interface includes the control for recording HDR video and prompt information indicating that 4K HDR10+ video will be recorded after the control for recording HDR video is turned on.
The fourth interface may be the interface shown in b in FIG. 5, and the prompt information indicating that 4K HDR10+ video will be recorded may be the prompt information 504 shown in b in FIG. 5.
In a possible implementation, the first device receiving the operation for starting shooting in movie mode includes: the first device receives an operation for opening the movie mode; in response to the operation for opening the movie mode, the first device displays a fifth interface, where the fifth interface includes a control for viewing setting items corresponding to a first application and a control for starting shooting; the first device receives an operation on the control for viewing the setting items corresponding to the first application; in response to the operation on the control for viewing the setting items corresponding to the first application, the first device displays a sixth interface, where the sixth interface includes a first control for recording video with 10-bit HDR in movie mode and switching the video to 4K; and when the state of the first control is on, the first device receives an operation on the control for starting shooting.
The fifth interface may be the interface shown in a in FIG. 6; the control for viewing the setting items corresponding to the first application may be the settings control 601 shown in a in FIG. 6, and the control for starting shooting may be the control 602 for starting shooting shown in a in FIG. 6; the sixth interface may be the interface shown in b in FIG. 6, and the first control may be the movie HDR function control 603 shown in b in FIG. 6.
In a possible implementation, the method further includes: the first device receives an operation on a control for viewing function details in the first application; in response to the operation on the control for viewing function details in the first application, the first device displays a seventh interface, where the seventh interface includes the function details corresponding to the movie mode, and the function details of the movie mode indicate that 4K HDR10+ video can be recorded in movie mode.
The seventh interface may be the interface shown in c in FIG. 7, and the function details corresponding to the movie mode may be the detailed description 703 corresponding to movie mode shown in c in FIG. 7.
In a possible implementation, the method further includes: the first device receives an operation for opening a second application; in response to the operation for opening the second application, the first device displays an eighth interface, where the eighth interface includes the first HDR video and an identifier corresponding to the first HDR video, and the identifier indicates the type of the first HDR video; the first device receives an operation on the first HDR video; and in response to the operation on the first HDR video, the first device displays a ninth interface, where the ninth interface includes the identifier.
The second application may be the gallery application in the embodiments of this application; the eighth interface may be the interface shown in a in FIG. 10, and the identifier may be the identifier 1004 shown in a in FIG. 10; the ninth interface may be the interface shown in b in FIG. 10, and the identifier may be the identifier 1005 shown in b in FIG. 10.
In a possible implementation, after the second device obtains the first HDR video from the first device, the method further includes: the second device displays a tenth interface, where the tenth interface includes prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata, a control for allowing reception of the first HDR video, and a control for refusing reception of the first HDR video; the second device receives an operation on the control for allowing reception of the first HDR video; and in response to the operation on the control for allowing reception of the first HDR video, the second device displays an eleventh interface, where the eleventh interface includes prompt information indicating that the first HDR video is played based on dynamic metadata.
The tenth interface may be the interface shown in b in FIG. 11, and the prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata may be the prompt information 1103 shown in b in FIG. 11; the eleventh interface may be the interface shown in FIG. 12, and the prompt information indicating that the first HDR video is played based on dynamic metadata may be the prompt information 1201 in FIG. 12.
The methods provided in the embodiments of this application have been described above with reference to FIG. 5 to FIG. 14; the apparatus for performing the above methods provided in the embodiments of this application is described below. As shown in FIG. 15, FIG. 15 is a schematic structural diagram of a video processing apparatus according to an embodiment of this application. The video processing apparatus may be the terminal device in the embodiments of this application, or a chip or chip system within the terminal device. The video processing apparatus may be an apparatus in the first device, or an apparatus in the second device.
As shown in FIG. 15, the video processing apparatus 150 may be used in a communication device, a circuit, a hardware component, or a chip, and includes: a display unit 1501, a processing unit 1502, a communication unit 1503, and the like. The display unit 1501 is configured to support the display steps performed in the video processing method; the processing unit 1502 is configured to support the information processing steps performed by the video processing apparatus.
The processing unit 1502 and the display unit 1501 may be integrated, and the processing unit 1502 and the display unit 1501 may communicate.
In a possible implementation, the video processing apparatus may further include a storage unit 1504. The storage unit 1504 may include one or more memories, which may be devices used to store programs or data in one or more devices or circuits.
The storage unit 1504 may exist independently and be connected to the processing unit 1502 through a communication bus. The storage unit 1504 may also be integrated with the processing unit 1502.
Taking the video processing apparatus being a chip or chip system of the terminal device in the embodiments of this application as an example, the storage unit 1504 may store computer-executable instructions of the method of the terminal device, so that the processing unit 1502 performs the method of the terminal device in the above embodiments. The storage unit 1504 may be a register, a cache, a random access memory (RAM), or the like, and the storage unit 1504 may be integrated with the processing unit 1502. The storage unit 1504 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, and the storage unit 1504 may be independent of the processing unit 1502.
In a possible implementation, the video processing apparatus may further include a communication unit 1503, configured to support interaction between the video processing apparatus and other devices. For example, when the video processing apparatus is a terminal device, the communication unit 1503 may be a communication interface or an interface circuit. When the video processing apparatus is a chip or chip system within a terminal device, the communication unit 1503 may be a communication interface, for example an input/output interface, a pin, or a circuit.
The apparatus of this embodiment can correspondingly perform the steps of the above method embodiments; its implementation principles and technical effects are similar and will not be repeated here.
FIG. 16 is a schematic diagram of the hardware structure of another terminal device according to an embodiment of this application. As shown in FIG. 16, the terminal device includes a processor 1601, communication lines 1604, and at least one communication interface (communication interface 1603 is used as an example in FIG. 16 for illustration).
The processor 1601 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits for controlling the execution of the programs of the solution of this application.
The communication lines 1604 may include circuits for transferring information between the above components.
The communication interface 1603 uses any transceiver-like apparatus to communicate with other devices or communication networks, such as Ethernet or wireless local area networks (WLAN).
Possibly, the terminal device may further include a memory 1602.
The memory 1602 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, or an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compressed discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through the communication lines 1604, or may be integrated with the processor.
The memory 1602 is used to store computer-executable instructions for executing the solution of this application, and the execution is controlled by the processor 1601. The processor 1601 is configured to execute the computer-executable instructions stored in the memory 1602, thereby implementing the methods provided in the embodiments of this application.
Possibly, the computer-executable instructions in the embodiments of this application may also be called application program code, which is not specifically limited in the embodiments of this application.
In a specific implementation, as an embodiment, the processor 1601 may include one or more CPUs, for example CPU0 and CPU1 in FIG. 16.
In a specific implementation, as an embodiment, the terminal device may include multiple processors, for example the processor 1601 and the processor 1605 in FIG. 16. Each of these processors may be a single-core (single-CPU) processor or a multi-core (multi-CPU) processor. A processor here may refer to one or more devices, circuits, and/or processing cores for processing data (for example, computer program instructions).
For example, FIG. 17 is a schematic structural diagram of a chip according to an embodiment of this application. The chip 170 includes one or more (including two) processors 1720 and a communication interface 1730.
In some implementations, the memory 1740 stores the following elements: executable modules or data structures, or a subset thereof, or an extended set thereof.
In the embodiments of this application, the memory 1740 may include a read-only memory and a random access memory, and provides instructions and data to the processor 1720. Part of the memory 1740 may also include a non-volatile random access memory (NVRAM).
In the embodiments of this application, the memory 1740, the communication interface 1730, and the processor 1720 are coupled together through a bus system 1710. Besides a data bus, the bus system 1710 may also include a power bus, a control bus, a status signal bus, and so on. For ease of description, the various buses are labeled as the bus system 1710 in FIG. 17.
The methods described in the above embodiments of this application may be applied to the processor 1720 or implemented by the processor 1720. The processor 1720 may be an integrated circuit chip with signal processing capability. In implementation, the steps of the above methods may be completed by integrated logic circuits of hardware in the processor 1720 or by instructions in the form of software. The processor 1720 may be a general-purpose processor (for example, a microprocessor or a conventional processor), a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate, a transistor logic device, or a discrete hardware component, and the processor 1720 can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present invention.
The steps of the methods disclosed with reference to the embodiments of this application may be directly embodied as being completed by a hardware decoding processor, or completed by a combination of hardware and software modules in a decoding processor. The software module may be located in a storage medium mature in the art, such as a random access memory, a read-only memory, a programmable read-only memory, or an electrically erasable programmable read-only memory (EEPROM). The storage medium is located in the memory 1740, and the processor 1720 reads the information in the memory 1740 and completes the steps of the above methods in combination with its hardware.
In the above embodiments, the instructions stored in the memory for execution by the processor may be implemented in the form of a computer program product. The computer program product may be written into the memory in advance, or downloaded and installed into the memory in the form of software.
A computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the processes or functions according to the embodiments of this application are wholly or partly generated. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in a computer-readable storage medium or transmitted from one computer-readable storage medium to another; for example, the computer instructions may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center by wired (for example, coaxial cable, optical fiber, digital subscriber line (DSL)) or wireless (for example, infrared, radio, microwave) means. The computer-readable storage medium may be any available medium that a computer can store, or a data storage device such as a server or data center integrating one or more available media. For example, an available medium may include a magnetic medium (for example, a floppy disk, a hard disk, or a magnetic tape), an optical medium (for example, a digital versatile disc (DVD)), or a semiconductor medium (for example, a solid state disk (SSD)).
An embodiment of this application further provides a computer-readable storage medium. The methods described in the above embodiments may be implemented wholly or partly by software, hardware, firmware, or any combination thereof. A computer-readable medium may include computer storage media and communication media, and may also include any medium that can transfer a computer program from one place to another. A storage medium may be any target medium that can be accessed by a computer.
As a possible design, the computer-readable medium may include a compact disc read-only memory (CD-ROM), RAM, ROM, EEPROM, or other optical disc storage; the computer-readable medium may include a magnetic disk memory or another magnetic disk storage device. Moreover, any connection line may also properly be termed a computer-readable medium. For example, if software is transmitted from a website, server, or other remote source using coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies (such as infrared, radio, and microwave), then the coaxial cable, fiber-optic cable, twisted pair, DSL, or wireless technologies such as infrared, radio, and microwave are included in the definition of medium. Disks and discs as used herein include compact discs (CD), laser discs, optical discs, digital versatile discs (DVD), floppy disks, and Blu-ray discs, where disks usually reproduce data magnetically while discs reproduce data optically with lasers.
The above combinations should also be included within the scope of computer-readable media. The above are merely specific implementations of the present invention, but the protection scope of the present invention is not limited thereto. Any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present invention shall fall within the protection scope of the present invention. Therefore, the protection scope of the present invention shall be subject to the protection scope of the claims.

Claims (26)

  1. A video processing method, characterized in that it is applied to a video processing system, the video processing system comprising a first device and a second device, and the method comprises:
    the first device receives an operation for starting shooting in movie mode, wherein the movie mode is a mode for recording high dynamic range HDR video;
    in response to the operation for starting shooting, the first device obtains a first image sequence by means of a camera, the first image sequence corresponding to a first luminance scene;
    the first device encodes the first image sequence and first dynamic metadata corresponding to the first luminance scene to obtain a first HDR video, the first dynamic metadata comprising a preset brightness;
    the second device obtains the first HDR video from the first device;
    the second device adjusts the brightness of the first HDR video based on the preset brightness to obtain a second HDR video;
    the second device plays the second HDR video.
  2. The method according to claim 1, characterized in that the second device adjusting the brightness of the first HDR video based on the preset brightness to obtain a second HDR video comprises:
    the second device determines a brightness ratio, the brightness ratio being the ratio between the peak brightness of the second device and the preset brightness;
    the second device adjusts the brightness of the first HDR video based on the brightness ratio to obtain the second HDR video.
  3. The method according to claim 1 or 2, characterized in that the method further comprises:
    the first device continues to obtain a second image sequence by means of the camera, wherein the second image sequence corresponds to a second luminance scene, and the first luminance scene differs from the second luminance scene;
    the first device encoding the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video comprises: the first device encodes the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first luminance scene, and second dynamic metadata corresponding to the second luminance scene to obtain the first HDR video.
  4. The method according to claim 1, characterized in that before the first device encodes the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video, the method further comprises:
    the first device performs image pre-processing on the first image sequence to obtain a pre-processed first image sequence;
    the first device performs gamma correction on the pre-processed first image sequence to obtain a gamma-corrected first image sequence;
    the first device performs 3D look-up table processing on the gamma-corrected first image sequence to obtain a 3D-look-up-table-processed first image sequence, wherein the 3D-look-up-table-processed first image sequence comprises first static metadata corresponding to the first image sequence;
    the first device encoding the first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video comprises: the first device encodes the 3D-look-up-table-processed first image sequence and the first dynamic metadata corresponding to the first luminance scene to obtain the first HDR video.
  5. The method according to claim 4, characterized in that the first HDR video comprises the first static metadata and the first dynamic metadata.
  6. The method according to claim 5, characterized in that the method further comprises:
    when the second device determines that it supports processing the first static metadata, the second device decodes the second HDR video into the first image sequence and the first static metadata;
    the second device encodes the first image sequence and the first static metadata to obtain a third HDR video, the second HDR video differing from the third HDR video.
  7. The method according to claim 6, characterized in that the type of the first HDR video is HDR10+ video, the type of the second HDR video is HDR10+ video, and the type of the third HDR video is HDR10 video.
  8. 根据权利要求1所述的方法,其特征在于,所述第一设备接收在电影模式中开启拍摄的操作,包括:
    所述第一设备接收用于打开所述电影模式的操作;
    响应于所述打开所述电影模式的操作,所述第一设备显示第一界面;所述第一界面中包括:用于录制得到所述HDR视频的控件、以及用于开启拍摄的控件;
    在所述用于录制得到所述HDR视频的控件的状态为关闭状态时,所述第一设备接收用于开启所述用于录制得到所述HDR视频的控件的操作;
    响应于所述用于录制得到所述HDR视频的控件的操作,所述第一设备显示第二界面;所述第二界面中包括:用于指示4K HDR10+模式已开启的提示信息;
    在所述用于录制得到所述HDR视频的控件的状态为开启状态时,所述第一设备接收针对所述用于开启拍摄的控件的操作。
  9. The method according to claim 8, wherein the method further comprises:
    when the control for recording the HDR video is in the on state, receiving, by the first device, an operation of turning off the control for recording the HDR video; and
    displaying, by the first device in response to the operation on the control for recording the HDR video, a third interface, wherein the third interface comprises prompt information indicating that the 4K HDR10+ mode is disabled.
  10. The method according to claim 8, wherein the method further comprises:
    receiving, by the first device, an operation of enabling the movie mode for the first time; and
    displaying, by the first device in response to the operation of enabling the movie mode for the first time, a fourth interface, wherein the fourth interface comprises the control for recording the HDR video and prompt information indicating that a 4K HDR10+ video will be recorded after the control for recording the HDR video is turned on.
  11. The method according to any one of claims 8 to 10, wherein the receiving, by the first device, an operation of starting shooting in the movie mode comprises:
    receiving, by the first device, an operation of enabling the movie mode;
    displaying, by the first device in response to the operation of enabling the movie mode, a fifth interface, wherein the fifth interface comprises a control for viewing setting items corresponding to a first application and the control for starting shooting;
    receiving, by the first device, an operation on the control for viewing the setting items corresponding to the first application;
    displaying, by the first device in response to the operation on the control for viewing the setting items corresponding to the first application, a sixth interface, wherein the sixth interface comprises a first control for recording a video in the movie mode with 10-bit HDR and switching the video to 4K; and
    when the first control is in an on state, receiving, by the first device, an operation on the control for starting shooting.
  12. The method according to any one of claims 8 to 11, wherein the method further comprises:
    receiving, by the first device, an operation on a control for viewing function details in the first application; and
    displaying, by the first device in response to the operation on the control for viewing the function details in the first application, a seventh interface, wherein the seventh interface comprises function details corresponding to the movie mode, and the function details of the movie mode indicate that the movie mode is capable of recording 4K HDR10+ videos.
  13. The method according to any one of claims 8 to 12, wherein the method further comprises:
    receiving, by the first device, an operation of opening a second application;
    displaying, by the first device in response to the operation of opening the second application, an eighth interface, wherein the eighth interface comprises the first HDR video and an identifier corresponding to the first HDR video, and the identifier indicates the type of the first HDR video;
    receiving, by the first device, an operation on the first HDR video; and
    displaying, by the first device in response to the operation on the first HDR video, a ninth interface, wherein the ninth interface comprises the identifier.
  14. The method according to claim 1, wherein after the second device obtains the first HDR video from the first device, the method further comprises:
    displaying, by the second device, a tenth interface, wherein the tenth interface comprises prompt information indicating that the first HDR video is an HDR10+ video containing dynamic metadata, a control for allowing reception of the first HDR video, and a control for refusing reception of the first HDR video;
    receiving, by the second device, an operation on the control for allowing reception of the first HDR video; and
    displaying, by the second device in response to the operation on the control for allowing reception of the first HDR video, an eleventh interface, wherein the eleventh interface comprises prompt information indicating that the first HDR video is played based on the dynamic metadata.
  15. A video processing method, applied to a first device, wherein the method comprises:
    receiving, by the first device, an operation of starting shooting in a movie mode, wherein the movie mode is a mode for recording a high dynamic range (HDR) video;
    obtaining, by the first device in response to the operation of starting shooting, a first image sequence based on a camera, wherein the first image sequence corresponds to a first brightness scene;
    encoding, by the first device, the first image sequence and first dynamic metadata corresponding to the first brightness scene to obtain a first HDR video, wherein the first dynamic metadata comprises a preset brightness; and
    sending, by the first device, the first HDR video to a second device.
  16. The method according to claim 15, wherein the method further comprises:
    continuing to obtain, by the first device, a second image sequence based on the camera, wherein the second image sequence corresponds to a second brightness scene, and the first brightness scene is different from the second brightness scene; and
    the encoding, by the first device, the first image sequence and the first dynamic metadata corresponding to the first brightness scene to obtain a first HDR video comprises: encoding, by the first device, the first image sequence, the second image sequence, the first dynamic metadata corresponding to the first brightness scene, and second dynamic metadata corresponding to the second brightness scene to obtain the first HDR video.
  17. The method according to claim 15, wherein before the encoding, by the first device, the first image sequence and the first dynamic metadata corresponding to the first brightness scene to obtain a first HDR video, the method further comprises:
    performing, by the first device, image pre-processing on the first image sequence to obtain a pre-processed first image sequence;
    performing, by the first device, gamma correction on the pre-processed first image sequence to obtain a gamma-corrected first image sequence; and
    performing, by the first device, 3D lookup table (3D LUT) processing on the gamma-corrected first image sequence to obtain a 3D-LUT-processed first image sequence, wherein the 3D-LUT-processed first image sequence comprises first static metadata corresponding to the first image sequence; and
    the encoding, by the first device, the first image sequence and the first dynamic metadata corresponding to the first brightness scene to obtain a first HDR video comprises: encoding, by the first device, the 3D-LUT-processed first image sequence and the first dynamic metadata corresponding to the first brightness scene to obtain the first HDR video.
  18. The method according to claim 17, wherein the first HDR video comprises the first static metadata and the first dynamic metadata.
  19. A video processing method, applied to a second device, wherein the method comprises:
    obtaining, by the second device, a first HDR video from a first device, wherein the first HDR video comprises first dynamic metadata and a first image sequence, and the first dynamic metadata comprises a preset brightness;
    performing, by the second device, brightness adjustment on the first HDR video based on the preset brightness to obtain a second HDR video; and
    playing, by the second device, the second HDR video.
  20. The method according to claim 19, wherein the performing, by the second device, brightness adjustment on the first HDR video based on the preset brightness to obtain a second HDR video comprises:
    determining, by the second device, a brightness ratio, wherein the brightness ratio is a ratio between a peak brightness of the second device and the preset brightness; and
    performing, by the second device, brightness adjustment on the first HDR video based on the brightness ratio to obtain the second HDR video.
  21. The method according to claim 20, wherein the first HDR video comprises first static metadata and the first dynamic metadata.
  22. The method according to claim 21, wherein the method further comprises:
    when the second device determines that processing of the first static metadata is supported, decoding, by the second device, the second HDR video into the first image sequence and the first static metadata; and
    encoding, by the second device, the first image sequence and the first static metadata to obtain a third HDR video, wherein the second HDR video is different from the third HDR video.
  23. The method according to claim 22, wherein the type of the first HDR video is an HDR10+ video, the type of the second HDR video is the HDR10+ video, and the type of the third HDR video is an HDR10 video.
  24. A terminal device, comprising a memory, a processor, and a computer program stored in the memory and runnable on the processor, wherein when the processor executes the computer program, the terminal device is caused to perform the method according to any one of claims 1 to 14, or perform the method according to any one of claims 15 to 18, or perform the method according to any one of claims 19 to 23.
  25. A computer-readable storage medium storing a computer program, wherein when the computer program is executed by a processor, a computer is caused to perform the method according to any one of claims 1 to 14, or perform the method according to any one of claims 15 to 18, or perform the method according to any one of claims 19 to 23.
  26. A computer program product, comprising a computer program, wherein when the computer program is run, a computer is caused to perform the method according to any one of claims 1 to 14, or perform the method according to any one of claims 15 to 18, or perform the method according to any one of claims 19 to 23.
PCT/CN2023/071669 2022-02-28 2023-01-10 Video processing method and apparatus WO2023160295A1 (zh)

Priority Applications (1)

Application Number Priority Date Filing Date Title
EP23758920.5A EP4318383A1 (en) 2022-02-28 2023-01-10 Video processing method and apparatus

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210193750.X 2022-02-28
CN202210193750.XA CN115564659B (zh) 2022-02-28 2022-02-28 Video processing method and apparatus

Publications (2)

Publication Number Publication Date
WO2023160295A1 true WO2023160295A1 (zh) 2023-08-31
WO2023160295A9 WO2023160295A9 (zh) 2024-04-11

Family

ID=84736610



Also Published As

Publication number Publication date
WO2023160295A9 (zh) 2024-04-11
CN115564659B (zh) 2024-04-05
EP4318383A1 (en) 2024-02-07
CN115564659A (zh) 2023-01-03


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (ref document number: 23758920; country of ref document: EP; kind code of ref document: A1)
WWE Wipo information: entry into national phase (ref document number: 2023758920; country of ref document: EP)
ENP Entry into the national phase (ref document number: 2023758920; country of ref document: EP; effective date: 20231026)
WWE Wipo information: entry into national phase (ref document number: 18558829; country of ref document: US)