WO2022213716A1 - Image format conversion method, apparatus, device, storage medium and program product - Google Patents

Image format conversion method, apparatus, device, storage medium and program product

Info

Publication number
WO2022213716A1
WO2022213716A1 · PCT/CN2022/075034 · CN2022075034W
Authority
WO
WIPO (PCT)
Prior art keywords
dynamic range
global
image
range image
standard dynamic
Prior art date
Application number
PCT/CN2022/075034
Other languages
English (en)
French (fr)
Inventor
张琦
胡伟东
Original Assignee
北京百度网讯科技有限公司 (Beijing Baidu Netcom Science and Technology Co., Ltd.)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京百度网讯科技有限公司
Priority to JP2022555980A (published as JP2023524624A)
Priority to US17/939,401 (published as US20230011823A1)
Publication of WO2022213716A1

Classifications

    (All entries fall under G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING, in subclass G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL or G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING.)
    • G06T5/90 Dynamic range modification of images or parts thereof
    • G06T5/92 Dynamic range modification of images or parts thereof based on global image properties
    • G06T5/20 Image enhancement or restoration using local operators
    • G06T5/60 Image enhancement or restoration using machine learning, e.g. neural networks
    • G06V10/42 Global feature extraction by analysis of the whole pattern, e.g. using frequency domain transformations or autocorrelation
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G06V10/806 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level, of extracted features
    • G06V10/82 Arrangements for image or video recognition or understanding using neural networks
    • G06T2207/20081 Training; Learning
    • G06T2207/20084 Artificial neural networks [ANN]
    • G06T2207/20208 High dynamic range [HDR] image processing

Definitions

  • The present disclosure relates to the field of artificial intelligence, in particular to the technical fields of computer vision and deep learning, can be applied to intelligent-sensing ultra-high-definition scenarios, and in particular relates to an image format conversion method, apparatus, electronic device, computer-readable storage medium, and computer program product.
  • SDR: Standard Dynamic Range
  • HDR: High Dynamic Range
  • The prior art provides the following solutions for converting an image in SDR format to HDR format: reconstructing an HDR image from multiple frames of SDR images with different exposure times, reconstructing an HDR image from an SDR image based on a camera response curve, and reconstructing an HDR image based on image decomposition.
  • Embodiments of the present disclosure provide an image format conversion method, apparatus, electronic device, computer-readable storage medium, and computer program product.
  • An embodiment of the present disclosure proposes an image format conversion method, including: acquiring a standard dynamic range image to be converted; performing a convolution operation on the standard dynamic range image to obtain local features; performing a global average pooling operation on the standard dynamic range image to obtain global features; and converting the standard dynamic range image into a high dynamic range image according to the local features and the global features.
  • An embodiment of the present disclosure provides an image format conversion apparatus, including: a standard dynamic range image acquisition unit configured to acquire a standard dynamic range image to be converted; a local feature acquisition unit configured to perform a convolution operation on the standard dynamic range image to obtain local features; a global feature acquisition unit configured to perform a global average pooling operation on the standard dynamic range image to obtain global features; and a high dynamic range image conversion unit configured to convert the standard dynamic range image into a high dynamic range image according to the local features and the global features.
  • Embodiments of the present disclosure provide an electronic device, comprising: at least one processor; and a memory communicatively connected to the at least one processor; wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor so that the at least one processor can implement the image format conversion method described in any implementation of the first aspect.
  • An embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions, where the computer instructions are used to enable a computer, when they are executed, to implement the image format conversion method described in any implementation of the first aspect.
  • an embodiment of the present disclosure provides a computer program product including a computer program, which, when executed by a processor, can implement the image format conversion method described in any implementation manner of the first aspect.
  • First, a standard dynamic range image to be converted is obtained; then, a convolution operation is performed on the standard dynamic range image to obtain local features; next, a global average pooling operation is performed on the standard dynamic range image to obtain global features; finally, the standard dynamic range image is converted into a high dynamic range image according to the local features and the global features.
  • The present disclosure uses a convolution layer to extract the local features of standard dynamic range images and a global average pooling layer to extract their global features. Since the global features are obtained directly through an independent global average pooling layer, more accurate global features can be extracted, and the picture details required for the high dynamic range image can then be supplemented more accurately, thereby improving the quality of the converted high dynamic range image.
  • FIG. 1 is an exemplary system architecture in which the present disclosure may be applied
  • FIG. 2 is a flowchart of an image format conversion method provided by an embodiment of the present disclosure
  • FIG. 3 is a flowchart of another image format conversion method provided by an embodiment of the present disclosure.
  • FIG. 4 is a schematic flowchart of a model for converting a standard dynamic range image into a high dynamic range image according to an embodiment of the present disclosure
  • FIG. 5 is a schematic structural diagram of a GL-GConv Resblock provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic structural diagram of a SEBlock provided by an embodiment of the present disclosure.
  • FIG. 7 is a structural block diagram of an image format conversion apparatus provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device suitable for executing an image format conversion method according to an embodiment of the present disclosure.
  • the acquisition, storage and application of the user's personal information involved all comply with the relevant laws and regulations, take necessary confidentiality measures, and do not violate public order and good customs.
  • FIG. 1 illustrates an exemplary system architecture 100 to which embodiments of the image format conversion method, apparatus, electronic device, and computer-readable storage medium of the present disclosure may be applied.
  • the system architecture 100 may include terminal devices 101 , 102 , and 103 , a network 104 and a server 105 .
  • the network 104 is a medium used to provide a communication link between the terminal devices 101 , 102 , 103 and the server 105 .
  • the network 104 may include various connection types, such as wired, wireless communication links, or fiber optic cables, among others.
  • the user can use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages and the like.
  • Various applications for implementing information communication between the terminal devices 101, 102, 103 and the server 105 may be installed on them, such as video-on-demand applications, image/video format conversion applications, and instant messaging applications.
  • the terminal devices 101, 102, 103 and the server 105 may be hardware or software.
  • When the terminal devices 101, 102, and 103 are hardware, they can be various electronic devices with display screens, including but not limited to smartphones, tablet computers, laptop computers, and desktop computers, as well as other devices that can be used to display images, such as projection devices and display devices that include a display; when the terminal devices 101, 102, and 103 are software, they can be installed in the electronic devices listed above and implemented as multiple software programs or software modules, or as a single software program or software module, which is not specifically limited here.
  • When the server 105 is hardware, it can be implemented as a distributed server cluster composed of multiple servers, or as a single server; when the server 105 is software, it can be implemented as multiple software programs or software modules, or as a single software program or software module, which is not specifically limited here.
  • The server 105 can provide various services through various built-in applications. Taking an image format conversion application that can convert standard dynamic range images into high dynamic range images in batches as an example, when running this application the server 105 can achieve the following effects: first, obtain the standard dynamic range image to be converted from the terminal devices 101, 102, 103 through the network 104; then, perform a convolution operation on the standard dynamic range image to obtain local features; next, perform a global average pooling operation on the standard dynamic range image to obtain global features; finally, convert the standard dynamic range image into a high dynamic range image according to the local and global features.
  • The standard dynamic range image to be converted can be obtained from the terminal devices 101, 102, and 103 through the network 104, or it can be pre-stored locally in the server 105 in various ways. Therefore, when the server 105 detects that the data is already stored locally (e.g., a pending image format conversion task retained before processing starts), it can choose to obtain the data directly from local storage. In this case, the exemplary system architecture 100 may also omit the terminal devices 101, 102, 103 and the network 104.
  • The image format conversion methods provided by the subsequent embodiments of the present disclosure are generally executed by the server 105, which has stronger computing power and more computing resources, and accordingly the image format conversion apparatus is generally also provided in the server 105. At the same time, it should be pointed out that when the terminal devices 101, 102, and 103 also have computing capabilities and computing resources that meet the requirements, they can use the image format conversion applications installed on them to complete the various operations otherwise performed by the server 105, and then output the same result as the server 105.
  • When the image format conversion application judges that the terminal device on which it runs has strong computing capability and ample remaining computing resources, it can let the terminal device execute the method.
  • Performing the above computation on the terminal can appropriately reduce the computational load on the server 105.
  • the image format conversion apparatus may also be provided in the terminal devices 101 , 102 , and 103 .
  • the example system architecture 100 may also not include the server 105 and the network 104 .
  • The numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There can be any number of terminal devices, networks, and servers according to implementation needs.
  • FIG. 2 is a flowchart of an image format conversion method according to an embodiment of the present disclosure, wherein the process 200 includes the following steps:
  • Step 201: Obtain a standard dynamic range image to be converted;
  • The purpose of this step is for the execution body of the image format conversion method (for example, the server 105 shown in FIG. 1) to obtain the standard dynamic range image to be converted, that is, the SDR image whose format is to be converted.
  • the SDR image may be obtained from the SDR video through a frame extraction technique, or may be generated directly and independently according to the SDR format.
  • Step 202: Perform a convolution operation on the standard dynamic range image to obtain local features;
  • This step aims for the above execution body to extract local features from the standard dynamic range image, where the local features are obtained by performing a convolution operation on the image.
  • the convolution usually has a fixed-size convolution kernel, such as 3 ⁇ 3.
  • A 3×3 convolution is equivalent to processing the image features of 9 pixels at a time and condensing them into one output point, so the convolution operation is usually also called downsampling; and because the features it sees are only local, this step performs the convolution operation to extract local features.
  • the number of times of the convolution operation can be multiple times, and convolution kernels of different sizes can be used each time.
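  • As a hedged, illustrative sketch of why this operation yields only local features (it is not the patent's actual network, and the kernel and image values below are made up for illustration), a single 3×3 convolution in pure Python condenses each 9-pixel neighborhood into one output value:

```python
# Minimal sketch of the local-feature step: one 3x3 convolution
# ("valid" padding, stride 1) over a grayscale image, in pure Python.
# Kernel and image contents are illustrative, not from the patent.

def conv3x3(image, kernel):
    """Slide a 3x3 kernel over `image`; each output value summarizes
    only the 9 pixels under the kernel, i.e. a purely local feature."""
    h, w = len(image), len(image[0])
    out = []
    for i in range(h - 2):
        row = []
        for j in range(w - 2):
            acc = 0.0
            for di in range(3):
                for dj in range(3):
                    acc += image[i + di][j + dj] * kernel[di][dj]
            row.append(acc)
        out.append(row)
    return out

# A 3x3 averaging kernel "condenses" each 9-pixel neighborhood into one value.
avg_kernel = [[1 / 9.0] * 3 for _ in range(3)]
img = [[1, 2, 3, 4],
       [5, 6, 7, 8],
       [9, 10, 11, 12],
       [13, 14, 15, 16]]
print(conv3x3(img, avg_kernel))  # 2x2 map of local averages (~[[6, 7], [10, 11]])
```

Each output value depends only on the 3×3 window beneath the kernel, which is exactly the locality property this step exploits.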
  • Step 203: Perform a global average pooling operation on the standard dynamic range image to obtain global features;
  • This step aims for the above execution body to extract global features from the standard dynamic range image, where the global features are obtained by performing a global average pooling operation on the image.
  • Global average pooling (GAP) is a concept from machine learning. Its normal operation is to average all pixel values of a feature map to obtain a single value, and use that value to represent the corresponding feature map. Because the value integrates all the pixels of the entire feature map, it can reflect the global features as much as possible.
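  • The GAP operation described here can be sketched in a few lines; the feature values below are illustrative, not taken from the patent:

```python
# Sketch of global average pooling (GAP): each channel's feature map is
# reduced to the mean of all its pixels, so the single value reflects the
# whole map rather than any local neighborhood.

def global_average_pool(feature_maps):
    """feature_maps: list of 2D maps (one per channel) -> one value per channel."""
    pooled = []
    for fmap in feature_maps:
        total = sum(sum(row) for row in fmap)
        count = len(fmap) * len(fmap[0])
        pooled.append(total / count)
    return pooled

maps = [
    [[1.0, 3.0], [5.0, 7.0]],   # channel 0 -> mean 4.0
    [[2.0, 2.0], [2.0, 2.0]],   # channel 1 -> mean 2.0
]
print(global_average_pool(maps))  # [4.0, 2.0]
```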
  • There is no causal or data dependency between the local feature acquisition in step 202 and the global feature acquisition in step 203; they can be performed simultaneously and independently.
  • The flowchart shown in FIG. 2 merely expresses them in a simple serial manner, which does not mean that step 203 must be executed only after step 202 is completed.
  • Specifically, step 202 may use the convolution layer in a preset image format conversion model to extract the local features of the standard dynamic range image, the convolution layer including at least one convolution operation; and step 203 may use the global average pooling layer in the preset image format conversion model to extract the global features of the standard dynamic range image, the global average pooling layer including at least one global average pooling operation.
  • Step 204: Convert the standard dynamic range image into a high dynamic range image according to the local features and the global features.
  • On the basis of steps 202 and 203, the purpose of this step is for the above execution body to supplement, according to the extracted local and global features, the picture details that the standard dynamic range image lacks relative to a high dynamic range image, so that the quality of the converted high dynamic range image is better.
  • The embodiment of the present disclosure provides an image format conversion method, which uses a convolution layer to extract the local features of a standard dynamic range image and a global average pooling layer to extract its global features. Since the global features are obtained directly through an independent global average pooling layer, more accurate global features can be extracted, and the picture details required for the high dynamic range image can then be supplemented more accurately, thereby improving the quality of the converted high dynamic range image.
  • FIG. 3 is a flowchart of another image format conversion method provided by an embodiment of the present disclosure, wherein the process 300 includes the following steps:
  • Step 301: Obtain a standard dynamic range image to be converted;
  • Step 302: Perform a convolution operation on the standard dynamic range image to obtain local features;
  • Step 303: Perform at least two global average pooling operations of different sizes on the standard dynamic range image;
  • Unlike the previous embodiment, this embodiment performs at least two global average pooling operations of different sizes on the standard dynamic range image. Taking two sizes as an example, the global pooling operation performed at the first size finally represents the pixel features of the entire feature map as a [1, 1] matrix, while the global pooling operation performed at the second size represents them as a [3, 3] matrix; that is, different sizes are used to obtain global features of different granularities.
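  • The two pooling sizes can be illustrated with an adaptive average pooling routine that reduces one feature map to a [1, 1] and a [3, 3] summary. The binning convention below follows common adaptive-pooling practice, and the input values are illustrative; the patent does not specify these implementation details:

```python
import math

# Sketch of the two-size global pooling of step 303: adaptively average-pool
# one feature map to a 1x1 and a 3x3 summary. Bin boundaries follow the usual
# adaptive-average-pooling convention (floor/ceil of scaled indices).

def adaptive_avg_pool(fmap, out_h, out_w):
    h, w = len(fmap), len(fmap[0])
    out = []
    for i in range(out_h):
        r0, r1 = (i * h) // out_h, math.ceil((i + 1) * h / out_h)
        row = []
        for j in range(out_w):
            c0, c1 = (j * w) // out_w, math.ceil((j + 1) * w / out_w)
            cells = [fmap[r][c] for r in range(r0, r1) for c in range(c0, c1)]
            row.append(sum(cells) / len(cells))
        out.append(row)
    return out

fmap = [[float(r * 6 + c) for c in range(6)] for r in range(6)]
coarse = adaptive_avg_pool(fmap, 1, 1)  # [1,1]: one value for the whole map
fine = adaptive_avg_pool(fmap, 3, 3)    # [3,3]: coarser spatial summary
print(coarse)  # [[17.5]]
print(fine)
```

The [1, 1] output summarizes the entire map with one number, while the [3, 3] output keeps a rough spatial layout, matching the "different granularities" idea above.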
  • Step 304: Perform a non-local operation on the output of the large-size global average pooling operation;
  • On the basis of step 303, the purpose of this step is for the above execution body to perform a non-local operation on the output of the large-size global average pooling operation, where "large-size" means that the output size of the global average pooling operation is greater than 1 × 1.
  • a non-local operation is an operation that is different from a local operation.
  • When performing a 3 × 3 convolution (conv) operation with stride 1, any output position can only see a 3 × 3 neighborhood; that is, its output depends only on that 3 × 3 neighborhood.
  • The receptive field of this conv is 3, so it is called a local operation.
  • the non-local operation expects that for any output position, its output can take all positions (the entire input) into account.
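  • A toy, scalar variant of such a non-local operation is sketched below, loosely modeled on the embedded-Gaussian form; the patent does not specify the exact formulation, so treat this as an assumption. Every output position aggregates all input positions with softmax weights:

```python
import math

# Toy non-local operation on a 1-channel feature map: every output position
# aggregates ALL input positions, weighted by a softmax over pairwise
# products (a scalar stand-in for the embedded-Gaussian non-local block).

def non_local(fmap):
    flat = [v for row in fmap for v in row]
    n, w = len(flat), len(fmap[0])
    out_flat = []
    for i in range(n):
        scores = [flat[i] * flat[j] for j in range(n)]
        m = max(scores)                       # subtract max for stability
        weights = [math.exp(s - m) for s in scores]
        z = sum(weights)
        out_flat.append(sum(wt * flat[j] for j, wt in enumerate(weights)) / z)
    return [out_flat[k:k + w] for k in range(0, n, w)]

fmap = [[0.0, 1.0], [2.0, 3.0]]
out = non_local(fmap)
# Unlike a stride-1 3x3 conv, out[0][0] already depends on every input position.
print(out)
```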
  • Step 305: Fuse the local features and the global features to obtain fused features;
  • Step 306: Use the channel self-attention mechanism to determine the attention of different channels, and weight the fused features output by the corresponding channels according to each channel's attention to obtain weighted features;
  • This step aims for the above execution body to determine the attention of different channels in the neural network through a channel self-attention mechanism, so that the fused features output by each channel are weighted according to that channel's attention, yielding the weighted features. By introducing the channel self-attention mechanism, the fused features output by different channels can be integrated better.
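  • The channel weighting of step 306 can be sketched in an SE-style form. Note that the gate below is a bare sigmoid on the squeezed value, whereas a real SEBlock uses two learned fully connected layers; all numbers here are illustrative assumptions:

```python
import math

# Sketch of SE-style channel attention: squeeze each channel by global
# average pooling, gate the squeezed value, and rescale that channel's
# fused features by the resulting attention weight in (0, 1).

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def se_weight(channels):
    """channels: list of 2D feature maps -> same maps scaled per channel."""
    weighted = []
    for fmap in channels:
        squeeze = sum(sum(row) for row in fmap) / (len(fmap) * len(fmap[0]))
        attn = sigmoid(squeeze)            # per-channel attention weight
        weighted.append([[v * attn for v in row] for row in fmap])
    return weighted

fused = [
    [[1.0, 1.0], [1.0, 1.0]],   # low-activation channel
    [[4.0, 4.0], [4.0, 4.0]],   # high-activation channel gets a larger weight
]
out = se_weight(fused)
print(out[0][0][0], out[1][0][0])
```

The stronger channel is scaled by a weight closer to 1, so its fused features dominate the combination, which is the intended effect of the attention weighting.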
  • Step 307: Convert the standard dynamic range image into a high dynamic range image based on the weighted features.
  • This embodiment provides a preferred global feature extraction scheme through steps 303 to 304: in addition to performing at least two global average pooling operations of different sizes in step 303, a non-local operation is additionally applied to the output of the larger-size global average pooling operation to further optimize the global features. A channel self-attention mechanism is also introduced through steps 305 to 307, so that the fused features output by different channels can be weighted according to their influence, thereby improving the quality of the final converted high dynamic range image.
  • It should be noted that the improvement of step 304 does not have to coexist with the multi-size pooling of step 303, and steps 305 to 307 do not have to be executed only when step 303, step 304, or their combination is executed; each improvement can be combined independently with the embodiment shown in flow 200 to form a different embodiment. This embodiment merely exists as a preferred embodiment that contains multiple preferred implementations at the same time.
  • Please refer to FIG. 4 to FIG. 6.
  • In this example, an SDR image in the BT.709 color gamut with 8-bit YUV is converted into an HDR image in the BT.2020 color gamut with 10-bit YUV by means of the image format conversion model.
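  • For orientation only: the container difference between the two formats can be illustrated by naive code-value rescaling from 8-bit to 10-bit. Such rescaling adds no real dynamic range or gamut, which is precisely why a learned conversion model is needed; the function below is a hypothetical illustration, not part of the patent's method (gamut mapping from BT.709 to BT.2020 is likewise omitted):

```python
# Illustrative only: rescale an 8-bit code value [0, 255] into a 10-bit
# container [0, 1023]. This preserves the signal exactly as it is; the
# missing HDR detail must come from the learned model, not this rescaling.

def rescale_8bit_to_10bit(value_8bit):
    return round(value_8bit * 1023 / 255)

print(rescale_8bit_to_10bit(0))    # 0
print(rescale_8bit_to_10bit(255))  # 1023
print(rescale_8bit_to_10bit(128))  # 514
```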
  • The leftmost side of FIG. 4 is the SDR image to be converted. It can be seen that there are multiple convolution modules for performing convolution operations, and each convolution module operates on the result output by the previous convolution module; that is, the convolution model is additive and progressive.
  • The GL-G convolution residual block is an improvement on the standard convolutional residual block in a conventional residual network.
  • the internal structure of the GL-G convolution residual block can be seen in the schematic diagram of the structure shown in Figure 5.
  • The core of the structure shown in FIG. 5 is a three-branch structure: besides the convolution branch at the bottom, the input data is split into two global average pooling (GAP) branches of size 1 and size 3, respectively.
  • On the size-3 branch, a non-local operation is added to further optimize the global features.
  • The subsequent Expand step extends the condensed global features back to the same size as the input data.
  • Finally, the output is obtained through a convolution operation and a ReLU activation function.
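  • A drastically simplified, pure-Python sketch of this pool-expand-fuse-activate pattern is given below. Only the size-1 global branch is modeled (the size-3 branch, the non-local operation, and the learned convolutions are omitted), and the local branch is an identity stand-in; everything here is an illustrative assumption rather than the block's real implementation:

```python
# Simplified sketch of the GL-G residual block idea in FIG. 5: a local
# branch plus a global branch whose pooled summary is Expanded back to
# the input size, fused with the local features, then passed through ReLU.

def gl_g_block_sketch(fmap):
    h, w = len(fmap), len(fmap[0])
    # Global branch (size 1): GAP to a single value, then Expand to h x w.
    gap1 = sum(sum(row) for row in fmap) / (h * w)
    expanded = [[gap1] * w for _ in range(h)]
    # Local branch: identity stand-in for the learned convolution branch.
    local = fmap
    # Fuse branches by addition and apply the ReLU activation.
    fused = [[local[i][j] + expanded[i][j] for j in range(w)] for i in range(h)]
    return [[max(0.0, v) for v in row] for row in fused]

fmap = [[-3.0, 1.0], [1.0, 1.0]]
print(gl_g_block_sketch(fmap))  # global mean is 0.0 here, so this is ReLU(fmap)
```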
  • FIG. 4 also shows the subsequent processing of the GL-G convolution residual block's output: it passes in turn through a GL-G convolution operation, a ReLU activation function, another GL-G convolution operation, and the SEBlock module.
  • The SEBlock module is the modular embodiment of the channel self-attention mechanism described above. Since every level has such a channel self-attention module, each module transmits the attention determined for the current channel to the previous layer, which guides the fusion of data between different channels.
  • the model design based on the single-branch network shown in Figure 4 also makes the overall model performance better.
  • In tests, the SDR-to-HDR conversion of a 1080p image can be completed within 0.3 s, and the single-branch network supports training with large patch sizes (1080p images can be input directly), which is more conducive to capturing and learning global features.
  • By contrast, a traditional multi-branch network, due to its complexity, needs to slice the input image (for example, slicing a 1080p image into 36 images of 160×160), which leads to high time consumption.
  • the present disclosure provides an embodiment of an image format conversion apparatus.
  • the apparatus embodiment corresponds to the method embodiment shown in FIG. 2 .
  • the image format conversion apparatus 700 in this embodiment may include: a standard dynamic range image acquisition unit 701 , a local feature acquisition unit 702 , a global feature acquisition unit 703 , and a high dynamic range image conversion unit 704 .
  • the standard dynamic range image obtaining unit 701 is configured to obtain the standard dynamic range image to be converted;
  • the local feature obtaining unit 702 is configured to perform a convolution operation on the standard dynamic range image to obtain local features;
  • the global feature obtaining unit 703 is configured to perform a global average pooling operation on the standard dynamic range image to obtain global features;
  • the high dynamic range image conversion unit 704 is configured to convert the standard dynamic range image into a high dynamic range image according to local features and global features.
  • For the specific processing of the standard dynamic range image acquisition unit 701, the local feature acquisition unit 702, the global feature acquisition unit 703, and the high dynamic range image conversion unit 704, and the technical effects they bring, reference may be made to the descriptions of steps 201 to 204 in the embodiment corresponding to FIG. 2, which will not be repeated here.
  • the global feature acquisition unit 703 may be further configured to:
  • the image format conversion apparatus 700 may further include:
  • the optimization operation unit is configured to perform a non-local operation on the output after the large-size global average pooling operation; wherein, the large-size average pooling operation means that the size of the global average pooling operation is greater than 1 ⁇ 1.
  • the high dynamic range image conversion unit 704 may be further configured to:
  • the local feature acquisition unit 702 may be further configured to:
  • the global feature acquisition unit 703 may be further configured to:
  • the global average pooling layer in the preset image format conversion model is used to extract the global features of the standard dynamic range image, and the global average pooling layer includes at least one global average pooling operation.
  • the image format conversion apparatus 700 may further include:
  • a video generation unit configured to generate a high dynamic range video from the continuous high dynamic range images.
  • This embodiment exists as an apparatus embodiment corresponding to the above method embodiment.
  • The embodiment of the present disclosure provides an image format conversion apparatus, which uses a convolution layer to extract the local features of a standard dynamic range image and a global average pooling layer to extract its global features. Since the global features are obtained directly through an independent global average pooling layer, more accurate global features can be extracted, and the picture details required for the high dynamic range image can then be supplemented more accurately, thereby improving the quality of the converted high dynamic range image.
  • the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
  • FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure.
  • Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers.
  • Electronic devices may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing apparatuses.
  • the electronic device may also be a projection device capable of displaying images, or a display device including a display.
  • the components shown herein, their connections and relationships, and their functions are by way of example only, and are not intended to limit implementations of the disclosure described and/or claimed herein.
  • the device 800 includes a computing unit 801, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or a computer program loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the device 800.
  • the computing unit 801, the ROM 802, and the RAM 803 are connected to each other through a bus 804.
  • An input/output (I/O) interface 805 is also connected to the bus 804.
  • Various components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard or a mouse; an output unit 807, such as various types of displays and speakers; a storage unit 808, such as a magnetic disk or an optical disk; and a communication unit 809, such as a network card, a modem, or a wireless communication transceiver.
  • the communication unit 809 allows the device 800 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
  • The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, etc.
  • the computing unit 801 executes the various methods and processes described above, such as an image format conversion method.
  • For example, in some embodiments, the image format conversion method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808.
  • In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809.
  • When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the image format conversion method described above may be performed.
  • Alternatively, in other embodiments, the computing unit 801 may be configured to perform the image format conversion method by any other suitable means (e.g., by means of firmware).
  • Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof.
  • These various embodiments may include being implemented in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device.
  • Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented.
  • The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package, or entirely on the remote machine or server.
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with the instruction execution system, apparatus or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • More specific examples of machine-readable storage media include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
  • To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing device (e.g., a mouse or trackball) through which the user can provide input to the computer.
  • Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user can be received in any form (including acoustic input, speech input, or tactile input).
  • The systems and techniques described herein may be implemented in a computing system that includes a back-end component (e.g., as a data server), or a computing system that includes a middleware component (e.g., an application server), or a computing system that includes a front-end component (e.g., a user computer having a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system that includes any combination of such back-end, middleware, or front-end components.
  • the components of the system may be interconnected by any form or medium of digital data communication (eg, a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), and the Internet.
  • a computer system can include clients and servers. Clients and servers are generally remote from each other and usually interact through a communication network. The relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other.
  • The server can be a cloud server, also known as a cloud computing server or cloud host; it is a host product in the cloud computing service system that overcomes the defects of difficult management and weak service scalability existing in traditional physical host and virtual private server (VPS, Virtual Private Server) services.
  • The technical solutions provided by the embodiments of the present disclosure use a convolution layer to extract the local features of standard dynamic range images and a global average pooling layer to extract their global features. Since the global features are obtained directly from the standard dynamic range image through an independent global average pooling layer, more accurate global features can be extracted; the image details required by the high dynamic range image are then supplemented more accurately, thereby improving the quality of the converted high dynamic range image.


Abstract

The present disclosure provides an image format conversion method, apparatus, electronic device, computer-readable storage medium, and computer program product, relating to artificial intelligence fields such as computer vision and deep learning, and applicable to intelligent ultra-high-definition scenarios. A specific embodiment of the method includes: acquiring a standard dynamic range image to be converted; performing a convolution operation on the standard dynamic range image to obtain local features; performing a global average pooling operation on the standard dynamic range image to obtain global features; and converting the standard dynamic range image into a high dynamic range image according to the local features and the global features. When performing format conversion, this embodiment uses a global average pooling layer to extract global features directly from the standard dynamic range image, which improves the accuracy of the obtained global features and makes the quality of the high dynamic range image converted on this basis better.

Description

Image format conversion method, apparatus, device, storage medium, and program product
Cross-reference
This patent application claims priority to Chinese patent application No. 202110372421.7, filed on April 7, 2021, entitled "Image format conversion method, apparatus, device, storage medium, and program product", the entire content of which is incorporated herein by reference.
Technical Field
The present disclosure relates to the field of artificial intelligence, specifically to the technical fields of computer vision and deep learning, is applicable to intelligent ultra-high-definition scenarios, and particularly relates to an image format conversion method, apparatus, electronic device, computer-readable storage medium, and computer program product.
Background
As people's pursuit of a better quality of life keeps rising, the public's requirements for the quality of the media content they watch every day are also increasing. Parallel progress in hardware devices has brought high-definition and even 4K video into millions of households.
However, the vast majority of media content currently still supports only the SDR (Standard Dynamic Range) format. Compared with the SDR format, the HDR (High Dynamic Range) format raises the data storage depth from 8 bits to 10 bits and switches the color space from BT.709 to BT.2020; these parameter improvements bring a huge and striking improvement in visual experience.
The prior art provides the following schemes for converting SDR-format images into HDR: reconstructing an HDR image from multiple SDR frames with different exposure times, reconstructing an HDR image based on the camera response curve, and reconstructing an HDR image based on image decomposition.
Summary
Embodiments of the present disclosure propose an image format conversion method, apparatus, electronic device, computer-readable storage medium, and computer program product.
In a first aspect, an embodiment of the present disclosure proposes an image format conversion method, including: acquiring a standard dynamic range image to be converted; performing a convolution operation on the standard dynamic range image to obtain local features; performing a global average pooling operation on the standard dynamic range image to obtain global features; and converting the standard dynamic range image into a high dynamic range image according to the local features and the global features.
In a second aspect, an embodiment of the present disclosure proposes an image format conversion apparatus, including: a standard dynamic range image acquisition unit configured to acquire a standard dynamic range image to be converted; a local feature acquisition unit configured to perform a convolution operation on the standard dynamic range image to obtain local features; a global feature acquisition unit configured to perform a global average pooling operation on the standard dynamic range image to obtain global features; and a high dynamic range image conversion unit configured to convert the standard dynamic range image into a high dynamic range image according to the local features and the global features.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: at least one processor; and a memory communicatively connected to the at least one processor, wherein the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to implement the image format conversion method described in any implementation of the first aspect.
In a fourth aspect, an embodiment of the present disclosure provides a non-transitory computer-readable storage medium storing computer instructions for enabling a computer to implement the image format conversion method described in any implementation of the first aspect.
In a fifth aspect, an embodiment of the present disclosure provides a computer program product including a computer program that, when executed by a processor, implements the image format conversion method described in any implementation of the first aspect.
The image format conversion method, apparatus, electronic device, computer-readable storage medium, and computer program product provided by the embodiments of the present disclosure first acquire a standard dynamic range image to be converted; then perform a convolution operation on the standard dynamic range image to obtain local features; next perform a global average pooling operation on the standard dynamic range image to obtain global features; and finally convert the standard dynamic range image into a high dynamic range image according to the local features and the global features.
Different from prior-art ways of converting a standard dynamic range image into a high dynamic range image, the present disclosure uses a convolution layer to extract the local features of the standard dynamic range image and a global average pooling layer to extract its global features. Because the global features are obtained directly from the standard dynamic range image through an independent global average pooling layer, more accurate global features can be extracted; the image details required by the high dynamic range image are then supplemented more accurately, thereby improving the quality of the converted high dynamic range image.
It should be understood that the content described in this section is not intended to identify key or important features of the embodiments of the present disclosure, nor is it used to limit the scope of the present disclosure. Other features of the present disclosure will become easy to understand from the following description.
Brief Description of the Drawings
Other features, objects, and advantages of the present disclosure will become more apparent by reading the detailed description of non-limiting embodiments made with reference to the following drawings:
FIG. 1 is an exemplary system architecture to which the present disclosure can be applied;
FIG. 2 is a flowchart of an image format conversion method provided by an embodiment of the present disclosure;
FIG. 3 is a flowchart of another image format conversion method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic model flow diagram of converting a standard dynamic range image into a high dynamic range image, provided by an embodiment of the present disclosure;
FIG. 5 is a schematic structural diagram of a GL-GConv Resblock provided by an embodiment of the present disclosure;
FIG. 6 is a schematic structural diagram of an SEBlock provided by an embodiment of the present disclosure;
FIG. 7 is a structural block diagram of an image format conversion apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device suitable for executing the image format conversion method, provided by an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below with reference to the accompanying drawings, including various details of the embodiments of the present disclosure to facilitate understanding; they should be regarded as merely exemplary. Therefore, those of ordinary skill in the art should recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present disclosure. Likewise, descriptions of well-known functions and structures are omitted in the following description for clarity and conciseness. It should be noted that the embodiments of the present disclosure and the features in the embodiments may be combined with one another without conflict.
In the technical solution of the present disclosure, the acquisition, storage, and application of the user personal information involved all comply with the provisions of relevant laws and regulations, necessary confidentiality measures have been taken, and public order and good customs are not violated.
FIG. 1 shows an exemplary system architecture 100 to which embodiments of the image format conversion method, apparatus, electronic device, and computer-readable storage medium of the present disclosure can be applied.
As shown in FIG. 1, the system architecture 100 may include terminal devices 101, 102, and 103, a network 104, and a server 105. The network 104 is the medium used to provide communication links between the terminal devices 101, 102, 103 and the server 105, and may include various connection types, such as wired or wireless communication links or fiber-optic cables.
A user may use the terminal devices 101, 102, 103 to interact with the server 105 through the network 104 to receive or send messages, etc. Various applications for information communication between the two may be installed on the terminal devices 101, 102, 103 and the server 105, such as video-on-demand applications, image/video format conversion applications, and instant messaging applications.
The terminal devices 101, 102, 103 and the server 105 may be hardware or software. When the terminal devices 101, 102, 103 are hardware, they may be various electronic devices with display screens, including but not limited to smartphones, tablet computers, laptop portable computers, and desktop computers, as well as projection devices capable of displaying images and display devices including displays; when the terminal devices 101, 102, 103 are software, they may be installed in the electronic devices listed above and may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module, which is not specifically limited here. When the server 105 is hardware, it may be implemented as a distributed server cluster composed of multiple servers or as a single server; when the server is software, it may be implemented as multiple pieces of software or software modules, or as a single piece of software or software module, which is not specifically limited here.
The server 105 can provide various services through the various applications built into it. Taking an image format conversion application that can provide a service of batch-converting standard dynamic range images into high dynamic range images as an example, the server 105 can achieve the following effects when running this application: first, acquiring the standard dynamic range image to be converted from the terminal devices 101, 102, 103 through the network 104; then, performing a convolution operation on the standard dynamic range image to obtain local features; next, performing a global average pooling operation on the standard dynamic range image to obtain global features; and finally, converting the standard dynamic range image into a high dynamic range image according to the local features and the global features.
It should be pointed out that, in addition to being acquired from the terminal devices 101, 102, 103 through the network 104, the standard dynamic range images to be converted may also be stored locally in the server 105 in advance in various ways. Therefore, when the server 105 detects that these data are already stored locally (for example, pending image format conversion tasks retained before processing begins), it may choose to acquire these data directly from local storage; in this case, the exemplary system architecture 100 may also exclude the terminal devices 101, 102, 103 and the network 104.
Since converting standard dynamic range images into high dynamic range images requires considerable computing resources and strong computing power, the image format conversion method provided by subsequent embodiments of the present disclosure is generally executed by the server 105, which has stronger computing power and more computing resources; correspondingly, the image format conversion apparatus is generally also arranged in the server 105. However, it should also be pointed out that when the terminal devices 101, 102, 103 also have computing power and computing resources that meet the requirements, they may complete, through the image format conversion application installed on them, the computations otherwise entrusted to the server 105 and output the same results as the server 105. In particular, when multiple terminal devices with different computing capabilities exist at the same time and the image format conversion application determines that the terminal device on which it resides has strong computing power and ample spare computing resources, the terminal device may be allowed to perform the above computations, thereby appropriately relieving the computing pressure on the server 105; correspondingly, the image format conversion apparatus may also be arranged in the terminal devices 101, 102, 103. In this case, the exemplary system architecture 100 may also exclude the server 105 and the network 104.
It should be understood that the numbers of terminal devices, networks, and servers in FIG. 1 are merely illustrative. There may be any number of terminal devices, networks, and servers according to implementation needs.
Please refer to FIG. 2, which is a flowchart of an image format conversion method provided by an embodiment of the present disclosure. The flow 200 includes the following steps:
Step 201: acquire a standard dynamic range image to be converted.
This step is intended to have the executing body of the image format conversion method (for example, the server 105 shown in FIG. 1) acquire the standard dynamic range image to be converted, i.e., an SDR image in the format awaiting conversion. Specifically, the SDR image may be obtained from an SDR video by frame extraction, or generated independently in the SDR format.
Step 202: perform a convolution operation on the standard dynamic range image to obtain local features.
Based on step 201, this step is intended to have the executing body extract local features from the standard dynamic range image, the local features being obtained by performing a convolution operation on the standard dynamic range image.
A convolution usually has a kernel of fixed size, such as 3×3. Taking a 3×3 kernel as an example, each convolution "condenses" the image features of 9 pixels into a single pixel, which is why convolution is often also called downsampling; and because by its nature it only looks at a local neighborhood, the present disclosure performs the convolution operation in this step to extract local features. Specifically, to make the extracted local features as accurate as possible, the convolution operation may be performed multiple times, each time with a kernel of a different size.
Step 203: perform a global average pooling operation on the standard dynamic range image to obtain global features.
Based on step 201, this step is intended to have the executing body extract global features from the standard dynamic range image, the global features being obtained by performing a global average pooling operation on the standard dynamic range image.
Global average pooling is a concept from machine learning algorithms. Its conventional operation is to sum all pixel values of a feature map and take the average, yielding a single value that represents the corresponding feature map; since this value aggregates every pixel of the entire feature map, it reflects the global features as fully as possible.
It should be noted that there is no causal or dependency relationship between the local-feature acquisition of step 202 and the global-feature acquisition of step 203; they can be executed simultaneously and independently. The flowchart in FIG. 2 merely presents them serially for simplicity and does not mean that step 203 must wait until step 202 has finished.
In addition, if the conversion takes place in an image conversion model built by machine learning, step 202 may specifically be: extracting the local features of the standard dynamic range image using a convolution layer in a preset image format conversion model, the convolution layer including at least one convolution operation; and step 203 may specifically be: extracting the global features of the standard dynamic range image using a global average pooling layer in the preset image format conversion model, the global average pooling layer including at least one global average pooling operation.
Step 204: convert the standard dynamic range image into a high dynamic range image according to the local features and the global features.
Based on steps 202 and 203, this step is intended to have the executing body use the extracted local and global features to comprehensively supplement the image details that the standard dynamic range image lacks relative to a high dynamic range image, so that the converted high dynamic range image has better quality.
Different from prior-art ways of converting a standard dynamic range image into a high dynamic range image, the embodiment of the present disclosure provides an image format conversion method that uses a convolution layer to extract the local features of the standard dynamic range image and a global average pooling layer to extract its global features. Because the global features are obtained directly from the standard dynamic range image through an independent global average pooling layer, more accurate global features can be extracted; the image details required by the high dynamic range image are then supplemented more accurately, thereby improving the quality of the converted high dynamic range image.
Please refer to FIG. 3, which is a flowchart of another image format conversion method provided by an embodiment of the present disclosure. The flow 300 includes the following steps:
Step 301: acquire a standard dynamic range image to be converted.
Step 302: perform a convolution operation on the standard dynamic range image to obtain local features.
Step 303: perform at least two global average pooling operations of different sizes on the standard dynamic range image.
On the basis of the previous embodiment, in order to make the extracted global features as effective as possible, this embodiment further performs at least two global average pooling operations of different sizes on the standard dynamic range image. Taking two sizes as an example, the global pooling operation performed at the first size finally represents the pixel features of the entire feature map as a [1, 1] matrix, while the global pooling operation performed at the second size finally represents them as a [3, 3] matrix; that is, different sizes are used to obtain global features of different granularities.
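The multi-size pooling described in step 303 can be sketched with a small NumPy helper (an illustrative assumption, not the disclosure's implementation; the bin edges follow the common adaptive-pooling convention): a 1×1 output is the single global mean, while a 3×3 output keeps regional means.

```python
import numpy as np

def adaptive_avg_pool(img, out_size):
    """Average-pool a 2-D map down to an out_size x out_size grid of regional means."""
    h, w = img.shape
    out = np.empty((out_size, out_size))
    for i in range(out_size):
        for j in range(out_size):
            # Bin [r0, r1) covers rows floor(i*h/out) .. ceil((i+1)*h/out).
            r0, r1 = (i * h) // out_size, ((i + 1) * h + out_size - 1) // out_size
            c0, c1 = (j * w) // out_size, ((j + 1) * w + out_size - 1) // out_size
            out[i, j] = img[r0:r1, c0:c1].mean()
    return out

img = np.arange(36, dtype=float).reshape(6, 6)
coarse = adaptive_avg_pool(img, 1)  # [1, 1] matrix: the single global mean
fine = adaptive_avg_pool(img, 3)    # [3, 3] matrix: means of 2x2 regions
```

The two outputs correspond to the two "sizes" in the text: the coarse one summarizes the whole map, the fine one preserves some spatial layout of the global statistics.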
Step 304: perform a non-local operation on the output of the large-size global average pooling operation.
Based on step 303, this step is intended to have the executing body perform a non-local operation on the output of the large-size global average pooling operation, where a large-size average pooling operation means that the size of the global average pooling operation is greater than 1×1.
A non-local operation is distinguished from a local operation. When a 3×3 convolution (conv) with stride = 1 is performed, any output position can only see a 3×3 neighborhood; that is, its output result is computed from that 3×3 neighborhood alone. The receptive field of this conv has size 3, and this is called a local operation. A non-local operation, by contrast, wants the output at any position to take all positions (the entire input) into account.
Here, stride is a common concept in image processing: stride = bytes occupied per pixel (i.e., bits per pixel / 8) × width; if the stride is not a multiple of 4, then stride = stride + (4 − stride mod 4).
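The row-stride formula just stated can be written as a small helper (a sketch; the 4-byte row alignment follows exactly the rule given in the text):

```python
def row_stride(width, bits_per_pixel):
    """Bytes per image row: bytes-per-pixel * width, padded up to a multiple of 4."""
    stride = (bits_per_pixel // 8) * width
    if stride % 4 != 0:
        stride += 4 - stride % 4
    return stride

# Example: a 10-pixel-wide, 24-bit RGB row occupies 30 bytes, padded to 32.
```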
In other words, by performing a non-local operation on the output of a global average pooling operation whose size is greater than 1×1, the obtained global features can be further optimized based on the characteristics of the non-local operation.
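A non-local operation of the kind described above can be sketched as spatial self-attention in NumPy (a simplified assumption: identity embeddings stand in for the learned theta/phi/g projections of a real non-local block):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def non_local(x):
    """Simplified non-local block: every output position attends to all input positions.

    x: (N, C) array of N flattened spatial positions with C channels.
    """
    attn = softmax(x @ x.T)  # (N, N) pairwise affinities between all positions
    return x + attn @ x      # aggregate over the whole input, with a residual connection

feat = np.random.default_rng(0).standard_normal((9, 4))  # e.g. a flattened 3x3 pooled map
out = non_local(feat)
```

Each output row mixes information from every position of the input, which is exactly the "entire input" property that distinguishes it from a 3×3 local convolution.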
Step 305: fuse the local features and the global features to obtain fused features.
Step 306: determine the attention of different channels using a channel self-attention mechanism, and weight the fused features output by each channel according to that channel's attention, obtaining weighted features.
Based on step 305, this step is intended to have the executing body introduce a channel self-attention mechanism to determine the attention of the different channels in the neural network, so that the fused features output by each channel can be weighted according to the channel's attention, obtaining weighted features. That is, introducing the channel self-attention mechanism allows the fused features output by the different channels to be combined better.
Step 307: convert the standard dynamic range image into a high dynamic range image based on the weighted features.
On the basis of the embodiment shown in flow 200, this embodiment provides, through steps 303-304, a preferred way of extracting global features: not only performing at least two global average pooling operations of different sizes in step 303, but also additionally performing a non-local operation on the output of the larger-size global average pooling operation to further optimize the global features. It also introduces, through steps 305-307, a channel self-attention mechanism, so that the fused features output by the different channels can be weighted according to their influence, thereby improving the quality of the finally converted high dynamic range image.
It should be understood that step 303 can stand on its own without step 304, and steps 305-307 need not be executed only after step 303, step 304, or the combination of steps 303 and 304; each can be combined separately with the embodiment shown in flow 200 to form different embodiments. This embodiment exists merely as a preferred embodiment that contains multiple preferred implementations at the same time.
To deepen understanding, the present disclosure also gives a specific implementation scheme in combination with a concrete application scenario; see FIGS. 4-6.
This embodiment specifically converts a BT.709-gamut, 8-bit YUV SDR image into a BT.2020-gamut, 10-bit YUV HDR image by means of an image format conversion model.
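For contrast with the learned conversion, the 8-bit to 10-bit container change alone can be illustrated with a naive code-value rescaling (an illustrative assumption: this only stretches the value range and supplies none of the picture detail the model is designed to add, nor any BT.709 to BT.2020 gamut mapping):

```python
def sdr8_to_hdr10_value(v8):
    """Naively map an 8-bit code value (0-255) onto a 10-bit range (0-1023).

    A pure container conversion: it adds no dynamic-range detail and
    performs no color-space (BT.709 -> BT.2020) transformation.
    """
    return round(v8 * 1023 / 255)
```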
The structure of this image format conversion model is shown in FIG. 4:
The leftmost part of FIG. 4 is the SDR image to be converted. It can be seen that there are multiple convolution modules for performing convolution operations, and the object of the convolution operation performed by each module is the result of the convolution operation performed by the previous module; that is, the convolution model is stacked and progressive. The result of each layer's convolution operation passes through the GL-GConv Resblock module constructed by the present disclosure (which may be called the GL-G convolutional residual block for short; GL-G stands for Global-Local Gated, emphasizing that this residual block focuses on extracting and processing global features). The GL-G convolutional residual block is obtained by improving the standard convolutional residual block of a conventional residual network.
After processing by the GL-G convolutional residual block, local features and global features are obtained, which are continuously aggregated through upsampling modules and finally used to generate the HDR image.
Specifically, the internal structure of the GL-G convolutional residual block can be seen in the schematic diagram of FIG. 5, whose core is a three-branch structure: the input data passes, respectively, through the convolution branch at the bottom and through global average pooling (GAP) branches of sizes 1 and 3. After the size-3 global average pooling operation, a non-local operation is additionally inserted to further optimize the global features, and the subsequent Expand step expands the condensed global features back to the same size as the input data. The output is finally obtained through a convolution operation and a ReLU activation function.
In addition, the bottom of FIG. 4 shows the subsequent processing of the GL-G convolutional residual block's output: a GL-G convolution operation, a ReLU activation function, another GL-G convolution operation, and the SEBlock module, in sequence. The SEBlock module is the modularized form of the channel self-attention mechanism described above; since every level has this channel self-attention module, the module passes the determined attention of the current channel up to the previous level, thereby guiding the fusion of data between different channels.
The specific structure of the SEBlock module can be seen in the schematic diagram of FIG. 6, where Global pooling denotes the global pooling operation, FC (Fully Connected layer) is a fully connected layer, and ReLU and Sigmoid are two different activation functions, ReLU being suited to shallow neural networks and Sigmoid to deep neural networks.
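The SEBlock structure just described (global pooling, then FC layers with ReLU and sigmoid, then channel reweighting) can be sketched in NumPy as follows (an illustrative assumption with random weights standing in for the trained FC parameters):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def se_block(x, w1, w2):
    """Squeeze-and-excitation channel attention.

    x:  (C, H, W) feature map.
    w1: (C//r, C) first FC layer (squeeze), w2: (C, C//r) second FC layer (excite),
    where r is the channel-reduction ratio.
    """
    squeeze = x.mean(axis=(1, 2))            # global pooling: one value per channel
    hidden = np.maximum(0.0, w1 @ squeeze)   # FC + ReLU
    attn = sigmoid(w2 @ hidden)              # FC + sigmoid -> per-channel weight in (0, 1)
    return x * attn[:, None, None]           # reweight each channel by its attention

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))                              # 8 channels, 4x4 map
y = se_block(x, rng.standard_normal((2, 8)), rng.standard_normal((8, 2)))
```

Because the sigmoid output lies in (0, 1), each channel of the output is a damped copy of the input channel, scaled by how much attention that channel receives.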
Meanwhile, the single-branch network design of the model shown in FIG. 4 also gives the overall model better performance: testing shows that SDR-to-HDR conversion of a 1080p image can be completed within 0.3 s, and the single-branch network supports training with a large patch size (a 1080p image can be input directly), which is more conducive to capturing and learning global features. Traditional multi-branch networks, being too complex, require the input image to be sliced and fed in pieces (for example, cutting a 1080p image into 36 images of 160×160), resulting in excessive processing time.
With further reference to FIG. 7, as an implementation of the methods shown in the above figures, the present disclosure provides an embodiment of an image format conversion apparatus. This apparatus embodiment corresponds to the method embodiment shown in FIG. 2, and the apparatus can be applied to various electronic devices.
As shown in FIG. 7, the image format conversion apparatus 700 of this embodiment may include: a standard dynamic range image acquisition unit 701, a local feature acquisition unit 702, a global feature acquisition unit 703, and a high dynamic range image conversion unit 704. The standard dynamic range image acquisition unit 701 is configured to acquire a standard dynamic range image to be converted; the local feature acquisition unit 702 is configured to perform a convolution operation on the standard dynamic range image to obtain local features; the global feature acquisition unit 703 is configured to perform a global average pooling operation on the standard dynamic range image to obtain global features; and the high dynamic range image conversion unit 704 is configured to convert the standard dynamic range image into a high dynamic range image according to the local features and the global features.
In this embodiment, for the specific processing of the standard dynamic range image acquisition unit 701, the local feature acquisition unit 702, the global feature acquisition unit 703, and the high dynamic range image conversion unit 704 in the image format conversion apparatus 700, and the technical effects they bring, reference may be made to the descriptions of steps 201-204 in the embodiment corresponding to FIG. 2, which are not repeated here.
In some optional implementations of this embodiment, the global feature acquisition unit 703 may be further configured to:
perform at least two global average pooling operations of different sizes on the standard dynamic range image.
In some optional implementations of this embodiment, the image format conversion apparatus 700 may further include:
an optimization operation unit configured to perform a non-local operation on the output of the large-size global average pooling operation, where a large-size average pooling operation means that the size of the global average pooling operation is greater than 1×1.
In some optional implementations of this embodiment, the high dynamic range image conversion unit 704 may be further configured to:
fuse the local features and the global features to obtain fused features;
determine the attention of different channels using a channel self-attention mechanism, and weight the fused features output by each channel based on the attention, obtaining weighted features;
convert the standard dynamic range image into a high dynamic range image based on the weighted features.
In some optional implementations of this embodiment, the local feature acquisition unit 702 may be further configured to:
extract the local features of the standard dynamic range image using a convolution layer in a preset image format conversion model, the convolution layer including at least one convolution operation; and
the global feature acquisition unit 703 may be further configured to:
extract the global features of the standard dynamic range image using a global average pooling layer in the preset image format conversion model, the global average pooling layer including at least one global average pooling operation.
In some optional implementations of this embodiment, when the standard dynamic range image is extracted from a standard dynamic range video, the image format conversion apparatus 700 may further include:
a video generation unit configured to generate a high dynamic range video from the consecutive high dynamic range images.
This embodiment exists as an apparatus embodiment corresponding to the above method embodiment.
Different from prior-art ways of converting a standard dynamic range image into a high dynamic range image, the embodiment of the present disclosure provides an image format conversion apparatus that uses a convolution layer to extract the local features of the standard dynamic range image and a global average pooling layer to extract its global features. Because the global features are obtained directly from the standard dynamic range image through an independent global average pooling layer, more accurate global features can be extracted; the image details required by the high dynamic range image are then supplemented more accurately, thereby improving the quality of the converted high dynamic range image.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium, and a computer program product.
FIG. 8 shows a schematic block diagram of an example electronic device 800 that may be used to implement embodiments of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers, and other suitable computers. Electronic devices may also represent various forms of mobile apparatuses, such as personal digital assistants, cellular phones, smart phones, wearable devices, and other similar computing apparatuses. The electronic device may also be a projection device capable of displaying images or a display device including a display. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the present disclosure described and/or claimed herein.
As shown in FIG. 8, the device 800 includes a computing unit 801, which can perform various appropriate actions and processes according to a computer program stored in a read-only memory (ROM) 802 or a computer program loaded from a storage unit 808 into a random access memory (RAM) 803. The RAM 803 can also store various programs and data required for the operation of the device 800. The computing unit 801, the ROM 802, and the RAM 803 are connected to one another through a bus 804. An input/output (I/O) interface 805 is also connected to the bus 804.
Multiple components in the device 800 are connected to the I/O interface 805, including: an input unit 806, such as a keyboard or a mouse; an output unit 807, such as various types of displays and speakers; a storage unit 808, such as a magnetic disk or an optical disk; and a communication unit 809, such as a network card, a modem, or a wireless communication transceiver. The communication unit 809 allows the device 800 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.
The computing unit 801 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 801 include, but are not limited to, central processing units (CPUs), graphics processing units (GPUs), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, digital signal processors (DSPs), and any suitable processors, controllers, microcontrollers, etc. The computing unit 801 executes the methods and processes described above, such as the image format conversion method. For example, in some embodiments, the image format conversion method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as the storage unit 808. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 800 via the ROM 802 and/or the communication unit 809. When the computer program is loaded into the RAM 803 and executed by the computing unit 801, one or more steps of the image format conversion method described above may be performed. Alternatively, in other embodiments, the computing unit 801 may be configured to perform the image format conversion method by any other suitable means (for example, by means of firmware).
Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuit systems, integrated circuit systems, field programmable gate arrays (FPGAs), application-specific integrated circuits (ASICs), application-specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various implementations may include being implemented in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor that can receive data and instructions from a storage system, at least one input apparatus, and at least one output apparatus, and transmit data and instructions to the storage system, the at least one input apparatus, and the at least one output apparatus.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. The program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on a machine, partly on a machine, partly on a machine and partly on a remote machine as a stand-alone software package, or entirely on a remote machine or server.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media include electrical connections based on one or more wires, portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fibers, portable compact disk read-only memory (CD-ROM), optical storage devices, magnetic storage devices, or any suitable combination of the foregoing.
To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having: a display apparatus (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user; and a keyboard and pointing apparatus (e.g., a mouse or trackball) through which the user can provide input to the computer. Other kinds of apparatuses may also be used to provide interaction with the user; for example, the feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback), and input from the user may be received in any form (including acoustic input, speech input, or tactile input).
The systems and techniques described herein may be implemented in a computing system including a back-end component (e.g., as a data server), or a computing system including a middleware component (e.g., an application server), or a computing system including a front-end component (e.g., a user computer with a graphical user interface or web browser through which a user can interact with implementations of the systems and techniques described herein), or a computing system including any combination of such back-end, middleware, or front-end components. The components of the system may be interconnected by digital data communication of any form or medium (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
A computer system may include a client and a server. The client and the server are generally remote from each other and usually interact through a communication network. The client-server relationship is produced by computer programs running on the respective computers and having a client-server relationship with each other. The server may be a cloud server, also known as a cloud computing server or cloud host; it is a host product in the cloud computing service system that overcomes the defects of difficult management and weak service scalability existing in traditional physical host and virtual private server (VPS, Virtual Private Server) services.
Different from prior-art ways of converting a standard dynamic range image into a high dynamic range image, the technical solution provided by the embodiments of the present disclosure uses a convolution layer to extract the local features of the standard dynamic range image and a global average pooling layer to extract its global features. Because the global features are obtained directly from the standard dynamic range image through an independent global average pooling layer, more accurate global features can be extracted; the image details required by the high dynamic range image are then supplemented more accurately, thereby improving the quality of the converted high dynamic range image.
It should be understood that steps may be reordered, added, or deleted using the various forms of flows shown above. For example, the steps described in the present disclosure may be executed in parallel, sequentially, or in a different order, as long as the desired results of the technical solutions disclosed in the present disclosure can be achieved; no limitation is imposed herein.
The above specific implementations do not constitute a limitation on the protection scope of the present disclosure. Those skilled in the art should understand that various modifications, combinations, sub-combinations, and substitutions can be made according to design requirements and other factors. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present disclosure shall be included within the protection scope of the present disclosure.

Claims (15)

  1. An image format conversion method, comprising:
    acquiring a standard dynamic range image to be converted;
    performing a convolution operation on the standard dynamic range image to obtain local features;
    performing a global average pooling operation on the standard dynamic range image to obtain global features; and
    converting the standard dynamic range image into a high dynamic range image according to the local features and the global features.
  2. The method according to claim 1, wherein performing the global average pooling operation on the standard dynamic range image comprises:
    performing at least two global average pooling operations of different sizes on the standard dynamic range image.
  3. The method according to claim 2, further comprising:
    performing a non-local operation on the output of the large-size global average pooling operation, wherein the large-size average pooling operation means that the size of the global average pooling operation is greater than 1×1.
  4. The method according to any one of claims 1-3, wherein converting the standard dynamic range image into the high dynamic range image according to the local features and the global features comprises:
    fusing the local features and the global features to obtain fused features;
    determining the attention of different channels using a channel self-attention mechanism, and weighting the fused features output by each channel according to the attention of that channel, to obtain weighted features; and
    converting the standard dynamic range image into the high dynamic range image based on the weighted features.
  5. The method according to any one of claims 1-4, wherein performing the convolution operation on the standard dynamic range image to obtain the local features comprises:
    extracting the local features of the standard dynamic range image using a convolution layer in a preset image format conversion model, the convolution layer including at least one convolution operation; and
    performing the global average pooling operation on the standard dynamic range image to obtain the global features comprises:
    extracting the global features of the standard dynamic range image using a global average pooling layer in the preset image format conversion model, the global average pooling layer including at least one global average pooling operation.
  6. The method according to any one of claims 1-5, wherein, when the standard dynamic range image is extracted from a standard dynamic range video, the method further comprises:
    generating a high dynamic range video from the consecutive high dynamic range images.
  7. An image format conversion apparatus, comprising:
    a standard dynamic range image acquisition unit configured to acquire a standard dynamic range image to be converted;
    a local feature acquisition unit configured to perform a convolution operation on the standard dynamic range image to obtain local features;
    a global feature acquisition unit configured to perform a global average pooling operation on the standard dynamic range image to obtain global features; and
    a high dynamic range image conversion unit configured to convert the standard dynamic range image into a high dynamic range image according to the local features and the global features.
  8. The apparatus according to claim 7, wherein the global feature acquisition unit is further configured to:
    perform at least two global average pooling operations of different sizes on the standard dynamic range image.
  9. The apparatus according to claim 8, further comprising:
    an optimization operation unit configured to perform a non-local operation on the output of the large-size global average pooling operation, wherein the large-size average pooling operation means that the size of the global average pooling operation is greater than 1×1.
  10. The apparatus according to any one of claims 7-9, wherein the high dynamic range image conversion unit is further configured to:
    fuse the local features and the global features to obtain fused features;
    determine the attention of different channels using a channel self-attention mechanism, and weight the fused features output by each channel based on the attention, to obtain weighted features; and
    convert the standard dynamic range image into the high dynamic range image based on the weighted features.
  11. The apparatus according to any one of claims 7-10, wherein the local feature acquisition unit is further configured to:
    extract the local features of the standard dynamic range image using a convolution layer in a preset image format conversion model, the convolution layer including at least one convolution operation; and
    the global feature acquisition unit is further configured to:
    extract the global features of the standard dynamic range image using a global average pooling layer in the preset image format conversion model, the global average pooling layer including at least one global average pooling operation.
  12. The apparatus according to any one of claims 7-11, wherein, when the standard dynamic range image is extracted from a standard dynamic range video, the apparatus further comprises:
    a video generation unit configured to generate a high dynamic range video from the consecutive high dynamic range images.
  13. An electronic device, comprising:
    at least one processor; and
    a memory communicatively connected to the at least one processor, wherein
    the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the image format conversion method according to any one of claims 1-6.
  14. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are used to cause a computer to perform the image format conversion method according to any one of claims 1-6.
  15. A computer program product comprising a computer program, wherein the computer program, when executed by a processor, implements the image format conversion method according to any one of claims 1-6.
PCT/CN2022/075034 2021-04-07 2022-01-29 图像格式转换方法、装置、设备、存储介质及程序产品 WO2022213716A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022555980A JP2023524624A (ja) 2021-04-07 2022-01-29 画像フォーマットを変換する方法、装置、電子機器、記憶媒体およびプログラム
US17/939,401 US20230011823A1 (en) 2021-04-07 2022-09-07 Method for converting image format, device, and storage medium

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110372421.7A CN113487524B (zh) 2021-04-07 2021-04-07 图像格式转换方法、装置、设备、存储介质及程序产品
CN202110372421.7 2021-04-07

Related Child Applications (1)

Application Number Title Priority Date Filing Date
US17/939,401 Continuation US20230011823A1 (en) 2021-04-07 2022-09-07 Method for converting image format, device, and storage medium

Publications (1)

Publication Number Publication Date
WO2022213716A1 true WO2022213716A1 (zh) 2022-10-13

Family

ID=77932680

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/075034 WO2022213716A1 (zh) 2021-04-07 2022-01-29 图像格式转换方法、装置、设备、存储介质及程序产品

Country Status (4)

Country Link
US (1) US20230011823A1 (zh)
JP (1) JP2023524624A (zh)
CN (1) CN113487524B (zh)
WO (1) WO2022213716A1 (zh)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487524B (zh) * 2021-04-07 2023-05-12 北京百度网讯科技有限公司 图像格式转换方法、装置、设备、存储介质及程序产品
CN114358136B (zh) * 2021-12-10 2024-05-17 鹏城实验室 一种图像数据处理方法、装置、智能终端及存储介质

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190066346A1 (en) * 2017-08-30 2019-02-28 Korea Advanced Institute Of Science And Technology Apparatus and method for reconstructing image using extended neural network
CN111683269A (zh) * 2020-06-12 2020-09-18 腾讯科技(深圳)有限公司 视频处理方法、装置、计算机设备和存储介质
CN111709900A (zh) * 2019-10-21 2020-09-25 上海大学 一种基于全局特征指导的高动态范围图像重建方法
CN112257759A (zh) * 2020-09-27 2021-01-22 华为技术有限公司 一种图像处理的方法以及装置
CN113487524A (zh) * 2021-04-07 2021-10-08 北京百度网讯科技有限公司 图像格式转换方法、装置、设备、存储介质及程序产品

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109101975B (zh) * 2018-08-20 2022-01-25 电子科技大学 基于全卷积神经网络的图像语义分割方法
CN111814633B (zh) * 2020-06-29 2023-06-27 北京百度网讯科技有限公司 陈列场景检测方法、装置、设备以及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20190066346A1 (en) * 2017-08-30 2019-02-28 Korea Advanced Institute Of Science And Technology Apparatus and method for reconstructing image using extended neural network
CN111709900A (zh) * 2019-10-21 2020-09-25 上海大学 一种基于全局特征指导的高动态范围图像重建方法
CN111683269A (zh) * 2020-06-12 2020-09-18 腾讯科技(深圳)有限公司 视频处理方法、装置、计算机设备和存储介质
CN112257759A (zh) * 2020-09-27 2021-01-22 华为技术有限公司 一种图像处理的方法以及装置
CN113487524A (zh) * 2021-04-07 2021-10-08 北京百度网讯科技有限公司 图像格式转换方法、装置、设备、存储介质及程序产品

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIAN JUNJIE; WANG YONGFANG; WANG CHUANG: "Dual-Streams Global Guided Learning for High Dynamic Range Image Reconstruction", 2019 IEEE VISUAL COMMUNICATIONS AND IMAGE PROCESSING (VCIP), IEEE, 1 December 2019 (2019-12-01), pages 1 - 4, XP033693853, DOI: 10.1109/VCIP47243.2019.8965798 *

Also Published As

Publication number Publication date
CN113487524B (zh) 2023-05-12
US20230011823A1 (en) 2023-01-12
CN113487524A (zh) 2021-10-08
JP2023524624A (ja) 2023-06-13

Similar Documents

Publication Publication Date Title
WO2022213716A1 (zh) 图像格式转换方法、装置、设备、存储介质及程序产品
US20210209459A1 (en) Processing method and system for convolutional neural network, and storage medium
CN111182254B (zh) 一种视频处理方法、装置、设备及存储介质
US20220207299A1 (en) Method and apparatus for building image enhancement model and for image enhancement
US20200167896A1 (en) Image processing method and device, display device and virtual reality display system
US11627281B2 (en) Method and apparatus for video frame interpolation, and device and storage medium
EP3876197A2 (en) Portrait extracting method and apparatus, electronic device and storage medium
US11983849B2 (en) Image filling method and apparatus, device, and storage medium
CN113453073B (zh) 一种图像渲染方法、装置、电子设备及存储介质
WO2023045317A1 (zh) 表情驱动方法、装置、电子设备及存储介质
CN113365146B (zh) 用于处理视频的方法、装置、设备、介质和产品
CN112714357B (zh) 视频播放方法、视频播放装置、电子设备和存储介质
EP4135333A1 (en) Image display method and apparatus, electronic device, and medium
EP4120181A2 (en) Method and apparatus of fusing image, and method of training image fusion model
US20230005171A1 (en) Visual positioning method, related apparatus and computer program product
US20220308816A1 (en) Method and apparatus for augmenting reality, device and storage medium
US11481927B2 (en) Method and apparatus for determining text color
CN113888560A (zh) 用于处理图像的方法、装置、设备以及存储介质
WO2023179385A1 (zh) 一种视频超分方法、装置、设备及存储介质
US20230232116A1 (en) Video conversion method, electronic device, and non-transitory computer readable storage medium
CN116823610A (zh) 一种基于深度学习的水下图像超分辨率生成方法和系统
CN113240780B (zh) 生成动画的方法和装置
CN112991209B (zh) 图像处理方法、装置、电子设备及存储介质
CN113038184B (zh) 数据处理方法、装置、设备及存储介质
CN114782249A (zh) 一种图像的超分辨率重建方法、装置、设备以及存储介质

Legal Events

Date Code Title Description
ENP Entry into the national phase

Ref document number: 2022555980

Country of ref document: JP

Kind code of ref document: A

121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22783787

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22783787

Country of ref document: EP

Kind code of ref document: A1