WO2023093481A1 - Fourier domain-based super-resolution image processing method and apparatus, device and medium - Google Patents


Info

Publication number
WO2023093481A1
Authority
WO
WIPO (PCT)
Prior art keywords
sample image
loss function
frequency information
image
resolution
Prior art date
Application number
PCT/CN2022/129310
Other languages
English (en)
Chinese (zh)
Inventor
董航
孔方圆
Original Assignee
北京字跳网络技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2023093481A1


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • G06N3/084Backpropagation, e.g. using gradient descent
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4046Scaling of whole images or parts thereof, e.g. expanding or contracting using neural networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00Image enhancement or restoration
    • G06T5/10Image enhancement or restoration using non-spatial domain filtering
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20048Transform domain processing
    • G06T2207/20056Discrete and fast Fourier transform, [DFT, FFT]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Definitions

  • The present disclosure relates to the technical field of image processing, and in particular to a Fourier domain-based super-resolution image processing method, apparatus, device, and medium.
  • Image super-resolution processing increases the resolution of an image, producing a high-resolution super-resolution image from a low-resolution one; it is often used for image quality enhancement in short-video frames and other scenarios.
  • In the related art, a super-resolution network processes the input low-resolution image and outputs a high-resolution super-resolution image. Owing to limited computing resources in the cloud and on the device side, the super-resolution network is usually a lightweight network with a small number of parameters, which restores a higher-resolution super-resolution image from an input image that contains only low-frequency information.
  • The present disclosure provides a Fourier domain-based super-resolution image processing method, apparatus, device, and medium, to solve the technical problem in the related art that the input sample image offers insufficient high-frequency prior information for lightweight super-resolution network learning, so that the generated super-resolution image is too smooth and lacks detail.
  • An embodiment of the present disclosure provides a Fourier domain-based super-resolution image processing method, the method comprising:
  • acquiring a positive sample image and a reference sample image, where the positive sample image is the ground-truth super-resolution image corresponding to the input sample image and the reference sample image is the image output after the input sample image is processed by the neural network to be trained; performing Fourier transform processing on the positive sample image and the reference sample image respectively, to obtain first high-frequency information corresponding to the positive sample image in the Fourier domain and second high-frequency information corresponding to the reference sample image; determining a first loss function according to the first high-frequency information and the second high-frequency information; and performing backpropagation training on the parameters of the neural network according to the first loss function to obtain a target super-resolution network, so as to perform super-resolution processing on a test image according to the target super-resolution network to obtain a target super-resolution image.
  • An embodiment of the present disclosure also provides a Fourier domain-based super-resolution image processing apparatus, comprising: a first acquisition module configured to acquire a positive sample image and a reference sample image, where the positive sample image is the ground-truth super-resolution image corresponding to the input sample image and the reference sample image is the lower-quality image output after the input sample image is processed by the neural network to be trained; a second acquisition module configured to perform Fourier transform processing on the positive sample image and the reference sample image respectively, obtaining first high-frequency information corresponding to the positive sample image in the Fourier domain and second high-frequency information corresponding to the reference sample image; a determination module configured to determine a first loss function according to the first high-frequency information and the second high-frequency information; and a third acquisition module configured to perform backpropagation training on the parameters of the neural network according to the first loss function to obtain the target super-resolution network, so as to perform super-resolution processing on the test image according to the target super-resolution network to obtain the target super-resolution image.
  • An embodiment of the present disclosure also provides an electronic device, which includes: a processor; and a memory for storing instructions executable by the processor; the processor is configured to read the executable instructions from the memory and execute them to implement the Fourier domain-based super-resolution image processing method provided by the embodiments of the present disclosure.
  • An embodiment of the present disclosure also provides a computer-readable storage medium storing a computer program, where the computer program is used to execute the Fourier domain-based super-resolution image processing method provided by the embodiments of the present disclosure.
  • FIG. 1 is a schematic flow diagram of a Fourier domain-based super-resolution image processing method provided by an embodiment of the present disclosure
  • FIG. 2 is a schematic diagram of a Fourier domain-based super-resolution image processing scene provided by an embodiment of the present disclosure
  • FIG. 3 is a schematic flowchart of another Fourier domain-based super-resolution image processing method provided by an embodiment of the present disclosure
  • FIG. 4 is a schematic diagram of another Fourier domain-based super-resolution image processing scenario provided by an embodiment of the present disclosure
  • FIG. 5 is a schematic flowchart of another Fourier domain-based super-resolution image processing method provided by an embodiment of the present disclosure
  • FIG. 6 is a schematic diagram of an acquisition scene of a negative sample image provided by an embodiment of the present disclosure.
  • FIG. 7 is a schematic diagram of an acquisition scene of a negative sample image provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic flowchart of another Fourier domain-based super-resolution image processing method provided by an embodiment of the present disclosure.
  • FIG. 9 is a schematic diagram of another Fourier domain-based super-resolution image processing scenario provided by an embodiment of the present disclosure.
  • FIG. 10 is a schematic diagram of another Fourier domain-based super-resolution image processing scenario provided by an embodiment of the present disclosure.
  • FIG. 11 is a schematic structural diagram of a Fourier domain-based super-resolution image processing device provided by an embodiment of the present disclosure
  • FIG. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • The term “comprise” and its variations are open-ended, i.e., “including but not limited to”.
  • The term “based on” means “based at least in part on”.
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one further embodiment”; the term “some embodiments” means “at least some embodiments.” Relevant definitions of other terms will be given in the description below.
  • In the related art, the prior-information learning ability of a lightweight network is insufficient: the image information learned from an input image containing only low-frequency information is concentrated in low-frequency prior information, while image details are concentrated in high-frequency prior information, so the generated super-resolution image is too smooth and lacks detail.
  • An embodiment of the present disclosure provides a Fourier domain-based super-resolution image processing method, in which a training framework better suited to lightweight networks is implemented to improve the detail generation ability of small networks.
  • The detail generation ability of the network is improved by introducing a generative adversarial network (GAN) or a contrastive learning loss function.
  • In the related art, the network learns a large amount of color-related prior information in addition to high-frequency prior information, which imposes a high requirement on the number of parameters of the super-resolution network under that training framework, making it unsuitable for training lightweight networks.
  • Fig. 1 is a schematic flowchart of a Fourier domain-based super-resolution image processing method provided by an embodiment of the present disclosure.
  • The method can be executed by a Fourier domain-based super-resolution image processing apparatus, which can be implemented in software and/or hardware and can generally be integrated into an electronic device.
  • the method includes:
  • Step 101: acquire a positive sample image and a reference sample image, where the positive sample image is the ground-truth super-resolution image corresponding to the input sample image, and the reference sample image is the lower-quality image output after the input sample image is processed by the neural network to be trained.
  • In this embodiment, the ground-truth super-resolution image corresponding to the input sample image is obtained as the positive sample image, and the lower-quality image output after the input sample image is processed by the neural network to be trained is used as the reference sample image. This ensures that the subsequent training process considers the distance to the positive sample image, so that the output super-resolution image becomes closer to it. The neural network can be an initial super-resolution network with lower accuracy, or any other network; when it is a low-precision initial super-resolution network, the corresponding reference sample image is the enlarged image produced by that network.
  • Step 102: perform Fourier transform processing on the positive sample image and the reference sample image respectively, obtaining the first high-frequency information corresponding to the positive sample image in the Fourier domain and the second high-frequency information corresponding to the reference sample image.
  • In this embodiment, Fourier transform processing is performed on the positive sample image and the reference sample image respectively, obtaining the first high-frequency information corresponding to the positive sample image in the Fourier domain and the second high-frequency information corresponding to the reference sample image, so that super-resolution network learning and training can be conducted on the basis of high-frequency information. This ensures that the trained network learns the high-frequency prior information of input sample images and that the output super-resolution images are rich in detail.
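  • The Fourier-domain separation step can be sketched as follows in Python with NumPy. This is an illustrative sketch only: the patent does not specify how high frequencies are isolated, so the centre-masking scheme and the mask radius here are assumptions.

```python
import numpy as np

def high_frequency_info(img, radius=8):
    """Extract the high-frequency component of a grayscale image in the
    Fourier domain by zeroing a low-frequency square around the centre of
    the shifted spectrum. The mask radius is an illustrative choice, not
    specified by the patent."""
    spec = np.fft.fftshift(np.fft.fft2(img))  # centred 2-D spectrum
    h, w = img.shape
    cy, cx = h // 2, w // 2
    mask = np.ones((h, w), dtype=bool)
    # suppress the low-frequency region; everything else is "high frequency"
    mask[cy - radius:cy + radius, cx - radius:cx + radius] = False
    return spec * mask  # complex-valued high-frequency information
```

Applying this to the positive sample image and the reference sample image yields the first and second high-frequency information respectively; `fftshift` places the zero-frequency (DC) term at the array centre so the mask can be defined symmetrically.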
  • Step 103: determine a first loss function according to the first high-frequency information and the second high-frequency information.
  • the first loss function is determined according to the first high-frequency information and the second high-frequency information.
  • the method of calculating the first loss function can be any loss calculation method.
  • For example, an L1 loss function representing the mean absolute error is determined from the first high-frequency information and the second high-frequency information; that is, the average of the distances between the first high-frequency information and the second high-frequency information is used as the L1 loss. As another example, an L2 loss function representing the mean square error (MSE) is determined from the same information; that is, the average of the squared differences between the first high-frequency information and the second high-frequency information is used as the L2 loss.
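  • The two loss choices just described can be sketched minimally as follows, assuming the high-frequency information is given as complex NumPy arrays (the magnitude of the complex difference serves as the distance):

```python
import numpy as np

def l1_freq_loss(f_ref, f_pos):
    # mean absolute distance between the two sets of Fourier-domain
    # high-frequency information (L1 / MAE)
    return np.mean(np.abs(f_ref - f_pos))

def l2_freq_loss(f_ref, f_pos):
    # mean squared distance between the two spectra (L2 / MSE)
    return np.mean(np.abs(f_ref - f_pos) ** 2)
```

Either function can serve as the first loss function; the L1 form is less sensitive to a few large spectral outliers than the L2 form.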
  • Step 104: perform backpropagation training on the parameters of the neural network according to the first loss function to obtain the target super-resolution network, so as to perform super-resolution processing on the test image according to the target super-resolution network to obtain the target super-resolution image.
  • In this embodiment, the parameters of the neural network are backpropagated according to the first loss function to obtain the target super-resolution network. This training process effectively helps the lightweight super-resolution network concentrate on learning the high-frequency prior information in the training set, so that the lightweight network can match a large-model super-resolution network in detail generation capability, ensuring the detail richness of the output target super-resolution image.
  • For example, the network parameters of the neural network are adjusted until the value of the first loss function for the super-resolution network corresponding to the adjusted parameters is less than a preset threshold, at which point the training of the neural network is completed.
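  • The threshold-based stopping rule described above can be sketched generically as follows; `grad_fn` and `loss_fn` are hypothetical placeholders standing in for the real network's backpropagation machinery, and the plain gradient-descent update is an assumption for illustration:

```python
def train_until_threshold(params, grad_fn, loss_fn,
                          lr=0.01, threshold=1e-3, max_steps=1000):
    """Keep adjusting the parameters by gradient descent until the first
    loss function falls below a preset threshold (or a step budget is
    exhausted). Sketch only; real training would use an optimizer over
    network weights."""
    for _ in range(max_steps):
        if loss_fn(params) < threshold:
            break  # training of the neural network is completed
        params = params - lr * grad_fn(params)
    return params
```

For a toy quadratic loss the loop converges well inside the step budget, which illustrates the stopping condition without any deep-learning framework.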
  • For example, denote the first high-frequency information by F+, the second high-frequency information by F, the first loss function by L1(F, F+), the input sample image by LR, the positive sample image by GT, and the reference sample image by SR.
  • The result (SR) of enlarging the input image LR with the lightweight super-resolution network (G) is first compared with the positive sample image (GT) by computing an L1 loss, to ensure the accuracy of the super-resolution result in the RGB domain.
  • Then, with SR as the reference sample image and GT as the positive sample image, a fast Fourier transform is performed on each, obtaining the high-frequency information F and F+ of the two sets of images in the Fourier domain.
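  • The combined computation (pixel-domain L1 between SR and GT plus the Fourier-domain first loss L1(F, F+)) can be sketched as follows for a single grayscale training pair. The centre-mask high-frequency separation and the unweighted sum of the two losses are assumptions; the patent does not fix either choice.

```python
import numpy as np

def training_losses(sr, gt, radius=4):
    """Illustrative total loss for one training pair: RGB-domain L1 plus
    Fourier-domain L1 over high-frequency information. `sr` is the network
    output, `gt` the ground-truth image; both are 2-D arrays."""
    def high_freq(img):
        spec = np.fft.fftshift(np.fft.fft2(img))
        h, w = img.shape
        cy, cx = h // 2, w // 2
        spec[cy - radius:cy + radius, cx - radius:cx + radius] = 0
        return spec

    rgb_l1 = np.mean(np.abs(sr - gt))        # pixel-domain accuracy term
    f, f_pos = high_freq(sr), high_freq(gt)  # F and F+ in the patent's notation
    fourier_l1 = np.mean(np.abs(f - f_pos))  # first loss function L1(F, F+)
    return rgb_l1 + fourier_l1               # total loss backpropagated to G
```

When SR equals GT both terms vanish, so the loss is zero at the ideal solution.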
  • In some embodiments, to make the target super-resolution image output by the trained target super-resolution network even closer to the positive sample image, adversarial training can also be performed, through the discriminant model of a GAN network, on the first feature corresponding to the first high-frequency information and the second feature corresponding to the second high-frequency information.
  • the method also includes:
  • Step 301: extract the first feature corresponding to the first high-frequency information and the second feature corresponding to the second high-frequency information through the discriminant model of a generative adversarial network (GAN).
  • The discriminant model of the GAN network may include a series of fully-connected layers. After the first high-frequency information and the second high-frequency information are input into the discriminant model for feature extraction, the first feature corresponding to the first high-frequency information and the second feature corresponding to the second high-frequency information are obtained respectively.
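  • A toy fully-connected feature extractor in the spirit of the discriminant model can be sketched as below. The layer sizes, the ReLU activation, and flattening the complex Fourier information to its magnitudes are all assumptions; the patent only states that the model may include a series of fully-connected layers.

```python
import numpy as np

def mlp_discriminator_features(f_highfreq, weights, biases):
    """Pass Fourier-domain high-frequency information through a stack of
    fully-connected layers (weight matrix @ input + bias, then ReLU) and
    return the resulting feature vector."""
    x = np.abs(f_highfreq).ravel()  # flatten complex spectrum to magnitudes
    for w, b in zip(weights, biases):
        x = np.maximum(w @ x + b, 0.0)  # fully-connected layer + ReLU
    return x
```

Feeding the first and second high-frequency information through the same stack yields the first and second features used in the subsequent discrimination step.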
  • Step 302: perform discrimination processing on the first feature and the second feature respectively, obtaining the first score corresponding to the positive sample image and the second score corresponding to the reference sample image.
  • In this embodiment, the first feature and the second feature are discriminated respectively by the discriminant model, obtaining the first score corresponding to the positive sample image and the second score corresponding to the reference sample image.
  • Step 303: determine a binary cross-entropy (BCE) loss function according to the first score and the second score.
  • In this embodiment, adversarial training is performed on the first score and the second score through the binary cross-entropy (BCE) loss function for binary classification, so that the high-frequency content of the super-resolution result is brought even closer to that of the positive sample image.
  • Step 304: perform backpropagation training on the parameters of the neural network according to the BCE loss function and the first loss function to obtain the target super-resolution network.
  • In this embodiment, the parameters of the neural network are backpropagated according to the BCE loss function and the first loss function; that is, the network parameters are adjusted according to the loss values of both functions until the loss value of the BCE loss function is less than its preset loss threshold and the loss value of the first loss function is also less than its corresponding threshold, thereby obtaining the trained target super-resolution network.
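  • The BCE objectives for this adversarial setup can be sketched as follows, assuming the discriminant model D outputs probabilities in (0, 1): the first score D(F+) should approach 1 (real) and the second score D(F) should approach 0 (fake) for the discriminator, while the super-resolution network is trained to push D(F) toward 1. This is the standard GAN formulation; the patent does not spell out the exact per-term form.

```python
import math

def discriminator_bce(d_pos, d_ref):
    # D should score the positive sample's high-frequency features as
    # real (label 1) and the reference sample's as fake (label 0)
    return -(math.log(d_pos) + math.log(1.0 - d_ref))

def generator_bce(d_ref):
    # the super-resolution network G is trained so that D scores the
    # reference sample's high-frequency features as real
    return -math.log(d_ref)
```

In practice a numerically stable variant (logits plus a combined sigmoid-BCE) would be used, but the plain form above shows the competing objectives.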
  • Thus, the reference sample image and the positive sample image are made close at the level of high-frequency information, and the adversarial training further strengthens their closeness at the feature level.
  • For example, denote the first high-frequency information by F+, the second high-frequency information by F, the first score by D(F+), the second score by D(F), the input sample image by LR, the positive sample image by GT, and the reference sample image by SR; refer to FIG. 4.
  • The result (SR) of enlarging the input image LR with the lightweight super-resolution network (G) first has an L1 loss computed against the positive sample image (GT), to ensure the accuracy of the super-resolution result in the RGB domain.
  • Then, the neural network is trained according to the BCE loss function and the first loss function to obtain the target super-resolution network.
  • the neural network is trained based on two loss functions to ensure that the super-resolution result (target super-resolution image) and the positive sample image are further consistent in high-frequency information.
  • To sum up, the Fourier domain-based super-resolution image processing method of the embodiments of the present disclosure obtains the positive sample image and the reference sample image corresponding to the input sample image, where the positive sample image is the ground-truth super-resolution image corresponding to the input sample image and the reference sample image is the lower-quality image output after the input sample image is processed by the neural network to be trained. The positive sample image and the reference sample image are respectively subjected to Fourier transform processing to obtain, in the Fourier domain, the first high-frequency information corresponding to the positive sample image and the second high-frequency information corresponding to the reference sample image; the first loss function is determined according to the first and second high-frequency information; and backpropagation training of the neural network parameters is then performed to obtain the target super-resolution network, so that super-resolution processing is performed on the test image according to the target super-resolution network to obtain the target super-resolution image.
  • the training of the super-resolution network based on the loss function in the Fourier domain is realized, so that the trained super-resolution network learns high-frequency prior information, and the detail richness of the super-resolution image generated by the lightweight network is guaranteed.
  • the method further includes:
  • Step 501: acquire a negative sample image, where the negative sample image is obtained by fusing the input sample image with the positive sample image and adding noise.
  • In this embodiment, the input sample image is upsampled to obtain a candidate sample image of the same size as the positive sample image, and a negative sample image is then generated from the candidate sample image and the positive sample image. Fusing in the positive sample image makes the negative sample image slightly closer to the positive sample image, which increases the training difficulty and mitigates overly fast convergence.
  • For example, a first weight corresponding to the candidate sample image can be determined (for example, 0.5), and a second weight corresponding to the positive sample image can be determined (for example, 0.5), where the sum of the first weight and the second weight is 1.
  • In some embodiments, the positive sample image is down-sampled based on a preset down-sampling resolution to obtain a down-sampled sample image whose size is the same as that of the input sample image.
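  • The negative-sample construction (upsample, weighted fusion with weights summing to 1, then add noise) can be sketched as follows. The nearest-neighbour upsampling, the 0.5/0.5 weights, and the Gaussian noise level are illustrative assumptions; the patent only requires that the two weights sum to 1.

```python
import numpy as np

def make_negative_sample(lr, gt, w1=0.5, w2=0.5, noise_std=0.05, seed=0):
    """Upsample the LR input to the positive sample's size, fuse it with
    the positive sample using weights w1 + w2 == 1, and add Gaussian
    noise to produce a negative sample image."""
    scale = gt.shape[0] // lr.shape[0]
    candidate = np.kron(lr, np.ones((scale, scale)))  # nearest-neighbour upsampling
    fused = w1 * candidate + w2 * gt                  # weighted fusion
    rng = np.random.default_rng(seed)
    return fused + rng.normal(0.0, noise_std, gt.shape)  # noise-added negative sample
```

With equal weights the negative sample sits halfway between the blurry upsampled input and the ground truth, which is what makes it a "slightly close to positive" hard negative.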
  • Step 502: perform Fourier transform processing on the negative sample image to obtain the third high-frequency information corresponding to the negative sample image in the Fourier domain.
  • In this embodiment, extraction of the high-frequency information of the negative sample image is also realized in the Fourier domain; that is, the negative sample image is subjected to Fourier transform processing to obtain the third high-frequency information corresponding to the negative sample image in the Fourier domain.
  • Step 503: determine the contrastive learning loss function according to the first high-frequency information, the second high-frequency information, and the third high-frequency information, where the contrastive learning loss function is used to make the high-frequency information of the reference sample image close to that of the positive sample image and far from that of the negative sample image.
  • In this embodiment, the contrastive learning loss function is determined according to the first, second, and third high-frequency information; it draws the reference sample image close to the positive sample image at the high-frequency information level while pushing it away from the negative sample image, thereby reducing the introduction of artifacts and noise.
  • The training method in this embodiment does not need to introduce a large number of fake sample images for generative adversarial learning; super-resolution network training is performed solely by computing loss values between positive and negative samples in the high-frequency information dimension.
  • In the related art, the GAN network easily introduces artifacts and noise, because the adversarial loss function it uses only emphasizes that the network output be close to the training-set ground truth (the positive sample image) and does not consider the distance from negative sample images.
  • In this embodiment, the network output is not only made close to the ground truth (the positive sample image) but is also distanced from flawed negative samples, reducing the artifacts and noise introduced.
  • Depending on the application scenario, the way the contrastive learning loss function is determined from the first, second, and third high-frequency information differs; examples are as follows:
  • determining a contrastive learning loss function according to the first high-frequency information, the second high-frequency information, and the third high-frequency information includes:
  • Step 801: determine a second loss function according to the first high-frequency information and the second high-frequency information.
  • the second loss function is determined according to the first high-frequency information and the second high-frequency information, where the second loss function represents the distance between the reference sample image and the positive sample image.
  • the calculation method of the second loss function can be obtained based on any algorithm for calculating the loss value.
  • For example, it can be calculated using the L1 loss function, i.e. the mean absolute error (MAE), which computes the average of the distances between the first high-frequency information and the second high-frequency information; or using the L2 loss function, i.e. the mean square error (MSE), which computes the average of the squared differences between the first high-frequency information and the second high-frequency information.
  • Step 802: determine a third loss function according to the third high-frequency information and the second high-frequency information.
  • In this embodiment, the third loss function is determined according to the third high-frequency information corresponding to the negative sample image and the second high-frequency information corresponding to the reference sample image, where the third loss function represents the distance between the reference sample image and the negative sample image.
  • the calculation method of the third loss function can be obtained based on any algorithm for calculating the loss value.
  • For example, it can be calculated using the L1 loss function, i.e. the mean absolute error (MAE), which computes the average of the distances between the third high-frequency information and the second high-frequency information; or using the L2 loss function, i.e. the mean square error (MSE), which computes the average of the squared differences between the third high-frequency information and the second high-frequency information.
  • Step 803: determine a contrastive learning loss function according to the second loss function and the third loss function.
  • In this embodiment, the contrastive learning loss function is determined according to the second loss function and the third loss function; it is used to make the high-frequency information of the reference sample image close to that of the positive sample image and far from that of the negative sample image.
  • In one embodiment, the ratio of the second loss function to the third loss function is calculated to obtain the contrastive learning loss function, where the second loss function is the L1 loss function representing the mean absolute error between the first high-frequency information and the second high-frequency information, and the third loss function is the L1 loss function representing the mean absolute error between the third high-frequency information and the second high-frequency information.
  • For example, if the second loss function is L1(F, F+) and the third loss function is L1(F, F-), the corresponding contrastive learning loss function is given by formula (1), where CR is the contrastive learning loss function: CR(F-, F, F+) = L1(F, F+) / L1(F, F-). (1)
  • In another embodiment, the sum of the second loss function and the third loss function is calculated, and the ratio of the second loss function to this sum is used as the contrastive learning loss function. In either case, the ratio drives the training of the target super-resolution network to stay close to the positive sample image at the high-frequency information level while staying away from the negative sample image, thereby reducing the introduction of artifacts and noise; the output super-resolution image is rich in detail and contains little noise. Therefore, performing super-resolution processing on the test image with the target super-resolution network yields a target super-resolution image of relatively high purity in addition to improved detail richness.
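  • The ratio-style contrastive learning loss described above (the reference-to-positive L1 distance divided by the reference-to-negative L1 distance, both over Fourier-domain high-frequency information) can be sketched as follows; the small `eps` guard against division by zero is an assumed numerical detail, not part of the patent text.

```python
import numpy as np

def contrastive_learning_loss(f, f_pos, f_neg, eps=1e-8):
    """CR(F-, F, F+): minimising this ratio pulls the reference sample's
    high-frequency information F toward the positive sample's F+ while
    pushing it away from the negative sample's F-."""
    l1_pos = np.mean(np.abs(f - f_pos))  # second loss function, L1(F, F+)
    l1_neg = np.mean(np.abs(f - f_neg))  # third loss function, L1(F, F-)
    return l1_pos / (l1_neg + eps)
```

The loss is zero when F coincides with F+ and grows large when F drifts toward F-, which is exactly the close-to-positive, far-from-negative behaviour the training seeks.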
  • Step 504: perform backpropagation according to the first loss function and the contrastive learning loss function to train the parameters of the neural network and obtain the target super-resolution network, so as to perform super-resolution processing on the test image according to the target super-resolution network to obtain the target super-resolution image.
  • a fourth loss function can also be determined from the reference sample image and the positive sample image, for example an L1 loss function representing the mean absolute error, or an L2 loss function representing the mean of the squared differences; the neural network is then trained according to the fourth loss function, the first loss function, and the contrastive learning loss function to obtain the target super-resolution network.
  • the reference sample image is kept close to the positive sample image at the high-frequency information level while being kept away from the negative sample image, which reduces the introduction of artifacts and noise; in addition, training with the fourth loss function, based on the reference sample image and the positive sample image, strengthens their closeness at the feature level.
  • the sample image is a landscape image
  • the third high-frequency information is F -
  • the corresponding contrastive learning loss function is CR(F - , F, F + )
  • the input sample image is LR
  • the positive sample image is GT
  • the negative sample image is Neg
  • the reference sample image is SR
  • the fourth loss function L1(GT, SR) is determined according to the reference sample image and the positive sample image
  • the neural network is jointly trained based on the fourth loss function, the first loss function and the contrastive learning function.
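Under the joint training just described, the three terms might be combined as below. This is a hedged NumPy sketch; the weights `w_img`, `w_fft`, and `w_cr` are illustrative assumptions, since the disclosure does not specify how the losses are balanced:

```python
import numpy as np

def l1(a, b):
    """Mean absolute error."""
    return np.mean(np.abs(a - b))

def total_loss(sr, gt, f_sr, f_gt, f_neg, w_img=1.0, w_fft=0.1, w_cr=0.1):
    """Joint objective: the fourth loss L1(GT, SR) on pixels, the first loss
    on Fourier high frequencies, and the contrastive learning term."""
    loss_img = l1(gt, sr)                     # fourth loss function
    loss_fft = l1(f_gt, f_sr)                 # first loss function
    d_pos = l1(f_sr, f_gt)                    # second loss function
    d_neg = l1(f_sr, f_neg)                   # third loss function
    loss_cr = d_pos / (d_pos + d_neg + 1e-8)  # contrastive learning loss
    return w_img * loss_img + w_fft * loss_fft + w_cr * loss_cr
```

With a perfect reconstruction (SR equal to GT in both pixel and Fourier terms) every term vanishes, and the contrastive term alone stays bounded in [0, 1].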
  • when training the neural network, it can also be trained only according to the first loss function and the contrastive learning loss function, that is, the network parameters of the neural network are adjusted according to the loss values of the first loss function and the contrastive learning loss function until both loss values fall below their corresponding loss thresholds, thereby obtaining the trained target super-resolution network.
  • the reference sample image is kept close to the positive sample image at the high-frequency information level while being kept away from the negative sample image, which reduces the introduction of artifacts and noise and improves the purity of the super-resolution images output by the trained target super-resolution network.
  • the first loss function is L1(F, F+) and the corresponding contrastive learning loss function is CR; with the positive sample image denoted GT, the negative sample image Neg, and the reference sample image SR, continue to refer to FIG. 10, where the neural network is jointly trained according to the contrastive learning loss function and the first loss function.
  • the Fourier-domain-based super-resolution image processing method provided by the embodiments of the present disclosure combines the distances between the reference sample image and the positive and negative sample images, training on loss values at the high-frequency information level to obtain the target super-resolution network; this improves the purity of the target super-resolution image output by the network while preserving the richness of its detail.
  • the present disclosure also provides a Fourier-domain-based super-resolution image processing apparatus; FIG. 11 is a schematic diagram of such an apparatus according to an embodiment of the present disclosure. The apparatus can be implemented by software and/or hardware and can be integrated into an electronic device.
  • the device includes: a first acquisition module 1110, a second acquisition module 1120, a determination module 1130 and a third acquisition module 1140, wherein,
  • the first acquisition module 1110 is configured to acquire a positive sample image and a reference sample image, wherein the positive sample image is the true value super-resolution image corresponding to the input sample image, and the reference sample image is the image output after the input sample image is processed by the neural network to be trained;
  • the second acquisition module 1120 is configured to perform Fourier transform processing on the positive sample image and the reference sample image respectively, and obtain first high-frequency information corresponding to the positive sample image in the Fourier domain, and second high-frequency information corresponding to the reference sample image;
  • a determining module 1130 configured to determine a first loss function according to the first high-frequency information and the second high-frequency information
  • the third acquisition module 1140 is configured to perform backpropagation according to the first loss function to train the parameters of the neural network and obtain the target super-resolution network, so as to perform super-resolution processing on the test image according to the target super-resolution network to obtain the target super-resolution image.
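One plausible reading of the "high-frequency information in the Fourier domain" exchanged by the modules above is a high-pass mask applied to the centered 2-D FFT. The cutoff fraction below is an assumed value for illustration only, not one specified by the disclosure:

```python
import numpy as np

def high_freq(img, cutoff=0.25):
    """Return the high-frequency component of a grayscale image:
    FFT -> shift the DC term to the center -> zero out a central
    low-frequency square -> magnitudes of what remains."""
    spec = np.fft.fftshift(np.fft.fft2(img))
    h, w = img.shape
    ch, cw = int(h * cutoff), int(w * cutoff)
    mask = np.ones((h, w), dtype=bool)
    mask[h // 2 - ch:h // 2 + ch, w // 2 - cw:w // 2 + cw] = False  # drop low freqs
    return np.abs(spec) * mask
```

A constant image has all of its energy in the DC component, so its high-frequency information under this sketch is all zeros, while any image with edges retains non-zero entries.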
  • the Fourier-domain-based super-resolution image processing apparatus provided by the embodiments of the present disclosure can execute the Fourier-domain-based super-resolution image processing method provided by any embodiment of the present disclosure, and has corresponding functional modules and beneficial effects for executing the method.
  • the present disclosure also proposes a computer program product, including computer programs/instructions, which implement the Fourier domain-based super-resolution image processing method in the above embodiments when executed by a processor.
  • Fig. 12 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • FIG. 12 shows a schematic structural diagram of an electronic device 1200 suitable for implementing an embodiment of the present disclosure.
  • the electronic device 1200 in the embodiments of the present disclosure may include, but is not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, PDAs (personal digital assistants), PADs (tablet computers), PMPs (portable multimedia players), and vehicle-mounted terminals (e.g., car navigation terminals), as well as stationary terminals such as digital TVs and desktop computers.
  • the electronic device shown in FIG. 12 is only an example, and should not limit the functions and application scope of the embodiments of the present disclosure.
  • an electronic device 1200 may include a processing device (such as a central processing unit or a graphics processing unit) 1201, which may perform various appropriate actions and processes according to a program stored in read-only memory (ROM) 1202 or a program loaded from a storage device 1208 into random-access memory (RAM) 1203. The RAM 1203 also stores various programs and data necessary for the operation of the electronic device 1200.
  • the processing device 1201, ROM 1202, and RAM 1203 are connected to each other through a bus 1204.
  • An input/output (I/O) interface 1205 is also connected to the bus 1204 .
  • the following devices may be connected to the I/O interface 1205: input devices 1206 including, for example, a touch screen, a touchpad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output devices 1207 including, for example, a liquid crystal display (LCD), a speaker, and a vibrator; storage devices 1208 including, for example, a magnetic tape and a hard disk; and a communication device 1209.
  • the communication means 1209 may allow the electronic device 1200 to perform wireless or wired communication with other devices to exchange data. While FIG. 12 shows electronic device 1200 having various means, it is to be understood that implementing or having all of the means shown is not a requirement. More or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer readable medium, where the computer program includes program code for executing the method shown in the flowchart.
  • the computer program may be downloaded and installed from a network via communication means 1209, or from storage means 1208, or from ROM 1202.
  • when the computer program is executed by the processing device 1201, the above-mentioned functions defined in the Fourier-domain-based super-resolution image processing method of the embodiments of the present disclosure are performed.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium or any combination of the two.
  • a computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof. More specific examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer diskette, a hard disk, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • a computer-readable storage medium may be any tangible medium that contains or stores a program that can be used by or in conjunction with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the foregoing.
  • a computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transmit a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code embodied on a computer readable medium may be transmitted by any appropriate medium, including but not limited to: wires, optical cables, RF (radio frequency), etc., or any suitable combination of the above.
  • the client and the server may communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and may be interconnected with digital data communication in any form or medium (e.g., a communication network).
  • examples of communication networks include local area networks ("LANs"), wide area networks ("WANs"), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device, or may exist independently without being incorporated into the electronic device.
  • the above-mentioned computer-readable medium carries one or more programs, and when the one or more programs are executed by the electronic device, the electronic device: acquires a positive sample image and a reference sample image corresponding to the input sample image, wherein the positive sample image is the true value super-resolution image corresponding to the input sample image and the reference sample image is the image output after the input sample image is processed by the neural network to be trained; performs Fourier transform processing on the positive sample image and the reference sample image respectively, acquiring the first high-frequency information corresponding to the positive sample image in the Fourier domain and the second high-frequency information corresponding to the reference sample image; determines the first loss function according to the first high-frequency information and the second high-frequency information; and further performs backpropagation according to the first loss function to train the parameters of the neural network and obtain the target super-resolution network, so as to perform super-resolution processing on the test image according to the target super-resolution network to obtain the target super-resolution image.
  • the training of the super-resolution network based on the loss function in the Fourier domain is realized, so that the trained super-resolution network learns high-frequency prior information, and the detail richness of the super-resolution image generated by the lightweight network is guaranteed.
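The training realized here can be illustrated end to end on a toy problem. In the sketch below the "network" is a single gain parameter and the gradient is taken numerically; this is only a stand-in for the autograd-based backpropagation the disclosure describes, and it shows that the Fourier-domain L1 loss is a usable training objective:

```python
import numpy as np

def fourier_l1(a, b):
    """First loss function: L1 distance between Fourier magnitude spectra."""
    return np.mean(np.abs(np.abs(np.fft.fft2(a)) - np.abs(np.fft.fft2(b))))

# Toy 'network': output = gain * input. A real super-resolution network has
# millions of parameters and is trained with autograd; the numerical gradient
# below only stands in for backpropagation.
rng = np.random.default_rng(0)
lr_img = rng.random((8, 8))
gt_img = 2.0 * lr_img            # ground truth for the toy task
gain, step, eps = 0.5, 0.02, 1e-4

for _ in range(200):
    # central-difference gradient of the Fourier-domain loss w.r.t. the gain
    g = (fourier_l1((gain + eps) * lr_img, gt_img)
         - fourier_l1((gain - eps) * lr_img, gt_img)) / (2 * eps)
    gain -= step * g             # gradient-descent update

# the gain approaches 2 and the Fourier-domain loss approaches zero
```

Because the FFT is linear, the loss here reduces to |gain - 2| times the mean spectral magnitude of the input, so gradient descent drives the gain toward 2.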
  • computer program code for carrying out the operations of the present disclosure may be written in one or more programming languages, or combinations thereof, including object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • each block in a flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logical functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
  • the units involved in the embodiments described in the present disclosure may be implemented by software or by hardware. The name of a unit does not, in some cases, constitute a limitation on the unit itself.
  • exemplary types of hardware logic components that may be used include, without limitation, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), and Complex Programmable Logic Devices (CPLDs).
  • a machine-readable medium may be a tangible medium that may contain or store a program for use by or in conjunction with an instruction execution system, apparatus, or device.
  • a machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • a machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatus, or devices, or any suitable combination of the foregoing.
  • more specific examples of machine-readable storage media include electrical connections based on one or more wires, portable computer diskettes, hard drives, random-access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), optical fiber, compact disc read-only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.
  • the present disclosure provides a Fourier-domain-based super-resolution image processing method, including: acquiring a positive sample image and a reference sample image, wherein the positive sample image is the true value super-resolution image corresponding to the input sample image, and the reference sample image is the image output after the input sample image is processed by the neural network to be trained;
  • the determining of the first loss function according to the first high-frequency information and the second high-frequency information includes:
  • An L1 loss function representing a mean absolute error is determined according to the first high-frequency information and the second high-frequency information.
  • the Fourier domain-based super-resolution image processing method provided by the present disclosure further includes:
  • the performing of backpropagation according to the first loss function to train the parameters of the neural network to obtain the target super-resolution network includes:
  • the Fourier domain-based super-resolution image processing method provided by the present disclosure further includes:
  • the negative sample image is an image obtained by fusion and noise processing of the input sample image and the positive sample image;
  • backpropagation is performed according to the first loss function and the contrastive learning loss function to train the parameters of the neural network and obtain the target super-resolution network, so as to perform super-resolution processing on the test image according to the target super-resolution network to obtain the target super-resolution image.
  • the generation process of the negative sample image includes:
  • the determining of the contrastive learning loss function according to the first high-frequency information, the second high-frequency information, and the third high-frequency information includes:
  • the contrastive learning loss function is determined according to the second loss function and the third loss function.
  • the determining of the contrastive learning loss function according to the second loss function and the third loss function includes:
  • the second loss function is an L1 loss function representing the mean absolute error between the first high-frequency information and the second high-frequency information; the third loss function is an L1 loss function representing the mean absolute error between the third high-frequency information and the second high-frequency information.
  • the Fourier domain-based super-resolution image processing method provided by the present disclosure further includes:
  • the performing of backpropagation to train the parameters of the neural network to obtain the target super-resolution network includes:
  • the parameters of the neural network are trained by backpropagation to obtain a target super-resolution network.
  • the determination of the fourth loss function according to the reference sample image and the positive sample image includes:
  • An L1 loss function representing a mean absolute error is determined according to the reference sample image and the positive sample image.
  • the present disclosure provides a Fourier domain-based super-resolution image processing device, including: a first acquisition module, configured to acquire a positive sample image and a reference sample image, wherein the The positive sample image is a true value super-resolution image corresponding to the input sample image, and the reference sample image is an image output after the input sample image is processed by a neural network to be trained to reduce the image quality;
  • the second acquisition module is configured to perform Fourier transform processing on the positive sample image and the reference sample image respectively, acquire the first high-frequency information corresponding to the positive sample image in the Fourier domain, and the second high-frequency information corresponding to the reference sample image;
  • a determining module configured to determine a first loss function according to the first high-frequency information and the second high-frequency information
  • the third acquisition module is configured to perform backpropagation according to the first loss function to train the parameters of the neural network and obtain the target super-resolution network, so as to perform super-resolution processing on the test image according to the target super-resolution network to obtain the target super-resolution image.
  • the determination module is specifically used for:
  • An L1 loss function representing a mean absolute error is determined according to the first high-frequency information and the second high-frequency information.
  • the Fourier domain-based super-resolution image processing device further includes:
  • the fourth acquisition module is configured to extract, through the discriminative model of a generative adversarial network (GAN), a first feature corresponding to the first high-frequency information and a second feature corresponding to the second high-frequency information;
  • a fifth acquisition module configured to perform discrimination processing on the first feature and the second feature, and acquire a first score corresponding to the positive sample image and a second score corresponding to the reference sample image;
  • the first loss function determination module is used to determine the binary cross entropy BCE loss function according to the first score and the second score;
  • the third obtaining module is specifically configured to: perform backpropagation according to the BCE loss function and the first loss function to train parameters of the neural network, and obtain a target super-resolution network.
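The BCE term used by these modules can be sketched as follows. The discriminator here is a stand-in (a fixed random projection with a sigmoid), since the disclosure does not specify the GAN discriminator's architecture:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def bce(score_real, score_fake):
    """Binary cross-entropy: push the positive sample's score toward 1
    ('real') and the reference sample's score toward 0 ('fake')."""
    eps = 1e-8
    return -(np.log(score_real + eps) + np.log(1.0 - score_fake + eps))

# Stand-in discriminator: a fixed random projection of the flattened
# high-frequency features followed by a sigmoid. In practice this would be
# a trained CNN operating on the first and second features.
rng = np.random.default_rng(0)
w = rng.normal(size=64)

def discriminate(feat):
    return sigmoid(w @ feat.ravel() / feat.size)
```

The loss is near zero when the discriminator scores the positive sample as real and the reference sample as fake, and grows as the two scores become confusable.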
  • the Fourier domain-based super-resolution image processing device further includes:
  • a sixth acquisition module configured to acquire a negative sample image, wherein the negative sample image is an image obtained by fusing and adding noise to the input sample image and the positive sample image;
  • the seventh acquisition module is configured to perform Fourier transform processing on the negative sample image, and obtain third high-frequency information corresponding to the negative sample image in the Fourier domain;
  • a second loss function determination module configured to determine a contrastive learning loss function according to the first high-frequency information, the second high-frequency information, and the third high-frequency information, wherein the contrastive learning loss function is used to make the high-frequency information of the reference sample image close to the high-frequency information of the positive sample image and far from the high-frequency information of the negative sample image;
  • the third acquisition module is specifically configured to: perform backpropagation according to the first loss function and the contrastive learning loss function to train the parameters of the neural network and obtain the target super-resolution network, so as to perform super-resolution processing on the test image according to the target super-resolution network to obtain the target super-resolution image.
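A possible reading of the "fusing and adding noise" that the sixth acquisition module performs is a weighted blend of the (already upsampled) input sample image with the positive sample image, plus Gaussian noise. The blend weight `alpha` and noise level `sigma` below are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

def make_negative(lr_up, gt, alpha=0.5, sigma=0.05, seed=0):
    """Blend the upsampled input sample image with the positive sample image
    and add Gaussian noise, producing a degraded 'negative' target whose
    high frequencies the network is pushed away from."""
    rng = np.random.default_rng(seed)
    fused = alpha * lr_up + (1.0 - alpha) * gt        # fusion step
    noisy = fused + rng.normal(0.0, sigma, gt.shape)  # noise step
    return np.clip(noisy, 0.0, 1.0)                   # keep valid pixel range
```

The resulting image shares the layout of the positive sample but carries blur and noise, which is what makes it a useful repulsion target in the contrastive term.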
  • the sixth acquisition module is specifically used for:
  • the second loss function determination module is specifically used for:
  • the contrastive learning loss function is determined according to the second loss function and the third loss function.
  • the second loss function determination module is specifically used for:
  • the second loss function is an L1 loss function representing the mean absolute error between the first high-frequency information and the second high-frequency information; the third loss function is an L1 loss function representing the mean absolute error between the third high-frequency information and the second high-frequency information.
  • the Fourier domain-based super-resolution image processing device further includes:
  • a third loss function determination module configured to determine a fourth loss function according to the reference sample image and the positive sample image
  • the third acquisition module is specifically configured to: perform backpropagation according to the fourth loss function, the first loss function, and the contrastive learning loss function to train the parameters of the neural network and obtain the target super-resolution network.
  • the third loss function determination module is specifically used for:
  • An L1 loss function representing a mean absolute error is determined according to the reference sample image and the positive sample image.
  • the present disclosure provides an electronic device, including:
  • the processor is configured to read the executable instructions from the memory, and execute the instructions to implement any one of the Fourier domain-based super-resolution image processing methods provided in the present disclosure.
  • the present disclosure provides a computer-readable storage medium, the storage medium stores a computer program, and the computer program is used to execute any one of the Fourier-domain-based super-resolution image processing methods provided in the present disclosure.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Data Mining & Analysis (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure relate to a Fourier-domain-based super-resolution image processing method and apparatus, a device, and a medium. The method comprises: acquiring a positive sample image and a reference sample image; performing a Fourier transform on the positive sample image and the reference sample image, respectively, and acquiring first high-frequency information corresponding to the positive sample image in the Fourier domain and second high-frequency information corresponding to the reference sample image; determining a first loss function according to the first high-frequency information and the second high-frequency information; and performing backpropagation according to the first loss function to train the parameters of a neural network and obtain a target super-resolution network, so as to perform super-resolution processing on a test image according to the target super-resolution network to obtain a target super-resolution image. Training of the super-resolution network on the basis of a Fourier-domain loss function is thus achieved, so that the trained super-resolution network learns high-frequency prior information, ensuring the richness of detail of super-resolution images generated by a lightweight network.
PCT/CN2022/129310 2021-11-25 2022-11-02 Procédé et appareil de traitement des images à super-résolution basé sur un domaine de fourier, dispositif et support WO2023093481A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202111413382.7 2021-11-25
CN202111413382.7A CN116188254A (zh) 2021-11-25 2021-11-25 基于傅里叶域的超分图像处理方法、装置、设备及介质

Publications (1)

Publication Number Publication Date
WO2023093481A1 true WO2023093481A1 (fr) 2023-06-01

Family

ID=86431139

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/129310 WO2023093481A1 (fr) 2021-11-25 2022-11-02 Procédé et appareil de traitement des images à super-résolution basé sur un domaine de fourier, dispositif et support

Country Status (2)

Country Link
CN (1) CN116188254A (fr)
WO (1) WO2023093481A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117994636A (zh) * 2024-04-03 2024-05-07 华中科技大学同济医学院附属协和医院 基于交互学习的穿刺目标识别方法、系统及存储介质

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945125A (zh) * 2017-11-17 2018-04-20 福州大学 一种融合频谱估计法和卷积神经网络的模糊图像处理方法
US20210089866A1 (en) * 2019-09-24 2021-03-25 Robert Bosch Gmbh Efficient black box adversarial attacks exploiting input data structure
CN112967185A (zh) * 2021-02-18 2021-06-15 复旦大学 基于频率域损失函数的图像超分辨率算法

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945125A (zh) * 2017-11-17 2018-04-20 福州大学 一种融合频谱估计法和卷积神经网络的模糊图像处理方法
US20210089866A1 (en) * 2019-09-24 2021-03-25 Robert Bosch Gmbh Efficient black box adversarial attacks exploiting input data structure
CN112967185A (zh) * 2021-02-18 2021-06-15 复旦大学 基于频率域损失函数的图像超分辨率算法

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FUOLI DARIO; VAN GOOL LUC; TIMOFTE RADU: "Fourier Space Losses for Efficient Perceptual Image Super-Resolution", 2021 IEEE/CVF INTERNATIONAL CONFERENCE ON COMPUTER VISION (ICCV), IEEE, 10 October 2021 (2021-10-10), pages 2340 - 2349, XP034093843, DOI: 10.1109/ICCV48922.2021.00236 *
SHA HAO, LIU YANGZE, ZHANG YONGBING: "Fourier Ptychography Based on Deep Learning", LASER & OPTOELECTRONICS PROGRESS, vol. 58, no. 18, 30 September 2021 (2021-09-30), pages 1811020 - 1811020-10, XP093068640 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117994636A (zh) * 2024-04-03 2024-05-07 华中科技大学同济医学院附属协和医院 基于交互学习的穿刺目标识别方法、系统及存储介质

Also Published As

Publication number Publication date
CN116188254A (zh) 2023-05-30

Similar Documents

Publication Publication Date Title
WO2021114832A1 Sample image data enhancement method and apparatus, electronic device, and storage medium
WO2022105638A1 Image degradation processing method and apparatus, storage medium, and electronic device
WO2022252881A1 Image processing method and apparatus, readable medium, and electronic device
WO2022227886A1 Method for generating a super-resolution repair network model, and image super-resolution repair method and apparatus
WO2022105779A1 Image processing method, model training method, apparatus, medium, and device
WO2022161357A1 Data augmentation-based training sample acquisition method and apparatus, and electronic device
WO2022012179A1 Method and apparatus for generating a feature extraction network, device, and computer-readable medium
WO2023217117A1 Image evaluation method and apparatus, device, storage medium, and program product
WO2022171036A1 Video target tracking method, video target tracking apparatus, storage medium, and electronic device
WO2022105622A1 Image segmentation method and apparatus, readable medium, and electronic device
WO2023030427A1 Training method for a generative model, polyp identification method and apparatus, medium, and device
WO2023093481A1 Fourier domain-based super-resolution image processing method and apparatus, device, and medium
CN112418249A (zh) Mask image generation method and apparatus, electronic device, and computer-readable medium
WO2023274005A1 Image processing method and apparatus, electronic device, and storage medium
CN111402133A (zh) Image processing method and apparatus, electronic device, and computer-readable medium
CN113688928B (zh) Image matching method and apparatus, electronic device, and computer-readable medium
CN118071428A (zh) Intelligent processing system and method for multimodal monitoring data
CN111402159B (zh) Image processing method and apparatus, electronic device, and computer-readable medium
CN111311609B (zh) Image segmentation method and apparatus, electronic device, and storage medium
WO2023116744A1 Image processing method and apparatus, device, and medium
WO2023138540A1 Edge extraction method and apparatus, electronic device, and storage medium
WO2023143118A1 Image processing method and apparatus, device, and medium
WO2023130925A1 Font recognition method and apparatus, readable medium, and electronic device
WO2023016290A1 Video classification method and apparatus, readable medium, and electronic device
WO2023103682A1 Image processing method and apparatus, device, and medium

Legal Events

Date Code Title Description
121 Ep: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22897564

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE