WO2024017230A1 - 图像处理方法、装置、电子设备及存储介质 - Google Patents

图像处理方法、装置、电子设备及存储介质 Download PDF

Info

Publication number
WO2024017230A1
Authority
WO
WIPO (PCT)
Prior art keywords
image processing
processing model
image
training
model
Prior art date
Application number
PCT/CN2023/107857
Other languages
English (en)
French (fr)
Inventor
任玉羲
吴捷
张朋
肖学锋
Original Assignee
北京字跳网络技术有限公司
Priority date
Filing date
Publication date
Application filed by 北京字跳网络技术有限公司
Publication of WO2024017230A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00: Geometric image transformations in the plane of the image
    • G06T3/40: Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T7/00: Image analysis
    • G06T7/10: Segmentation; Edge detection
    • G06T7/11: Region-based segmentation
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00: Arrangements for image or video recognition or understanding
    • G06V10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764: Arrangements for image or video recognition or understanding using classification, e.g. of video objects
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02T: CLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00: Road transport of goods or passengers
    • Y02T10/10: Internal combustion engine [ICE] based vehicles
    • Y02T10/40: Engine management systems

Definitions

  • Embodiments of the present disclosure relate to the field of image processing technology, such as an image processing method, device, electronic device, and storage medium.
  • Some image generators can generate images in one image domain from images in another image domain; for example, high-resolution images can be generated from low-resolution images. This unique image generation capability has a wide range of application scenarios.
  • Embodiments of the present disclosure provide an image processing method, device, electronic device, and storage medium.
  • an embodiment of the present disclosure provides an image processing method, including:
  • the first image processing model and the second image processing model are generated through online alternate training, and the supervision information during the training process of the first image processing model includes at least part of the images generated by the second image processing model during the training process.
  • the model scale of the first image processing model is smaller than the model scale of the second image processing model;
  • embodiments of the present disclosure also provide an image processing device, including:
  • the image acquisition module is configured to obtain the original image to be processed
  • An input module configured to input the original image into the first image processing model
  • a generation module configured to process the original image by the first image processing model to generate a target image; wherein the first image processing model and the second image processing model are alternately trained and generated online, and the first image processing model
  • the supervision information during the training process includes at least part of the images generated by the second image processing model during the training process, and the model size of the first image processing model is smaller than the model size of the second image processing model;
  • An output module is configured to output the target image.
  • embodiments of the present disclosure also provide an electronic device, where the electronic device includes:
  • one or more processors;
  • a storage device configured to store one or more programs
  • When the one or more programs are executed by the one or more processors, the one or more processors are caused to implement the image processing method described in any one of the embodiments of the present disclosure.
  • embodiments of the disclosure further provide a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the image processing method described in any embodiment of the disclosure.
  • Figure 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure
  • Figure 2 is a schematic flowchart of the generation of pseudo-label images in an image processing method provided by an embodiment of the present disclosure
  • Figure 3 is a schematic diagram of the training framework of the second image processing model in an image processing method provided by an embodiment of the present disclosure
  • Figure 4 is a schematic diagram of the training framework when the first image processing model uses pseudo-label images as supervision information in an image processing method provided by an embodiment of the present disclosure
  • Figure 5 is a schematic diagram of the training framework when the first image processing model uses the first image as supervision information in an image processing method provided by an embodiment of the present disclosure
  • Figure 6 is a general training framework diagram of the first image processing model in an image processing method provided by an embodiment of the present disclosure
  • Figure 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
  • the training of image generators requires a large amount of high-quality paired data to guide the network to learn the mapping relationships between different image domains.
  • However, the cost of producing paired images is extremely high: they need to be retouched one by one according to retouching instructions, resulting in high training-data production costs.
  • embodiments of the present disclosure provide an image processing method, device, electronic device, and storage medium.
  • the term “include” and its variations are open-ended, i.e., “including but not limited to.”
  • the term “based on” means “based at least in part on.”
  • the term “one embodiment” means “at least one embodiment”; the term “another embodiment” means “at least one additional embodiment”; and the term “some embodiments” means “at least some embodiments”. Relevant definitions of other terms will be given in the description below.
  • FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure.
  • Embodiments of the present disclosure can perform image processing based on a small-scale model after the small-scale model has been distilled through a large-scale model.
  • the method may be performed by an image processing device, which may be implemented in the form of software and/or hardware, and which may be configured in an electronic device, such as a computer device.
  • the image processing method provided by this embodiment may include:
  • the original image is processed by the first image processing model to generate the target image; wherein the first image processing model and the second image processing model are alternately trained and generated online, the supervision information during the training process of the first image processing model includes at least part of the images generated by the second image processing model during the training process, and the model size of the first image processing model is smaller than the model size of the second image processing model.
  • the image processing method may refer to a method of generating a target image of another image domain based on an original image of one image domain.
  • the process of generating a target image according to the original image may be performed by the first image processing model.
  • the first image processing model and the second image processing model can be generators in a generative adversarial network (Generative Adversarial Networks, GAN), or other models that can generate images of one image domain based on images of another image domain.
  • the model scale of the first image processing model being smaller than the model scale of the second image processing model may mean that the model width (also called the number of channels) of the first image processing model is smaller than the model width of the second image processing model, and/or that the model depth (also called the number of network layers) of the first image processing model is smaller than the model depth of the second image processing model.
  • the first image processing model is a simple model with a smaller scale
  • the second image processing model is a complex model with a larger scale.
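  • As an illustration of this scale difference, the following minimal sketch builds a student generator that is both narrower (fewer channels) and shallower (fewer layers) than the teacher generator; the channel and layer counts are hypothetical, as the disclosure does not fix concrete values:

```python
import torch.nn as nn

def make_generator(base_channels: int, num_blocks: int) -> nn.Sequential:
    """Toy image-to-image generator; width = channels, depth = blocks."""
    layers = [nn.Conv2d(3, base_channels, 3, padding=1), nn.ReLU(inplace=True)]
    for _ in range(num_blocks):
        layers += [nn.Conv2d(base_channels, base_channels, 3, padding=1),
                   nn.ReLU(inplace=True)]
    layers.append(nn.Conv2d(base_channels, 3, 3, padding=1))
    return nn.Sequential(*layers)

G_T = make_generator(base_channels=64, num_blocks=9)  # larger-scale teacher
G_S = make_generator(base_channels=16, num_blocks=4)  # smaller-scale student
```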
  • By training the first image processing model with at least part of the images generated by the second image processing model as supervision information (i.e., performing model distillation on the first image processing model through the second image processing model), the performance of the first image processing model can be brought closer to that of the second image processing model.
  • The first image processing model may be optimized using only the second image processing model; that is, all of the supervision information of the first image processing model may come from the second image processing model.
  • the first image processing model may utilize images generated by the second image processing model based on labeled samples and at least part of pseudo-labeled images generated based on unlabeled samples as supervisory information for optimization.
  • the second image processing model can be called a teacher generator, and the first image processing model is a student generator.
  • Through model distillation, model scale compression can be achieved, which is beneficial for deploying a small-scale, high-performance first image processing model in devices with limited resources.
  • If only the generated images of a second image processing model that has already completed training were used as supervision information, the first image processing model would have to be brought toward the trained second image processing model completely from scratch, which would take a long time and a large amount of computation to train the first image processing model to a level equivalent to the performance of the second image processing model.
  • the first image processing model and the second image processing model can be generated through online alternate training.
  • the online alternating training process may include, for example: the second image processing model is trained based on the true-label data pairs obtained in the current round; the first image processing model then takes the images generated by the second image processing model in the current round as supervision information, imitating the current round of the second image processing model's training process.
  • the true label data pair is a data pair consisting of a labeled sample and a true labeled image; the labeled sample has the same image domain as the original image, and the true labeled image has the same image domain as the target image.
  • the first image processing model is distilled once after each iterative training, so that the first image processing model can progressively follow the training of the second image processing model.
  • This progressive alternate training can complete model distillation from the second image processing model to the first image processing model with a small amount of calculation.
  • this progressive alternating training method may be called online distillation.
  • training the first image processing model through online distillation can reduce the model calculation amount by 30%.
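  • The online alternating training described above can be summarized in the following sketch (assumptions: G_T and G_S are generator modules, D is a discriminator whose output is a realness probability in (0, 1), the optimizers are supplied by the caller, and the distillation loss is simplified to an L1 term here; the perceptual form is given later):

```python
import torch
import torch.nn.functional as F

def online_distillation_epoch(G_T, G_S, D, opt_t, opt_s, opt_d,
                              labeled_loader, unlabeled_iter,
                              preset_n=1, tau_thre=0.5, lam_recon=10.0):
    for step, (x, y) in enumerate(labeled_loader):
        # Teacher round: adversarial + reconstruction training of G_T.
        p_t = G_T(x)                                    # first image
        loss_d = -(torch.log(D(y)) +
                   torch.log(1 - D(p_t.detach()))).mean()
        opt_d.zero_grad(); loss_d.backward(); opt_d.step()
        loss_t = -torch.log(D(p_t)).mean() + lam_recon * F.l1_loss(p_t, y)
        opt_t.zero_grad(); loss_t.backward(); opt_t.step()

        # Student follows this round (labeled distillation, simplified to L1).
        loss_s = F.l1_loss(G_S(x), p_t.detach())
        opt_s.zero_grad(); loss_s.backward(); opt_s.step()

        # Every preset_n pairs: pseudo-label generation and screening.
        if (step + 1) % preset_n == 0:
            u = next(unlabeled_iter)                    # unlabeled sample
            cand = G_T(u).detach()                      # candidate pseudo-label
            if D(cand).mean() > tau_thre:               # discriminator screening
                loss_u = F.l1_loss(G_S(u), cand)        # unlabeled distillation
                opt_s.zero_grad(); loss_u.backward(); opt_s.step()
```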
  • the training process of the second image processing model may refer to the entire process from the beginning of training to the completion of training of the second image processing model.
  • During the training process of the second image processing model, in addition to training based on true-label data, pseudo-label images can also be generated based on unlabeled samples, where the unlabeled samples have the same image domain as the original image, and the pseudo-label images have the same image domain as the target image. That is to say, in addition to following each round of iterative training of the second image processing model, the first image processing model can also perform independent training based on at least part of the pseudo-label images. Therefore, it is possible to expand the amount of training data of the first image processing model and improve the generalization of the first image processing model without paying the cost of producing additional paired data.
  • Although pseudo-label images can provide supervisory information for the first image processing model, they cannot provide supervisory information for the second image processing model. That is, the first image processing model can be trained based on the pseudo-label images, but the second image processing model cannot. After the first image processing model is trained independently based on pseudo-label images, its performance may deviate. Therefore, independent training can be performed based on only some of the better-quality pseudo-label images to reduce this performance deviation.
  • Because the pseudo-label images are generated during the training process of the second image processing model, after the second image processing model generates the pseudo-label images it is again trained based on true-label data, and the first image processing model continues to follow the training of the second image processing model. Therefore, the deviation introduced to the first image processing model by the pseudo-labels can be corrected in time, and the image generation quality of the first image processing model can be ensured, thereby achieving the effect of reducing costs and improving efficiency.
  • The first image processing model thus obtained ensures excellent image processing performance while remaining lightweight.
  • the first image processing model can be applied to lightweight terminal devices.
  • Lightweight terminal devices can be considered terminal devices with limited resources, such as mobile phones.
  • The first image processing model obtained by this training has a small model size, good model generalization, and good generated-image quality, which effectively reduces the difficulty of deploying the model on resource-constrained mobile terminal devices or other lightweight IoT devices.
  • a high-quality target image can be generated based on the original image.
  • the embodiments of this disclosure realize collaborative compression of model dimensions and training data dimensions during the GAN training process, as follows:
  • First, the first image processing model can be gradually guided to progressively learn the optimization process of the second image processing model, so that the first image processing model can output, with a smaller amount of computation, images of quality similar to those of the second image processing model, completing the compression in the model dimension.
  • Second, the traditional training method based on true-label data pairs is transformed into a method of training on true-label data together with collaborative pseudo-label images, completing the compression of the required amount of true-label data, that is, the compression in the training-data dimension.
  • Pseudo-label images can bring additional supervision information to the training of the first image processing model, expand the amount of training data of the first image processing model without paying the production cost of additional paired data, and improve the generalization of the first image processing model, which is more conducive to the model learning the structural features of the image domain of the images to be generated.
  • the first image processing model is a smaller-scale model
  • the second image processing model is a larger-scale model.
  • larger models generally perform better than smaller models.
  • The first image processing model and the second image processing model are alternately trained online, and the first image processing model is trained using the images generated during the training process of the second image processing model as supervision information, which enables the first image processing model to imitate each round of the second image processing model's iterative training process and follow the training step by step.
  • the model distillation from the second image processing model to the first image processing model can be completed with a small amount of calculation, so that the performance of the first image processing model is closer to that of the second image processing model.
  • During the training process of the second image processing model, in addition to training based on real labeled data, pseudo-label images can also be generated based on unlabeled samples. That is, in addition to following each round of the second image processing model's iterative training process, the first image processing model can also conduct independent training based on at least part of the pseudo-label images, which can expand the amount of training data of the first image processing model and improve its generalization without paying the production cost of additional paired data.
  • Although the pseudo-label images can provide training supervision information for the first image processing model, they cannot provide training supervision information for the second image processing model; for the second image processing model, the true-label data pairs provide the supervision information for training.
  • the performance of the first image processing model may deviate after training based on pseudo-labeled images.
  • Because the pseudo-label images are generated during the training process of the second image processing model, when the second image processing model generates the pseudo-label images and is then trained again based on the real label data, the first image processing model will continue to follow the training of the second image processing model. Therefore, the deviation introduced to the first image processing model by the pseudo-labels can be corrected in time, ensuring the image generation quality of the first image processing model and achieving the effect of reducing costs and improving efficiency.
  • the embodiments of the present disclosure can be combined with each example of the image processing method provided in the above embodiments.
  • the second image processing model may be trained based on true labeled data pairs during the training process, and generate pseudo-labeled images based on unlabeled samples.
  • the generation process of pseudo-label images is described.
  • FIG. 2 is a schematic flowchart of generating a pseudo-label image in an image processing method provided by an embodiment of the present disclosure. As shown in Figure 2, in the image processing method provided by this embodiment, the pseudo label image can be generated based on the following steps:
  • the second image processing model may be a generator in the GAN, and may perform adversarial training together with the discriminator in the GAN.
  • FIG. 3 is a schematic diagram of the training framework of the second image processing model in an image processing method provided by an embodiment of the present disclosure. Referring to Figure 3, in some implementations, the second image processing model and the discriminator perform adversarial training based on true label data pairs, which may include:
  • The labeled sample x_i in a true-label data pair is used as the input of the second image processing model G_T; through the second image processing model G_T, a first image p_t is generated based on the labeled sample x_i; from the true-label data pair, the true labeled image y_i corresponding to the labeled sample x_i is obtained; through the discriminator D, it is discriminated whether the first image is of the same type as the true labeled image y_i; the second image processing model is trained with the goal of the discriminator judging them to be the same type, and the discriminator is trained with the goal of the discriminator distinguishing them as different types.
  • Here the true-label data pairs may be denoted (x_i, y_i), i = 1, ..., N, where N is the number of true-label data pairs.
  • The true-label data pair is used to supervise the training of the second image processing model G_T.
  • During adversarial training, the generative adversarial loss L_GAN(G_T, D) can be used to train the second image processing model G_T and the discriminator D.
  • The second image processing model G_T is trained to map x_i to y_i, and the discriminator D is trained to distinguish the image p_t generated by G_T from the true labeled image y_i.
  • The generative adversarial loss can be expressed by the following formula (a reconstruction of the standard form, consistent with the symbols defined below):

    $$L_{GAN}(G_T, D) = \mathbb{E}_{(x,y)}\big[\log D(y)\big] + \mathbb{E}_{x}\big[\log\big(1 - D(G_T(x))\big)\big]$$

  • where x is each labeled sample, y is each true labeled image, G_T(x) is each first image generated by the second image processing model based on each labeled sample, $\mathbb{E}_{(x,y)}[\cdot]$ represents the expectation function under the data (x, y), and $\mathbb{E}_{x}[\cdot]$ represents the expectation function under the data x.
  • The process of adversarial training may also include: determining the reconstruction loss L_recon based on the first image G_T(x) and the true labeled image y; and training the second image processing model G_T according to the reconstruction loss L_recon.
  • The reconstruction loss L_recon can be expressed by the following formula (reconstructed in the standard L1 form; the shared symbols are as above):

    $$L_{recon} = \mathbb{E}_{(x,y)}\big[\,\| G_T(x) - y \|_1\,\big]$$
  • By introducing the reconstruction loss, the output of the second image processing model can be made close to the true labeled image.
  • The complete optimization loss function of the second image processing model G_T can be expressed by the following formula (λ_recon, a balancing weight, is assumed notation for the elided coefficient):

    $$G_T^{*} = \arg\min_{G_T}\max_{D}\; L_{GAN}(G_T, D) + \lambda_{recon}\, L_{recon}$$
  • Obtaining unlabeled samples as input to the second image processing model may include, for example: during the intervals of adversarial training based on true-label data pairs, obtaining unlabeled samples as input to the second image processing model.
  • Obtaining unlabeled samples may be, for example, randomly extracting unlabeled samples from the unlabeled sample set.
  • the number of unlabeled samples obtained each time may be at least one.
  • The second image processing model can continue to perform adversarial training after generating candidate pseudo-labels. That is to say, after the first image processing model is trained based on the pseudo-label images, it can continue to imitate the optimization process of the second image processing model, thereby not only improving the generalization of the first image processing model but also compensating for the deviation that the pseudo-label images may introduce, ensuring the performance of the first image processing model.
  • In this way, the pseudo-label images generated by the second image processing model can be made more consistent with the structural characteristics of the image domain of the images to be generated and can, to a certain extent, provide better supervisory information for the first image processing model.
  • In some implementations, obtaining unlabeled samples as input to the second image processing model may include: each time a preset number of true-label data pairs have been obtained and adversarial training has been performed on the second image processing model and the discriminator, obtaining unlabeled samples as input to the second image processing model.
  • That is, unlabeled samples are obtained in the intervals of adversarial training based on true-label data pairs; this may mean obtaining unlabeled samples every time a preset number (such as 1, 2, etc.) of true-label data pairs have been obtained for adversarial training. The preset number is, to a certain extent, inversely related to the degree of compression of the training data of the first image processing model.
  • For example, if one unlabeled sample is obtained to generate a candidate pseudo-label image after every one true-label data pair is used for adversarial training, and the candidate pseudo-label images are all used to train the first image processing model, the demand for true-label data pairs can be compressed by 50% for the first image processing model; if one unlabeled sample is obtained after every two true-label data pairs are used for adversarial training, then when the candidate pseudo-label images are all used to train the first image processing model, the demand for true-label data pairs can be compressed by 33%.
  • However, reducing the preset number may also introduce a slightly larger amount of computation into the process of the first image processing model imitating the training of the second image processing model. Therefore, the preset number can be set according to the actual situation to balance the training-data compression and the training computation of the first image processing model.
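  • The two examples above follow a simple ratio: if one unlabeled sample is drawn after every $n$ true-label data pairs (with $n$ the preset number), the pseudo-label images account for $\tfrac{1}{n+1}$ of the training data seen by the first image processing model, which is the fraction of true-label pairs saved:

    $$\text{compression fraction} = \frac{1}{n+1}, \qquad n = 1 \Rightarrow \frac{1}{2} = 50\%, \qquad n = 2 \Rightarrow \frac{1}{3} \approx 33\%$$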
  • During training, the second image processing model can generate candidate pseudo-labels based on unlabeled samples using its current model parameters. After the second image processing model generates pseudo-label images based on the obtained unlabeled samples, it can also continue to conduct adversarial training based on true-label data pairs: the second image processing model continues to be optimized under the supervision information provided by the true-label data pairs, so that the first image processing model continues to imitate the optimization process of the second image processing model.
  • S230: Screen the candidate pseudo-label images through the discriminator to obtain the final pseudo-label images.
  • M represents the number of unlabeled samples.
  • After adversarial training, the discriminator D can well judge the quality of the images generated by the current second image processing model G_T: for each candidate pseudo-label image, if the discriminator D judges the image to be close to a real image, its quality is higher; if the discriminator D can discern that it is not a real image, its quality is lower.
  • Therefore, the discriminator can be used to select pseudo-label images with higher image quality from a large number of candidate pseudo-label images and send them to the first image processing model for training.
  • The filtered high-quality pseudo-label images can be input into the first image processing model immediately for training, or input non-immediately; for example, after a certain number of pseudo-label images have been accumulated, they can be input into the first image processing model at once for training.
  • The input timing of the pseudo-label images is not strictly limited here; other methods of inputting the pseudo-label images into the first image processing model can also be applied and will not be enumerated exhaustively here.
  • In some implementations, screening the candidate pseudo-label images through the discriminator may include: evaluating the authenticity of the candidate pseudo-label images through the discriminator to obtain evaluation results; and filtering the candidate pseudo-label images based on preset evaluation criteria and the evaluation results.
  • For example, the discriminator D is used to evaluate the authenticity of the candidate pseudo-label images, and an evaluation score can be obtained for each image.
  • The evaluation criterion may be a preset threshold τ_thre. Filtering the candidate pseudo-label images according to τ_thre and the evaluation scores can include: candidate pseudo-label images whose scores exceed τ_thre are used as pseudo-label images input to the first image processing model, and candidate pseudo-label images whose scores are lower than τ_thre are discarded, thereby expanding the training data volume of the first image processing model while ensuring the quality of the training data.
  • That is, in order to ensure the quality of the pseudo-label images, the discriminator can be used to filter the candidate pseudo-label images.
  • The selected high-quality pseudo-label images can improve the generalization of the student generator. This screening method helps to mine the structural features of unlabeled samples of the same style and can complement the true-label data pairs in training the first image processing model, thus alleviating the expensive and time-consuming training-data generation and selection process and playing a role in reducing costs and increasing efficiency.
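  • A minimal sketch of this screening step follows (assumptions: D returns a per-image realness score, tau_thre is the preset threshold, and accepted images are accumulated and handed to the first image processing model in batches, matching the non-immediate input option mentioned above; G_T, D, and unlabeled_batches are placeholders for objects defined elsewhere):

```python
import torch

@torch.no_grad()
def screen_pseudo_labels(D, candidates, tau_thre=0.5):
    """Keep only the candidate pseudo-label images that the discriminator
    rates as sufficiently close to real images."""
    scores = D(candidates).view(-1)     # realness score per candidate image
    keep = scores > tau_thre            # preset evaluation criterion
    return candidates[keep]

# Accumulate accepted pseudo-label images, then train the student at once.
buffer = []
for u in unlabeled_batches:             # placeholder unlabeled data source
    cand = G_T(u).detach()              # candidate pseudo-label images
    buffer.append(screen_pseudo_labels(D, cand))
pseudo_labels = torch.cat(buffer, dim=0)
```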
  • the embodiment of the present disclosure describes the generation process of pseudo-label images.
  • By using the second image processing model, trained on the current true-label data, to generate pseudo-label data based on unlabeled data, it is possible to provide additional supervision information for the training of the first image processing model.
  • The image processing method provided by the embodiments of the present disclosure and the image processing methods provided by the above embodiments belong to the same disclosed concept. Technical details not described in detail in this embodiment can be found in the above embodiments, and the same technical features have the same beneficial effects in this embodiment as in the above embodiments.
  • the embodiments of the present disclosure can be combined with each example of the image processing method provided in the above embodiments.
  • the image processing method provided in this embodiment describes the training steps of the first image processing model.
  • In the training process of the first image processing model, not only can the first image generated by the second image processing model based on labeled samples be used as supervision information, but at least part of the pseudo-label images generated by the second image processing model based on unlabeled samples can also be used as supervision information for training. Training the first image processing model in this way can improve its generalization and the quality of its generated images.
  • FIG. 4 is a schematic diagram of the training framework when the first image processing model uses pseudo-label images as supervisory information in an image processing method provided by an embodiment of the present disclosure.
  • the first image processing model is trained based on the following steps:
  • In some implementations, the distillation loss may include the reconstruction loss between the pseudo-label image and the second image; and/or, determining the distillation loss based on the pseudo-label image and the second image may include: determining the perceptual loss L_per according to the feature images from the process in which the second image processing model G_T generates the pseudo-label image and the feature images from the process in which the first image processing model G_S generates the second image, and using the perceptual loss L_per as the distillation loss.
  • The perceptual loss L_per is used to measure the perceptual difference between the pseudo-label image and the second image. The perceptual loss L_per may include at least one of the following: the feature reconstruction loss L_fea and the style reconstruction loss L_style.
  • The feature reconstruction loss L_fea can encourage the pseudo-label image and the second image to have similar feature representations; these feature representations can be obtained from the features φ of a pre-trained network, such as a Visual Geometry Group (VGG) network. Writing ŷ for the pseudo-label image and p_s for the second image (symbols adopted here for readability), the feature reconstruction loss L_fea can be defined as follows:

    $$L_{fea} = \frac{1}{C_j H_j W_j}\,\big\| \phi_j(\hat{y}) - \phi_j(p_s) \big\|_1$$

  • where φ_j(x) represents the activation of x at the j-th layer of the VGG network (i.e., the feature image), ‖·‖_1 represents the one-dimensional norm, and C_j × H_j × W_j represents the dimensions of φ_j(x), with C_j the number of channels, H_j the height, and W_j the width.
  • The style reconstruction loss L_style is introduced to penalize differences in stylistic features between the pseudo-label image and the second image, such as differences in color, texture, and general patterns. It can be defined as follows (a standard Gram-matrix formulation is assumed for the elided formula, with $G^{\phi}_j(x)$ denoting the Gram matrix of the layer-j features):

    $$L_{style} = \big\| G^{\phi}_j(\hat{y}) - G^{\phi}_j(p_s) \big\|_1,\qquad G^{\phi}_j(x) = \frac{\psi_j(x)\,\psi_j(x)^{\top}}{C_j H_j W_j}$$

  • where ψ_j(x) is φ_j(x) reshaped into a C_j × (H_j W_j) matrix.
  • The unlabeled distillation loss may include the reconstruction loss and/or the perceptual loss between the pseudo-label image and the second image. When the unlabeled distillation loss includes both the reconstruction loss and the perceptual loss, it can be their sum or a weighted sum, etc.
  • In some implementations, the steps for training the first image processing model G_S may further include: determining the total variation loss L_tv according to the second image; accordingly, training the first image processing model G_S based on the distillation loss may further include: training the first image processing model according to the distillation loss and the total variation loss L_tv.
  • The spatial smoothness of the output image of the first image processing model G_S can be improved by introducing the total variation loss L_tv.
  • Three hyperparameters λ_fea, λ_style, and λ_tv can be used to balance the above losses. The overall unlabeled distillation loss can be defined as follows (writing it as L_unlabeled; a reconstruction of the elided formula):

    $$L_{unlabeled} = \lambda_{fea}\, L_{fea} + \lambda_{style}\, L_{style} + \lambda_{tv}\, L_{tv}$$

  • where λ_fea, λ_style, and λ_tv represent the weights of the feature reconstruction loss L_fea, the style reconstruction loss L_style, and the total variation loss L_tv, respectively.
  • In some implementations, the images generated by the second image processing model G_T during the training process may also include the first image generated by G_T based on labeled samples. Figure 5 is a schematic diagram of the training framework when the first image processing model G_S uses the first image as supervision information in an image processing method provided by an embodiment of the present disclosure.
  • Referring to Figure 5, the corresponding labeled sample x_i is used as the input of the first image processing model G_S; through the first image processing model G_S, a third image is generated based on the labeled sample x_i; according to the first image and the third image, the distillation loss (which can be called the labeled distillation loss) is determined; and the first image processing model G_S is trained according to this distillation loss.
  • The calculation of the labeled distillation loss can refer to the calculation of the unlabeled distillation loss; the labeled distillation loss may likewise include the reconstruction loss and/or the perceptual loss between the first image and the third image.
  • the perceptual loss may also include at least one of the following: feature reconstruction loss L fea and style reconstruction loss L style .
  • In some implementations, the first image processing model G_S can also be trained based on the labeled distillation loss together with the total variation loss L_tv determined from the third image.
  • The total distillation loss L_kd of the first image processing model can be defined as follows (writing L_labeled and L_unlabeled for the labeled and unlabeled distillation losses; a reconstruction of the elided formula):

    $$L_{kd} = L_{labeled} + \lambda_{unlabeled}\, L_{unlabeled}$$

  • where λ_unlabeled controls the proportion in which labeled samples and unlabeled samples contribute to the loss value.
  • FIG. 6 is a general training framework diagram of the first image processing model in an image processing method provided by an embodiment of the present disclosure. Referring to Figure 6, during the training process of the first image processing model, collaborative compression of model dimensions and training data dimensions is achieved at the same time.
  • As shown in Figure 6, the second image processing model and the discriminator can be adversarially trained based on the true-label data pairs, and each time the second image processing model is iteratively optimized, the first image processing model can be guided, through the labeled distillation loss, to imitate the optimization process of the second image processing model, realizing online distillation of the first image processing model.
  • the first image processing model may be called a student generator, and the second image processing model may be called a teacher generator.
  • the compression of model dimensions can be achieved through online distillation, which is beneficial to deploying small-scale, high-performance first image processing models in devices with limited resources.
  • the second image processing model can also obtain unlabeled samples to generate candidate pseudo-label images.
  • the discriminator can be used to filter candidate pseudo-label images to obtain high-quality pseudo-label images.
  • The filtered pseudo-label images introduce the unlabeled distillation loss for the first image processing model, which is used to train the first image processing model.
  • the embodiment of the present disclosure describes the training steps of the first image processing model.
  • In the training process of the first image processing model, not only can the first image generated by the second image processing model based on labeled samples be used as supervision information, but the pseudo-label images generated by the second image processing model based on unlabeled samples can also be used as supervision information for training. Training the first image processing model with the labeled distillation loss for the first image and the unlabeled distillation loss for the pseudo-label images can improve the generalization of the first image processing model and the quality of the generated images.
  • FIG. 7 is a schematic structural diagram of an image processing device provided by an embodiment of the present disclosure.
  • The image processing device provided in this embodiment can perform image processing based on a small-scale model after the small-scale model has been distilled through a large-scale model.
  • the image processing device provided by the embodiment of the present disclosure may include:
  • the image acquisition module 710 is configured to obtain the original image to be processed
  • the input module 720 is configured to input the original image into the first image processing model
  • the generation module 730 is configured to process the original image by the first image processing model to generate the target image; wherein the first image processing model and the second image processing model are alternately trained and generated online, and the supervision information during the training process of the first image processing model includes At least some of the images generated by the second image processing model during the training process, the model size of the first image processing model is smaller than the model size of the second image processing model;
  • the output module 740 is configured to output the target image.
  • the second image processing model is trained based on true label data pairs during the training process, and generates pseudo-label images based on unlabeled samples;
  • In some implementations, the image processing device may include a model training module, and the model training module may include a pseudo-label generation unit.
  • The pseudo-label generation unit may be configured to: obtain unlabeled samples as input to the second image processing model; generate candidate pseudo-label images through the second image processing model; and filter the candidate pseudo-label images through the discriminator to obtain the final pseudo-label images.
  • The pseudo-label generation unit may further be configured to: evaluate the authenticity of the candidate pseudo-label images through the discriminator to obtain evaluation results; and screen the candidate pseudo-label images based on preset evaluation criteria and the evaluation results.
  • the model training module may include a second image processing model training unit
  • the second image processing model training unit may be configured to perform adversarial training on the second image processing model and the discriminator based on true label data pairs;
  • In some implementations, the second image processing model training unit can be configured to: use the labeled sample in the true-label data pair as the input of the second image processing model; generate the first image based on the labeled sample through the second image processing model; obtain the true labeled image corresponding to the labeled sample from the true-label data pair; discriminate, through the discriminator, whether the first image and the true labeled image are of the same type; train the second image processing model with the goal of the discriminator judging them to be the same type; and train the discriminator with the goal of the discriminator distinguishing them as different types.
  • the second image processing model training unit can also be set to:
  • the second image processing model is trained according to the reconstruction loss.
  • the model training module may include a first image processing model training unit
  • the first image processing model training unit can be configured to train the first image processing model based on the following steps:
  • the first image processing model is trained according to the distillation loss.
  • the first image processing model training unit can be set to:
  • the perceptual loss includes at least one of: feature reconstruction loss and style reconstruction loss.
  • the first image processing model training unit can also be set to:
  • training the first image processing model according to the distillation loss includes: training the first image processing model according to the distillation loss and the total variation loss.
  • the images generated by the second image processing model during the training process also include:
  • the first image processing model training unit may be configured to train the first image processing model based on the following steps:
  • the first image processing model is trained according to the distillation loss.
  • the image processing device provided by the embodiments of the present disclosure can execute the image processing method provided by any embodiment of the present disclosure, and has functional modules and beneficial effects corresponding to the execution method.
  • Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (PDAs), tablet computers (PADs), portable multimedia players (PMPs), and vehicle-mounted terminals (such as vehicle-mounted navigation terminals), as well as fixed terminals such as digital televisions (TVs) and desktop computers.
  • the electronic device shown in FIG. 8 is only an example and should not impose any limitations on the functions and scope of use of the embodiments of the present disclosure.
  • The electronic device 800 may include a processing device (such as a central processing unit, a graphics processor, etc.) 801, which may perform various appropriate actions and processing according to a program stored in a read-only memory (ROM) 802 or a program loaded from a storage device 808 into a random access memory (RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800.
  • the processing device 801, ROM 802 and RAM 803 are connected to each other via a bus 804.
  • An input/output (I/O) interface 805 is also connected to bus 804.
  • The following devices can be connected to the I/O interface 805: input devices 806 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; output devices 807 including, for example, a liquid crystal display (LCD), a speaker, a vibrator, etc.; storage devices 808 including a magnetic tape, a hard disk, etc.; and a communication device 809.
  • the communication device 809 may allow the electronic device 800 to communicate wirelessly or wiredly with other devices to exchange data.
  • Although FIG. 8 illustrates an electronic device 800 having various means, it should be understood that it is not required to implement or provide all of the illustrated means; more or fewer means may alternatively be implemented or provided.
  • embodiments of the present disclosure include a computer program product including a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method illustrated in the flowchart.
  • the computer program may be downloaded and installed from the network via communication device 809, or from storage device 808, or from ROM 802.
  • the processing device 801 When the computer program is executed by the processing device 801, the above-mentioned functions defined in the image processing method of the embodiment of the present disclosure are performed.
  • The electronic device provided by the embodiments of the present disclosure and the image processing method provided by the above embodiments belong to the same disclosed concept. Technical details not described in detail in this embodiment can be found in the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
  • Embodiments of the present disclosure provide a computer storage medium on which a computer program is stored.
  • the program is executed by a processor, the image processing method provided by the above embodiments is implemented.
  • the computer-readable medium mentioned above in the present disclosure may be a computer-readable signal medium or a computer-readable storage medium, or any combination of the above two.
  • The computer-readable storage medium may be, for example, but not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination thereof.
  • Examples of computer-readable storage media may include, but are not limited to: an electrical connection having one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
  • In the present disclosure, a computer-readable storage medium may be any tangible medium that contains or stores a program for use by or in connection with an instruction execution system, apparatus, or device.
  • a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, carrying computer-readable program code therein. Such propagated data signals may take many forms, including but not limited to electromagnetic signals, optical signals, or any suitable combination of the above.
  • A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium that can send, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device.
  • Program code contained on a computer-readable medium can be transmitted using any appropriate medium, including but not limited to: wires, optical cables, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
  • The client and server can communicate using any currently known or future-developed network protocol, such as HTTP (HyperText Transfer Protocol), and can be interconnected through digital data communication in any form or medium (e.g., a communications network). Examples of communications networks include local area networks (LANs), wide area networks (WANs), internetworks (e.g., the Internet), and peer-to-peer networks (e.g., ad hoc peer-to-peer networks), as well as any currently known or future-developed networks.
  • the above-mentioned computer-readable medium may be included in the above-mentioned electronic device; it may also exist independently without being assembled into the electronic device.
  • The above-mentioned computer-readable medium carries one or more programs. When the above one or more programs are executed by the electronic device, the electronic device is caused to: obtain an original image to be processed; input the original image into a first image processing model; process the original image through the first image processing model to generate a target image, wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training process of the first image processing model includes at least part of the images generated by the second image processing model during the training process, and the model scale of the first image processing model is smaller than the model scale of the second image processing model; and output the target image.
  • the storage medium may be a non-transitory storage medium.
  • Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages.
  • the program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer or entirely on the remote computer or server.
  • The remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
  • Each block in the flowchart or block diagram may represent a module, program segment, or portion of code that contains one or more executable instructions for implementing the specified logic functions.
  • the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown one after another may actually execute substantially in parallel, or they may sometimes execute in the reverse order, depending on the functionality involved.
  • Each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a special-purpose hardware-based system that performs the specified functions or operations, or by a combination of special-purpose hardware and computer instructions.
  • The units involved in the embodiments of the present disclosure can be implemented in software or hardware. In some cases, the name of a unit or module does not constitute a limitation on the unit or module itself.
  • Exemplary types of hardware logic components that can be used include: field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chip (SOCs), complex programmable logic devices (CPLDs), etc.
  • In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in connection with an instruction execution system, apparatus, or device.
  • the machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium.
  • A machine-readable medium may include, but is not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, apparatuses, or devices, or any suitable combination of the foregoing.
  • Examples of machine-readable storage media include an electrical connection based on one or more wires, a portable computer disk, a hard drive, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM) or flash memory, optical fiber, portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
  • an image processing method which method includes:
  • the first image processing model and the second image processing model are generated through online alternate training, and the supervision information during the training process of the first image processing model includes at least part of the images generated by the second image processing model during the training process.
  • the model scale of the first image processing model is smaller than the model scale of the second image processing model;
  • an image processing method further comprising:
  • the second image processing model is trained based on true label data pairs during the training process, and generates pseudo-label images based on unlabeled samples, and the pseudo-label images are generated based on the following steps:
  • the candidate pseudo-label images are screened by the discriminator to obtain the final pseudo-label image.
  • an image processing method further comprising:
  • obtaining unlabeled samples as input to the second image processing model includes:
  • an image processing method further comprising:
  • filtering the candidate pseudo-label images by the discriminator includes:
  • the candidate pseudo-label images are screened according to the preset evaluation criteria and the evaluation results.
  • an image processing method further comprising:
  • the second image processing model and the discriminator perform adversarial training based on the true label data pair, including:
  • the discriminator is trained with the goal of distinguishing different types by the discriminator.
  • an image processing method further comprising:
  • the second image processing model is trained based on the reconstruction loss.
  • an image processing method further comprising:
  • the first image processing model uses the pseudo-label image as supervision information
  • the first image processing model is trained based on the following steps:
  • the first image processing model is trained based on the distillation loss.
  • an image processing method further comprising:
  • determining a distillation loss based on the pseudo-label image and the second image includes:
  • an image processing method further comprising:
  • the perceptual loss includes at least one of: feature reconstruction loss and style reconstruction loss.
  • an image processing method further comprising:
  • the first image processing model is trained based on the following steps, further including:
  • training the first image processing model according to the distillation loss includes: training the first image processing model according to the distillation loss and the total variation loss.
  • an image processing method further comprising:
  • the images generated by the second image processing model during the training process also include:
  • the first image processing model uses the first image as supervision information
  • the first image processing model is trained based on the following steps:
  • the first image processing model is trained based on the distillation loss.
  • the first image processing model is applied to lightweight terminal devices.
  • an image processing device which device includes:
  • the image acquisition module is configured to obtain the original image to be processed
  • An input module configured to input the original image into the first image processing model
  • a generation module configured to process the original image by the first image processing model to generate a target image; wherein the first image processing model and the second image processing model are alternately trained and generated online, and the first image processing model
  • the supervision information during the training process includes at least part of the images generated by the second image processing model during the training process, and the model size of the first image processing model is smaller than the model size of the second image processing model;
  • An output module is configured to output the target image.

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the present disclosure disclose an image processing method and apparatus, an electronic device, and a storage medium. The method includes: obtaining an original image to be processed; inputting the original image into a first image processing model; processing the original image by the first image processing model to generate a target image, wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training of the first image processing model includes at least part of the images generated by the second image processing model during its training, and the model scale of the first image processing model is smaller than that of the second image processing model; and outputting the target image.

Description

Image Processing Method and Apparatus, Electronic Device, and Storage Medium
This application claims priority to Chinese Patent Application No. 202210873377.2, filed with the Chinese Patent Office on July 22, 2022, the entire contents of which are incorporated herein by reference.
Technical Field
Embodiments of the present disclosure relate to the field of image processing technology, for example, to an image processing method and apparatus, an electronic device, and a storage medium.
Background
Some image generators can generate images in one image domain from images in another image domain; for example, high-resolution images can be generated from low-resolution images. This distinctive image generation capability has a wide range of application scenarios.
Summary
Embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium.
In a first aspect, an embodiment of the present disclosure provides an image processing method, including:
obtaining an original image to be processed;
inputting the original image into a first image processing model;
processing the original image by the first image processing model to generate a target image;
wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training of the first image processing model includes at least part of the images generated by the second image processing model during its training, and the model scale of the first image processing model is smaller than the model scale of the second image processing model;
outputting the target image.
In a second aspect, an embodiment of the present disclosure further provides an image processing apparatus, including:
an image acquisition module configured to obtain an original image to be processed;
an input module configured to input the original image into a first image processing model;
a generation module configured to process the original image by the first image processing model to generate a target image, wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training of the first image processing model includes at least part of the images generated by the second image processing model during its training, and the model scale of the first image processing model is smaller than the model scale of the second image processing model; and
an output module configured to output the target image.
In a third aspect, an embodiment of the present disclosure further provides an electronic device, including:
one or more processors; and
a storage apparatus configured to store one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any embodiment of the present disclosure.
In a fourth aspect, an embodiment of the present disclosure further provides a storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the image processing method according to any embodiment of the present disclosure.
Brief Description of the Drawings
Throughout the drawings, identical or similar reference numerals denote identical or similar elements. It should be understood that the drawings are schematic and that components and elements are not necessarily drawn to scale.
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure;
FIG. 2 is a schematic flowchart of generating pseudo-label images in an image processing method provided by an embodiment of the present disclosure;
FIG. 3 is a schematic diagram of a training framework of a second image processing model in an image processing method provided by an embodiment of the present disclosure;
FIG. 4 is a schematic diagram of a training framework in which a first image processing model uses pseudo-label images as supervision information, in an image processing method provided by an embodiment of the present disclosure;
FIG. 5 is a schematic diagram of a training framework in which a first image processing model uses first images as supervision information, in an image processing method provided by an embodiment of the present disclosure;
FIG. 6 is an overall training framework diagram of a first image processing model in an image processing method provided by an embodiment of the present disclosure;
FIG. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of an electronic device provided by an embodiment of the present disclosure.
Detailed Description
Training an image generator requires a large amount of high-quality paired data to guide the network in learning the mapping between different image domains. However, producing paired images is extremely costly: they must be obtained by retouching images one by one according to retouching instructions, which makes the production of training data expensive.
In view of the above, embodiments of the present disclosure provide an image processing method and apparatus, an electronic device, and a storage medium.
Embodiments of the present disclosure will be described below with reference to the drawings. Although certain embodiments of the present disclosure are shown in the drawings, it should be understood that the present disclosure may be implemented in various forms and should not be construed as limited to the embodiments set forth here; rather, these embodiments are provided for a more thorough and complete understanding of the present disclosure. It should be understood that the drawings and embodiments of the present disclosure are for illustrative purposes only and are not intended to limit the scope of protection of the present disclosure.
It should be understood that the steps described in the method embodiments of the present disclosure may be performed in a different order and/or in parallel. Furthermore, the method embodiments may include additional steps and/or omit the steps shown. The scope of the present disclosure is not limited in this respect.
The term "including" and variations thereof as used herein are open-ended, i.e., "including but not limited to". The term "based on" means "at least partially based on". The term "one embodiment" means "at least one embodiment"; the term "another embodiment" means "at least one further embodiment"; the term "some embodiments" means "at least some embodiments". Relevant definitions of other terms will be given in the description below.
It should be noted that concepts such as "first" and "second" mentioned in the present disclosure are only used to distinguish different apparatuses, modules, or units, and are not used to limit the order of the functions performed by these apparatuses, modules, or units, or their interdependence.
It should be noted that the modifiers "one" and "multiple" mentioned in the present disclosure are illustrative rather than restrictive; those skilled in the art should understand that, unless the context clearly indicates otherwise, they should be understood as "one or more".
It can be understood that the data involved in the embodiments of the present disclosure (including but not limited to the data itself and the acquisition or use of the data) shall comply with the requirements of the applicable laws, regulations, and relevant provisions.
FIG. 1 is a schematic flowchart of an image processing method provided by an embodiment of the present disclosure. In embodiments of the present disclosure, a small-scale model may be distilled by a large-scale model, and image processing may then be performed based on the small-scale model. The method may be performed by an image processing apparatus, which may be implemented in software and/or hardware and may be configured in an electronic device, for example, a computer device.
As shown in FIG. 1, the image processing method provided by this embodiment may include:
S110. Obtain an original image to be processed.
S120. Input the original image into a first image processing model.
S130. Process the original image by the first image processing model to generate a target image, wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training of the first image processing model includes at least part of the images generated by the second image processing model during its training, and the model scale of the first image processing model is smaller than the model scale of the second image processing model.
S140. Output the target image.
In embodiments of the present disclosure, the image processing method may refer to a method of generating a target image in one image domain from an original image in another image domain. The process of generating the target image from the original image may be performed by the first image processing model.
The first image processing model and the second image processing model may be generators in Generative Adversarial Networks (GAN), or other generators capable of generating images in one image domain from images in another image domain. The model scale of the first image processing model being smaller than that of the second image processing model may mean that the model width (also called the number of channels) of the first image processing model is smaller than that of the second image processing model, and/or that the model depth (also called the number of network layers) of the first image processing model is smaller than that of the second image processing model. The first image processing model can be regarded as a simple model of smaller scale, and the second image processing model as a complex model of larger scale.
When trained on the same true-label data pairs, a larger-scale model usually trains better than a smaller-scale model; that is, the training effect of the second image processing model will be better than that of the first image processing model. By training the first image processing model with at least part of the images generated by the second image processing model as supervision information (i.e., by performing model distillation from the second image processing model to the first image processing model), the performance of the first image processing model can be brought close to that of the second image processing model.
The first image processing model may be optimized using only the second image processing model; that is, all the supervision information of the first image processing model may come from the second image processing model. For example, the first image processing model may be optimized using, as supervision information, the images generated by the second image processing model from labeled samples and at least part of the pseudo-label images generated from unlabeled samples. Under this training scheme, the second image processing model may be called the teacher generator and the first image processing model the student generator. Through model distillation, the model scale can be compressed, which facilitates deploying the small-scale, well-performing first image processing model on devices with limited resources.
If the first image processing model were trained with the images generated by the second image processing model as supervision information only after the training of the second image processing model has finished, the first image processing model would have to converge toward the fully trained second image processing model entirely from scratch, and it would take a long time and a large amount of computation to train the first image processing model to a level of performance comparable to the second image processing model.
In view of this, in embodiments of the present disclosure, the first image processing model and the second image processing model may be generated through online alternating training. The online alternating training process may include, for example: the second image processing model is trained on the true-label data pairs acquired in the current round; the first image processing model is trained with the images generated by the second image processing model after the current round as supervision information, imitating the training of the second image processing model in the current round. A true-label data pair is a data pair consisting of a labeled sample and a true-label image, where the labeled sample is in the same image domain as the original image and the true-label image is in the same image domain as the target image.
It can be considered that, during the training of the second image processing model, the first image processing model is distilled once after each training iteration, so that the first image processing model can progressively follow the training of the second image processing model. This progressive alternating training can complete the model distillation from the second image processing model to the first image processing model with very little computation. In embodiments of the present disclosure, this progressive alternating training scheme may be called online distillation. In practical applications, compared with distilling the first image processing model after the training of the second image processing model has finished, training the first image processing model by online distillation can reduce the model computation by 30%.
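The following is a minimal PyTorch-style sketch of one round of this online alternating training. All identifiers (teacher, student, discriminator, distill_loss_fn, the optimizers) are illustrative assumptions for this sketch rather than names from this disclosure, the discriminator is assumed to output a probability in (0, 1) for an (input, image) pair, and the losses are simplified to their standard forms:

```python
import torch

def online_alternating_round(teacher, student, discriminator,
                             labeled_loader, opt_t, opt_d, opt_s,
                             distill_loss_fn):
    for x, y in labeled_loader:                  # one true-label data pair (x, y)
        # --- one adversarial iteration of the teacher ---
        p_t = teacher(x)                         # first image
        d_loss = -(torch.log(discriminator(x, y) + 1e-8).mean()
                   + torch.log(1 - discriminator(x, p_t.detach()) + 1e-8).mean())
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        p_t = teacher(x)
        g_loss = -torch.log(discriminator(x, p_t) + 1e-8).mean() \
                 + torch.nn.functional.l1_loss(p_t, y)   # GAN + reconstruction
        opt_t.zero_grad(); g_loss.backward(); opt_t.step()

        # --- one distillation iteration of the student, following the teacher ---
        with torch.no_grad():
            target = teacher(x)                  # supervision from this round
        p_s = student(x)                         # student output on the same input
        kd_loss = distill_loss_fn(p_s, target)
        opt_s.zero_grad(); kd_loss.backward(); opt_s.step()
```

Because the student is updated once per teacher iteration, it never has to catch up with a fully trained teacher from scratch, which is the source of the computation savings described above.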
In embodiments of the present disclosure, the training process of the second image processing model may refer to the entire process from the start to the completion of its training. During this process, in addition to being trained on true-label data pairs, the second image processing model may also generate pseudo-label images from unlabeled samples, where the unlabeled samples are in the same image domain as the original image and the pseudo-label images are in the same image domain as the target image. That is, in addition to following each training iteration of the second image processing model, the first image processing model may also be trained on its own based on at least part of the pseudo-label images. The amount of training data of the first image processing model can thus be expanded and its generalization improved without the extra cost of producing additional paired data.
Although pseudo-label images can provide supervision information for the first image processing model, they cannot provide supervision information for the second image processing model. That is, the first image processing model can be trained on pseudo-label images, but the second image processing model cannot. After the first image processing model is trained on its own on pseudo-label images, its performance may drift; therefore, it can be trained on its own only on the higher-quality subset of pseudo-label images to reduce this drift. Even if all pseudo-label images are used for training, since in embodiments of the present disclosure the pseudo-label images are generated during the training of the second image processing model, the first image processing model will continue to follow the training of the second image processing model once the second image processing model resumes training on true-label data pairs after generating pseudo-label images. The bias introduced into the first image processing model by pseudo labels can therefore be corrected in time, the image generation quality of the first image processing model can be guaranteed, and cost reduction and efficiency improvement can be achieved. The first image processing model obtained in this way is lightweight while maintaining good image processing performance.
In embodiments of the present disclosure, the first image processing model may be applied to lightweight terminal devices, that is, terminal devices with limited resources, such as mobile phones. Since the trained first image processing model is small in scale and has good generalization and generated-image quality, the difficulty of deploying the model on resource-constrained mobile terminal devices or other lightweight Internet-of-Things devices is effectively reduced. With the trained first image processing model, a high-quality target image can be generated from the original image.
Embodiments of the present disclosure achieve collaborative compression of both the model dimension and the training-data dimension in GAN training, as follows:
By alternately training the second image processing model and the first image processing model through online distillation, the first image processing model can be gradually guided to learn, step by step, the optimization process of the second image processing model, so that the first image processing model can output images of quality close to that of the second image processing model with less computation, completing the compression of the model dimension.
By introducing a large number of unlabeled samples to generate pseudo-label images during the training of the second image processing model, and training the first image processing model on at least part of these pseudo-label images, the traditional scheme of training solely on true-label data pairs can be converted into a scheme of training on true-label data pairs combined with pseudo-label images, compressing the required amount of true-label data pairs, i.e., completing the compression of the training-data dimension. Pseudo-label images bring additional supervision information to the training of the first image processing model, expand its amount of training data without the extra cost of producing paired data, improve its generalization, and help the model learn the structural characteristics of the image domain of the images to be generated.
In embodiments of the present disclosure, the first image processing model is a smaller-scale model and the second image processing model is a larger-scale model. When trained on the same true-label data pairs, a larger-scale model usually trains better than a smaller-scale model. The first image processing model and the second image processing model are alternately trained online, and the first image processing model is trained with the images generated during the training of the second image processing model as supervision information, so that the first image processing model imitates each training iteration of the second image processing model and follows the training step by step. Through this step-by-step following, the model distillation from the second image processing model to the first image processing model can be completed with very little computation, bringing the performance of the first image processing model close to that of the second image processing model.
During its training, in addition to being trained on true-label data pairs, the second image processing model may also generate pseudo-label images from unlabeled samples. That is, in addition to following each training iteration of the second image processing model, the first image processing model may also be trained on its own based on at least part of the pseudo-label images, which expands the amount of training data of the first image processing model and improves its generalization without the extra cost of producing additional paired data.
Since pseudo-label images can provide supervision information for training the first image processing model but not the second image processing model, the performance of the first image processing model may drift after being trained on pseudo-label images. However, because the pseudo-label images are generated during the training of the second image processing model, the first image processing model will continue to follow the training of the second image processing model once the latter resumes training on true-label data pairs after generating pseudo-label images. The bias introduced by pseudo labels can therefore be corrected in time, the image generation quality of the first image processing model can be guaranteed, and cost reduction and efficiency improvement can be achieved.
The examples in the image processing methods provided by this embodiment and the above embodiments of the present disclosure may be combined. The second image processing model may be trained on true-label data pairs during its training and generate pseudo-label images from unlabeled samples. The image processing method provided by this embodiment describes the process of generating pseudo-label images. By using the second image processing model trained on the current true-label data to generate pseudo-label data from unlabeled data, additional supervision information can be provided for the training of the first image processing model.
FIG. 2 is a schematic flowchart of generating pseudo-label images in an image processing method provided by an embodiment of the present disclosure. As shown in FIG. 2, in the image processing method provided by this embodiment, the pseudo-label images may be generated based on the following steps:
S210. In the process of adversarial training of the second image processing model and a discriminator based on true-label data pairs, obtain unlabeled samples as input to the second image processing model.
In embodiments of the present disclosure, the second image processing model may be a generator in a GAN and may be adversarially trained together with the discriminator of the GAN. As an example, FIG. 3 is a schematic diagram of a training framework of the second image processing model in an image processing method provided by an embodiment of the present disclosure. Referring to FIG. 3, in some implementations, the adversarial training of the second image processing model and the discriminator based on the true-label data pairs may include:
obtaining a labeled sample x_i in a true-label data pair as input to the second image processing model G_T; generating, by the second image processing model G_T, a first image from the labeled sample x_i; obtaining, in the true-label data pair, the true-label image y_i corresponding to the labeled sample x_i; discriminating, by the discriminator D, whether the first image and the true-label image y_i are of the same type; training the second image processing model with the goal of the discriminator judging them to be of the same type; and training the discriminator with the goal of the discriminator judging them to be of different types.
Here, i is a positive integer, and N denotes the number of true-label data pairs.
The true-label data pairs are used to supervise the training of the second image processing model G_T. A generative adversarial loss L_{GAN}(G_T, D) may be used to train the second image processing model G_T and the discriminator D: G_T is trained to map x_i to y_i, while D is trained to distinguish the images p_t generated by G_T from the true-label images y_i. The generative adversarial loss can be expressed as:

L_{GAN}(G_T, D) = \mathbb{E}_{(x,y)}[\log D(x, y)] + \mathbb{E}_{x}[\log(1 - D(x, G_T(x)))]

where x denotes the labeled samples, y denotes the true-label images, G_T(x) denotes the first images generated by the second image processing model from the labeled samples, \mathbb{E}_{(x,y)}[\cdot] denotes the expectation over the data (x, y), and \mathbb{E}_{x}[\cdot] denotes the expectation over the data x.

Referring again to FIG. 3, the process of adversarial training of the second image processing model G_T and the discriminator D based on the true-label data pairs may further include: determining a reconstruction loss L_{recon} from the first image and the true-label image y; and training the second image processing model G_T according to the reconstruction loss L_{recon}. The reconstruction loss can be expressed as:

L_{recon} = \mathbb{E}_{(x,y)}[\, \| y - G_T(x) \|_1 \,]

where the symbols have the same meanings as above. Training the second image processing model with the reconstruction loss drives the output of the second image processing model close to the true-label images. The complete optimization objective of the second image processing model G_T can then be expressed as:

G_T^{*} = \arg\min_{G_T} \max_{D} \left( L_{GAN}(G_T, D) + L_{recon} \right)

In the process of adversarial training of the second image processing model and the discriminator based on the true-label data pairs, obtaining unlabeled samples as input to the second image processing model may include, for example: obtaining unlabeled samples as input to the second image processing model in the intervals between rounds of adversarial training on the true-label data pairs. Obtaining unlabeled samples may, for example, be randomly drawing unlabeled samples from an unlabeled sample set, and at least one unlabeled sample may be obtained each time. Since the unlabeled samples are obtained in the intervals of adversarial training, the second image processing model can continue adversarial training after generating candidate pseudo labels. That is, after the first image processing model is trained on pseudo-label images, it can continue to imitate the optimization process of the second image processing model, which not only improves the generalization of the first image processing model but also compensates for the bias that pseudo-label images may introduce, guaranteeing the performance of the first image processing model.

In some implementations, it is also possible to first perform continuous adversarial training on a predetermined proportion (for example, one third or one half) of the true-label data pairs, and then obtain unlabeled samples as input to the second image processing model in the interval before adversarial training on the remaining proportion of true-label data pairs. By first warming up the training on a predetermined proportion of true-label data pairs, the pseudo-label images generated by the second image processing model can better match the structural characteristics of the image domain of the images to be generated, providing better supervision information for the first image processing model to a certain extent.

In some implementations, in the process of adversarial training of the second image processing model and the discriminator based on the true-label data pairs, obtaining unlabeled samples as input to the second image processing model may include: after each adversarial training of the second image processing model and the discriminator on a preset number of acquired true-label data pairs, obtaining unlabeled samples as input to the second image processing model.

In these implementations, obtaining unlabeled samples in the intervals of adversarial training on true-label data pairs may mean obtaining unlabeled samples after adversarial training on each preset number (for example, 1 or 2) of acquired true-label data pairs. The preset number is, to some extent, inversely related to the degree of compression of the training data of the first image processing model. For example, if one unlabeled sample is obtained to generate a candidate pseudo-label image after adversarial training on each single true-label data pair, then, when all candidate pseudo-label images are used to train the first image processing model, the demand for true-label data pairs can be compressed by 50% for the first image processing model; if one unlabeled sample is obtained after adversarial training on every two true-label data pairs, the demand for true-label data pairs can be compressed by 33%. On the other hand, a smaller preset number may also introduce somewhat more computation for the first image processing model to imitate the training process of the second image processing model. The preset number can therefore be set according to the actual situation, balancing the training-data compression and the training computation of the first image processing model.
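As an illustration of this interval-based acquisition, the sketch below (assumed names again; adversarial_step stands for one teacher/discriminator update such as the one sketched earlier, and unlabeled_iter is assumed to be an endless iterator) draws one unlabeled sample after every preset_num true-label pairs and turns it into a candidate pseudo-label:

```python
import torch

def interleave_pseudo_labels(teacher, labeled_loader, unlabeled_iter,
                             adversarial_step, preset_num=1):
    """Run adversarial training and, after every `preset_num` true-label
    pairs, generate one candidate pseudo-label from an unlabeled sample."""
    candidates = []
    for step, (x, y) in enumerate(labeled_loader, start=1):
        adversarial_step(x, y)                   # teacher/discriminator update
        if step % preset_num == 0:               # interval between updates
            x_u = next(unlabeled_iter)           # e.g. itertools.cycle(unlabeled_set)
            with torch.no_grad():
                candidates.append((x_u, teacher(x_u)))  # candidate pseudo-label
    return candidates
```

With preset_num=1 and all candidates kept, pseudo labels make up half of the student's training data, matching the 50% figure above; preset_num=2 corresponds to the 33% figure.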
S220. Generate, by the second image processing model, candidate pseudo-label images from the unlabeled samples.
In this embodiment, the second image processing model may generate candidate pseudo labels from the unlabeled samples using its current model parameters during training. After generating pseudo-label images from the acquired unlabeled samples, the second image processing model may also continue adversarial training on the true-label data pairs, so as to keep optimizing the second image processing model with the supervision information provided by the true-label data pairs and let the first image processing model continue to imitate the optimization process of the second image processing model.
S230. Screen the candidate pseudo-label images by the discriminator to obtain the final pseudo-label images.
After the unlabeled samples drawn from the unlabeled sample set are input into the second image processing model G_T, candidate pseudo-label images can be generated by the second image processing model G_T. Because the second image processing model G_T has never received any supervision information related to these samples, the quality of the candidate pseudo-label images corresponding to different unlabeled samples is uneven.
Here, j is a positive integer, and M denotes the number of unlabeled samples.
Because the discriminator D confronts the second image processing model G_T throughout its training, the discriminator D can reliably judge the quality of the images currently generated by the second image processing model G_T. For each candidate pseudo-label image, if the discriminator D considers the image close to a real image, its quality is high; if the discriminator D can tell that it is not a real image, its quality is low.
In this embodiment, pseudo-label images of higher quality can be selected from a large number of candidate pseudo labels by the discriminator and fed into the first image processing model for training. The selected high-quality pseudo-label images may be fed into the first image processing model for training immediately, or not immediately; for example, they may be fed into the first image processing model in one batch after a certain number of pseudo-label images have been accumulated. The timing of inputting the pseudo-label images is not strictly limited here, and other ways of inputting pseudo-label images into the first image processing model may also be applied; they are not exhaustively enumerated here.
In some implementations, screening the candidate pseudo-label images by the discriminator may include: evaluating the realism of the candidate pseudo-label images by the discriminator to obtain evaluation results; and screening the candidate pseudo-label images according to a preset evaluation criterion and the evaluation results.
Evaluating the realism of each candidate pseudo-label image with the discriminator D yields an evaluation score. The evaluation criterion may be a preset threshold λ_thre. Screening the candidate pseudo-label images according to λ_thre may include: taking the candidate pseudo-label images whose scores exceed λ_thre as pseudo-label images to be input to the first image processing model, and discarding the candidate pseudo-label images whose scores are below λ_thre, thereby expanding the amount of training data of the first image processing model while guaranteeing the quality of the training data.
In these implementations, to guarantee the quality of the pseudo-label images, the discriminator can be used to screen the candidate pseudo-label images, and the selected high-quality pseudo-label images can improve the generalization of the student generator. This screening helps mine the structural features of unlabeled samples of the same style and complements the true-label data pairs in training the first image processing model, alleviating the expensive and time-consuming generation and selection of training data and achieving cost reduction and efficiency improvement.
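A minimal sketch of this screening step, assuming (as before) that the discriminator outputs a realism score in (0, 1) for an (input, image) pair; the threshold value and all names are illustrative:

```python
import torch

def filter_pseudo_labels(discriminator, candidates, lambda_thre=0.5):
    """Keep candidate pseudo-labels whose realism score exceeds lambda_thre."""
    kept = []
    with torch.no_grad():
        for x_u, y_hat in candidates:
            score = discriminator(x_u, y_hat).mean().item()  # realism score
            if score > lambda_thre:
                kept.append((x_u, y_hat))        # high-quality pseudo-label
    return kept
```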
This embodiment of the present disclosure has described the process of generating pseudo-label images. By using the second image processing model trained on the current true-label data to generate pseudo-label data from unlabeled data, additional supervision information can be provided for the training of the first image processing model. The image processing method provided by this embodiment and the image processing methods provided by the above embodiments belong to the same disclosed concept; for technical details not described in detail in this embodiment, reference may be made to the above embodiments, and the same technical features have the same beneficial effects in this embodiment as in the above embodiments.
The examples in the image processing methods provided by this embodiment and the above embodiments of the present disclosure may be combined. The image processing method provided by this embodiment describes the training steps of the first image processing model. During the training of the first image processing model, not only the first images generated by the second image processing model from labeled samples but also at least part of the pseudo-label images generated by the second image processing model from unlabeled samples may be used as supervision information. Training the first image processing model with the labeled distillation loss between the first and second image processing models over the first images, together with the unlabeled distillation loss over the pseudo-label images, can improve the generalization of the first image processing model and the quality of its generated images.
As an example, FIG. 4 is a schematic diagram of a training framework in which the first image processing model uses pseudo-label images as supervision information, in an image processing method provided by an embodiment of the present disclosure. Referring to FIG. 4, in some implementations, when the first image processing model uses a pseudo-label image as supervision information, the first image processing model is trained based on the following steps:
obtaining the unlabeled sample corresponding to the pseudo-label image as input to the first image processing model G_S; generating, by the first image processing model G_S, a second image from the unlabeled sample; determining a distillation loss (which may be called the unlabeled distillation loss) from the pseudo-label image and the second image; and training the first image processing model according to the distillation loss.
The reconstruction loss between the pseudo-label image and the second image may be used as the distillation loss. And/or, in some implementations, determining the distillation loss from the pseudo-label image and the second image may include: determining a perceptual loss L_preu from the feature images in the process of the second image processing model G_T generating the pseudo-label image and the feature images in the process of the first image processing model G_S generating the second image, and using the perceptual loss L_preu as the distillation loss.
When the perceptual loss L_preu is used to measure the difference between the pseudo-label image and the second image, the perceptual loss L_preu may include at least one of the following: a feature reconstruction loss L_fea and a style reconstruction loss L_style.
The feature reconstruction loss L_fea encourages the pseudo-label image and the second image to have similar feature representations, which can be measured by the mapping φ of a pre-trained network, for example a VGG (Visual Geometry Group) network. Writing \hat{y} for the pseudo-label image and \bar{y} for the second image (notation assumed here for presentation), the feature reconstruction loss L_fea can be defined as follows:

L_{fea} = \sum_{j} \frac{1}{C_j H_j W_j} \left\| \phi_j(\hat{y}) - \phi_j(\bar{y}) \right\|_1

where φ_j(x) denotes the activation (i.e., the feature image) of x at the j-th layer of the VGG network; ‖·‖_1 denotes the L1 norm; and C_j × H_j × W_j denotes the dimensions of φ_j(x).
C_j denotes the number of channels, H_j the height, and W_j the width.
The style reconstruction loss L_style is introduced to penalize differences in style features, for example differences in color, texture, and common patterns. The style reconstruction loss L_style can be defined as:

L_{style} = \sum_{j} \left\| G^{\phi}_{j}(\hat{y}) - G^{\phi}_{j}(\bar{y}) \right\|_1

where G^{\phi}_{j}(x) denotes the feature obtained by extracting the Gram matrix of the activation of x at the j-th layer of the VGG network.
It can be considered that the unlabeled distillation loss may include the reconstruction loss and/or the perceptual loss between the pseudo-label image and the second image. When the unlabeled distillation loss includes both the reconstruction loss and the perceptual loss, it may be their sum, a weighted sum, or the like.
In addition, in some implementations, the training of the first image processing model G_S may further include: determining a total variation loss from the second image; correspondingly, training the first image processing model G_S according to the distillation loss may further include: training the first image processing model according to the distillation loss and the total variation loss L_tv.
Introducing the total variation loss L_tv can improve the spatial smoothness of the images output by the first image processing model G_S. Three hyperparameters λ_fea, λ_style, and λ_tv can be used to balance the above losses, in which case the overall unlabeled distillation loss can be defined as follows:

L^{unlabeled}_{kd} = \lambda_{fea} L_{fea} + \lambda_{style} L_{style} + \lambda_{tv} L_{tv}

where λ_fea, λ_style, and λ_tv denote the weights of the feature reconstruction loss L_fea, the style reconstruction loss L_style, and the total variation loss L_tv, respectively.
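The sketch below shows one way these unlabeled distillation terms could be computed in PyTorch, tapping a pre-trained VGG-16 for the feature and Gram-matrix (style) terms. The tap layers, the loss weights, and all names are assumptions of this sketch, not values from the disclosure:

```python
import torch
import torch.nn.functional as F
import torchvision

vgg = torchvision.models.vgg16(weights="IMAGENET1K_V1").features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)                     # frozen; gradients still flow to inputs
TAP_LAYERS = {3, 8, 15, 22}                     # assumed VGG-16 tap points

def vgg_features(img):
    feats, h = [], img
    for j, layer in enumerate(vgg):
        h = layer(h)
        if j in TAP_LAYERS:
            feats.append(h)
    return feats

def gram(f):
    # Gram matrix of a feature map, normalized by its size
    b, c, h, w = f.shape
    f = f.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def unlabeled_distill_loss(second_image, pseudo_label,
                           l_fea=1.0, l_style=1.0, l_tv=1e-4):
    fs, ft = vgg_features(second_image), vgg_features(pseudo_label)
    # feature reconstruction: mean L1 already normalizes by C_j * H_j * W_j
    fea = sum(F.l1_loss(a, b) for a, b in zip(fs, ft))
    # style reconstruction over Gram matrices
    style = sum(F.l1_loss(gram(a), gram(b)) for a, b in zip(fs, ft))
    # total variation on the student output, for spatial smoothness
    tv = (second_image[..., 1:, :] - second_image[..., :-1, :]).abs().mean() \
       + (second_image[..., :, 1:] - second_image[..., :, :-1]).abs().mean()
    return l_fea * fea + l_style * style + l_tv * tv
```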
In some implementations, the images generated by the second image processing model G_T during training may further include: the first images generated by the second image processing model G_T from the labeled samples x_i in the true-label data pairs. As an example, FIG. 5 is a schematic diagram of a training framework in which the first image processing model G_S uses a first image as supervision information, in an image processing method provided by an embodiment of the present disclosure. Referring to FIG. 5, when the first image processing model G_S uses the first image as supervision information, the first image processing model G_S is trained based on the following steps:
obtaining the labeled sample x_i corresponding to the first image as input to the first image processing model G_S; generating, by the first image processing model G_S, a third image from the labeled sample x_i; determining a distillation loss (which may be called the labeled distillation loss) from the first image and the third image; and training the first image processing model G_S according to the distillation loss.
The computation of the labeled distillation loss may follow that of the unlabeled distillation loss. The labeled distillation loss may likewise include the reconstruction loss and/or the perceptual loss between the first image and the third image, and the perceptual loss may likewise include at least one of the feature reconstruction loss L_fea and the style reconstruction loss L_style. In addition, the first image processing model G_S may also be trained according to the labeled distillation loss together with the total variation loss L_tv of the third image.
When the training of the first image processing model G_S involves both the labeled distillation loss and the unlabeled distillation loss, the total distillation loss L_kd of the first image processing model can be defined as:

L_{kd} = L^{labeled}_{kd} + \lambda_{unlabeled} \, L^{unlabeled}_{kd}

where λ_unlabeled denotes the ratio of the contributions of the labeled samples and the unlabeled samples to the loss value.
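Under the same assumed names as the earlier sketches, one student update combining the two distillation terms might look like this:

```python
import torch

def student_step(student, opt_s, x_labeled, first_image,
                 x_unlabeled, pseudo_label, distill_loss_fn,
                 lambda_unlabeled=1.0):
    third_image = student(x_labeled)      # student output on the labeled sample
    second_image = student(x_unlabeled)   # student output on the unlabeled sample
    l_kd = distill_loss_fn(third_image, first_image) \
         + lambda_unlabeled * distill_loss_fn(second_image, pseudo_label)
    opt_s.zero_grad()
    l_kd.backward()
    opt_s.step()
    return l_kd.item()
```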
As an example, FIG. 6 is an overall training framework diagram of the first image processing model in an image processing method provided by an embodiment of the present disclosure. Referring to FIG. 6, during the training of the first image processing model, collaborative compression of the model dimension and the training-data dimension is achieved simultaneously.
Referring to FIG. 6, the second image processing model and the discriminator can be adversarially trained on true-label data pairs, and after each iterative optimization of the second image processing model, the labeled distillation loss can guide the first image processing model to imitate the optimization process of the second image processing model, realizing online distillation of the first image processing model. The first image processing model may be called the student generator and the second image processing model the teacher generator. Online distillation compresses the model dimension, which facilitates deploying the small-scale, well-performing first image processing model on devices with limited resources.
During adversarial training, the second image processing model can also obtain unlabeled samples to generate candidate pseudo-label images, and the candidate pseudo-label images can be screened by the discriminator to obtain high-quality pseudo-label images. The screened pseudo-label images can introduce the unlabeled distillation loss for the first image processing model, which is used to train it. Generating pseudo-label images expands the amount of training data of the first image processing model without the extra cost of producing paired data, thereby compressing the training-data dimension and improving the generalization of the first image processing model.
This embodiment of the present disclosure has described the training steps of the first image processing model. During its training, not only the first images generated by the second image processing model from labeled samples but also the pseudo-label images generated by the second image processing model from unlabeled samples may be used as supervision information. Training the first image processing model with the labeled distillation loss over the first images and the unlabeled distillation loss over the pseudo-label images can improve the generalization of the first image processing model and the quality of its generated images. The image processing method provided by this embodiment and the image processing methods provided by the above embodiments belong to the same disclosed concept; for technical details not described in detail in this embodiment, reference may be made to the above embodiments, and the same technical features have the same beneficial effects in this embodiment as in the above embodiments.
FIG. 7 is a schematic structural diagram of an image processing apparatus provided by an embodiment of the present disclosure. The image processing apparatus provided by this embodiment can perform image processing based on a small-scale model after the small-scale model has been distilled by a large-scale model.
As shown in FIG. 7, the image processing apparatus provided by this embodiment of the present disclosure may include:
an image acquisition module 710 configured to obtain an original image to be processed;
an input module 720 configured to input the original image into a first image processing model;
a generation module 730 configured to process the original image by the first image processing model to generate a target image, wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training of the first image processing model includes at least part of the images generated by the second image processing model during its training, and the model scale of the first image processing model is smaller than the model scale of the second image processing model; and
an output module 740 configured to output the target image.
In some implementations, the second image processing model is trained on true-label data pairs during its training and generates pseudo-label images from unlabeled samples. Correspondingly, the image processing apparatus may include a model training module, and the model training module may include a pseudo-label generation unit.
The pseudo-label generation unit may be configured to generate pseudo-label images based on the following steps:
in the process of adversarial training of the second image processing model and a discriminator based on the true-label data pairs, obtaining unlabeled samples as input to the second image processing model;
generating, by the second image processing model, candidate pseudo-label images from the unlabeled samples;
screening the candidate pseudo-label images by the discriminator to obtain the final pseudo-label images.
In some implementations, the pseudo-label generation unit may be configured to:
after each adversarial training of the second image processing model and the discriminator on a preset number of acquired true-label data pairs, obtain unlabeled samples as input to the second image processing model.
In some implementations, the pseudo-label generation unit may be configured to:
evaluate the realism of the candidate pseudo-label images by the discriminator to obtain evaluation results;
screen the candidate pseudo-label images according to a preset evaluation criterion and the evaluation results.
In some implementations, the model training module may include a second image processing model training unit.
The second image processing model training unit may be configured to adversarially train the second image processing model and the discriminator based on the true-label data pairs.
The second image processing model training unit may be configured to:
obtain a labeled sample in a true-label data pair as input to the second image processing model;
generate, by the second image processing model, a first image from the labeled sample;
obtain, in the true-label data pair, the true-label image corresponding to the labeled sample;
discriminate, by the discriminator, whether the first image and the true-label image are of the same type;
train the second image processing model with the goal of the discriminator judging them to be of the same type;
train the discriminator with the goal of the discriminator judging them to be of different types.
In some implementations, the second image processing model training unit may be further configured to:
determine a reconstruction loss from the first image and the true-label image;
train the second image processing model according to the reconstruction loss.
In some implementations, the model training module may include a first image processing model training unit.
When the first image processing model uses a pseudo-label image as supervision information, the first image processing model training unit may be configured to train the first image processing model based on the following steps:
obtaining the unlabeled sample corresponding to the pseudo-label image as input to the first image processing model;
generating, by the first image processing model, a second image from the unlabeled sample;
determining a distillation loss from the pseudo-label image and the second image;
training the first image processing model according to the distillation loss.
In some implementations, the first image processing model training unit may be configured to:
determine a perceptual loss from the feature images in the process of the second image processing model generating the pseudo-label image and the feature images in the process of the first image processing model generating the second image;
use the perceptual loss as the distillation loss.
In some implementations, the perceptual loss includes at least one of the following: a feature reconstruction loss and a style reconstruction loss.
In some implementations, the first image processing model training unit may be further configured to:
determine a total variation loss from the second image;
correspondingly, training the first image processing model according to the distillation loss includes: training the first image processing model according to the distillation loss and the total variation loss.
In some implementations, the images generated by the second image processing model during training further include:
the first images generated by the second image processing model from the labeled samples in the true-label data pairs;
when the first image processing model uses a first image as supervision information, the first image processing model training unit may be configured to train the first image processing model based on the following steps:
obtaining the labeled sample corresponding to the first image as input to the first image processing model;
generating, by the first image processing model, a third image from the labeled sample;
determining a distillation loss from the first image and the third image;
training the first image processing model according to the distillation loss.
The image processing apparatus provided by the embodiments of the present disclosure can perform the image processing method provided by any embodiment of the present disclosure and has the functional modules and beneficial effects corresponding to the method.
It is worth noting that the units and modules included in the above apparatus are divided only according to functional logic, and the division is not limited to the above as long as the corresponding functions can be realized; in addition, the specific names of the functional units are only for the convenience of distinguishing them from one another and are not intended to limit the scope of protection of the embodiments of the present disclosure.
Referring now to FIG. 8, it shows a schematic structural diagram of an electronic device (for example, the terminal device or server in FIG. 8) 800 suitable for implementing embodiments of the present disclosure. Terminal devices in embodiments of the present disclosure may include, but are not limited to, mobile terminals such as mobile phones, notebook computers, digital broadcast receivers, personal digital assistants (Personal Digital Assistant, PDA), tablet computers (PAD), portable multimedia players (Portable Media Player, PMP), and in-vehicle terminals (for example, in-vehicle navigation terminals), as well as fixed terminals such as digital televisions (Television, TV) and desktop computers. The electronic device shown in FIG. 8 is only an example and should not impose any limitation on the functions and scope of use of the embodiments of the present disclosure.
As shown in FIG. 8, the electronic device 800 may include a processing apparatus (for example, a central processing unit or a graphics processing unit) 801, which can perform various appropriate actions and processing according to a program stored in a read-only memory (Read-Only Memory, ROM) 802 or a program loaded from a storage apparatus 808 into a random access memory (Random Access Memory, RAM) 803. The RAM 803 also stores various programs and data required for the operation of the electronic device 800. The processing apparatus 801, the ROM 802, and the RAM 803 are connected to one another via a bus 804. An input/output (Input/Output, I/O) interface 805 is also connected to the bus 804.
Generally, the following apparatuses may be connected to the I/O interface 805: input apparatuses 806 including, for example, a touch screen, a touch pad, a keyboard, a mouse, a camera, a microphone, an accelerometer, and a gyroscope; output apparatuses 807 including, for example, a liquid crystal display (Liquid Crystal Display, LCD), a speaker, and a vibrator; storage apparatuses 808 including, for example, a magnetic tape and a hard disk; and a communication apparatus 809. The communication apparatus 809 may allow the electronic device 800 to communicate wirelessly or by wire with other devices to exchange data. Although FIG. 8 shows the electronic device 800 with various apparatuses, it should be understood that it is not required to implement or have all of the apparatuses shown; more or fewer apparatuses may alternatively be implemented or provided.
In particular, according to embodiments of the present disclosure, the process described above with reference to the flowchart may be implemented as a computer software program. For example, embodiments of the present disclosure include a computer program product, which includes a computer program carried on a non-transitory computer-readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program may be downloaded and installed from a network through the communication apparatus 809, installed from the storage apparatus 808, or installed from the ROM 802. When the computer program is executed by the processing apparatus 801, the above functions defined in the image processing method of the embodiments of the present disclosure are performed.
The electronic device provided by this embodiment of the present disclosure and the image processing method provided by the above embodiments belong to the same disclosed concept; for technical details not described in detail in this embodiment, reference may be made to the above embodiments, and this embodiment has the same beneficial effects as the above embodiments.
An embodiment of the present disclosure provides a computer storage medium on which a computer program is stored, which, when executed by a processor, implements the image processing method provided by the above embodiments.
It should be noted that the computer-readable medium described above in the present disclosure may be a computer-readable signal medium, a computer-readable storage medium, or any combination of the two. The computer-readable storage medium may be, for example, but is not limited to, an electrical, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the above. Examples of computer-readable storage media may include, but are not limited to: an electrical connection with one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM) or flash memory (FLASH), an optical fiber, a portable compact disc read-only memory (Compact Disc Read-Only Memory, CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above. In the present disclosure, a computer-readable storage medium may be any tangible medium containing or storing a program that can be used by or in combination with an instruction execution system, apparatus, or device. In the present disclosure, a computer-readable signal medium may include a data signal propagated in baseband or as part of a carrier wave, which carries computer-readable program code. Such a propagated data signal may take many forms, including but not limited to an electromagnetic signal, an optical signal, or any suitable combination of the above. A computer-readable signal medium may also be any computer-readable medium other than a computer-readable storage medium; the computer-readable signal medium can send, propagate, or transmit a program for use by or in combination with an instruction execution system, apparatus, or device. The program code contained on the computer-readable medium may be transmitted by any appropriate medium, including but not limited to: an electric wire, an optical cable, radio frequency (Radio Frequency, RF), etc., or any suitable combination of the above.
In some implementations, the client and the server may communicate using any currently known or future-developed network protocol such as HTTP (Hyper Text Transfer Protocol), and may be interconnected with digital data communication in any form or medium (for example, a communication network). Examples of communication networks include a local area network (Local Area Network, LAN), a wide area network (Wide Area Network, WAN), an internetwork (for example, the Internet), and a peer-to-peer network (for example, an ad hoc peer-to-peer network), as well as any currently known or future-developed network.
The above computer-readable medium may be contained in the above electronic device, or may exist separately without being assembled into the electronic device.
The above computer-readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to:
obtain an original image to be processed; input the original image into a first image processing model; process the original image by the first image processing model to generate a target image, wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training of the first image processing model includes at least part of the images generated by the second image processing model during its training, and the model scale of the first image processing model is smaller than the model scale of the second image processing model; and output the target image.
The storage medium may be a non-transitory storage medium.
Computer program code for performing the operations of the present disclosure may be written in one or more programming languages or combinations thereof, including but not limited to object-oriented programming languages such as Java, Smalltalk, and C++, as well as conventional procedural programming languages such as the "C" language or similar programming languages. The program code may be executed entirely on the user's computer, partly on the user's computer, as an independent software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case involving a remote computer, the remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (for example, through the Internet using an Internet service provider).
The flowcharts and block diagrams in the drawings illustrate the possible architectures, functions, and operations of methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in a flowchart or block diagram may represent a module, a program segment, or a part of code, which contains one or more executable instructions for implementing the specified logical functions. It should also be noted that, in some alternative implementations, the functions noted in the blocks may occur in an order different from that noted in the drawings. For example, two blocks shown in succession may actually be executed substantially in parallel, or they may sometimes be executed in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by special-purpose hardware-based systems that perform the specified functions or operations, or by combinations of special-purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented in software or in hardware. The names of the units and modules do not, in some cases, constitute a limitation on the units or modules themselves.
The functions described above herein may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that can be used include: field programmable gate arrays (Field Programmable Gate Array, FPGA), application-specific integrated circuits (Application Specific Integrated Circuit, ASIC), application-specific standard products (Application Specific Standard Parts, ASSP), systems on chip (System on Chip, SOC), complex programmable logic devices (Complex Programmable Logic Device, CPLD), and so on.
In the context of the present disclosure, a machine-readable medium may be a tangible medium that may contain or store a program for use by or in combination with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the above. Examples of machine-readable storage media would include an electrical connection based on one or more wires, a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM) or flash memory, an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the above.
According to one or more embodiments of the present disclosure, an image processing method is provided, including:
obtaining an original image to be processed;
inputting the original image into a first image processing model;
processing the original image by the first image processing model to generate a target image;
wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training of the first image processing model includes at least part of the images generated by the second image processing model during its training, and the model scale of the first image processing model is smaller than the model scale of the second image processing model;
outputting the target image.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, the second image processing model is trained on true-label data pairs during its training and generates pseudo-label images from unlabeled samples, the pseudo-label images being generated based on the following steps:
in the process of adversarial training of the second image processing model and a discriminator based on the true-label data pairs, obtaining unlabeled samples as input to the second image processing model;
generating, by the second image processing model, candidate pseudo-label images from the unlabeled samples;
screening the candidate pseudo-label images by the discriminator to obtain the final pseudo-label images.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, in the process of adversarial training of the second image processing model and the discriminator based on the true-label data pairs, obtaining unlabeled samples as input to the second image processing model includes:
after each adversarial training of the second image processing model and the discriminator on a preset number of acquired true-label data pairs, obtaining unlabeled samples as input to the second image processing model.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, screening the candidate pseudo-label images by the discriminator includes:
evaluating the realism of the candidate pseudo-label images by the discriminator to obtain evaluation results;
screening the candidate pseudo-label images according to a preset evaluation criterion and the evaluation results.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, the adversarial training of the second image processing model and the discriminator based on the true-label data pairs includes:
obtaining a labeled sample in a true-label data pair as input to the second image processing model;
generating, by the second image processing model, a first image from the labeled sample;
obtaining, in the true-label data pair, the true-label image corresponding to the labeled sample;
discriminating, by the discriminator, whether the first image and the true-label image are of the same type;
training the second image processing model with the goal of the discriminator judging them to be of the same type;
training the discriminator with the goal of the discriminator judging them to be of different types.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, the process of adversarial training of the second image processing model and the discriminator based on the true-label data pairs further includes:
determining a reconstruction loss from the first image and the true-label image;
training the second image processing model according to the reconstruction loss.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, when the first image processing model uses the pseudo-label image as supervision information, the first image processing model is trained based on the following steps:
obtaining the unlabeled sample corresponding to the pseudo-label image as input to the first image processing model;
generating, by the first image processing model, a second image from the unlabeled sample;
determining a distillation loss from the pseudo-label image and the second image;
training the first image processing model according to the distillation loss.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, determining a distillation loss from the pseudo-label image and the second image includes:
determining a perceptual loss from the feature images in the process of the second image processing model generating the pseudo-label image and the feature images in the process of the first image processing model generating the second image;
using the perceptual loss as the distillation loss.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, the perceptual loss includes at least one of the following: a feature reconstruction loss and a style reconstruction loss.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, the training of the first image processing model further includes:
determining a total variation loss from the second image;
correspondingly, training the first image processing model according to the distillation loss includes: training the first image processing model according to the distillation loss and the total variation loss.
According to one or more embodiments of the present disclosure, an image processing method is provided, further including:
in some implementations, the images generated by the second image processing model during training further include:
the first images generated by the second image processing model from the labeled samples in the true-label data pairs;
correspondingly, when the first image processing model uses the first image as supervision information, the first image processing model is trained based on the following steps:
obtaining the labeled sample corresponding to the first image as input to the first image processing model;
generating, by the first image processing model, a third image from the labeled sample;
determining a distillation loss from the first image and the third image;
training the first image processing model according to the distillation loss.
According to one or more embodiments of the present disclosure, the first image processing model is applied to lightweight terminal devices.
According to one or more embodiments of the present disclosure, an image processing apparatus is provided, including:
an image acquisition module configured to obtain an original image to be processed;
an input module configured to input the original image into a first image processing model;
a generation module configured to process the original image by the first image processing model to generate a target image, wherein the first image processing model and a second image processing model are generated through online alternating training, the supervision information during the training of the first image processing model includes at least part of the images generated by the second image processing model during its training, and the model scale of the first image processing model is smaller than the model scale of the second image processing model; and
an output module configured to output the target image.
Those skilled in the art should understand that the scope of the disclosure involved in the present disclosure is not limited to embodiments formed by particular combinations of the above technical features, and should also cover other embodiments formed by any combination of the above technical features or their equivalent features without departing from the above disclosed concept, for example, embodiments formed by replacing the above features with technical features having similar functions disclosed in (but not limited to) the present disclosure.
Furthermore, although the operations are depicted in a particular order, this should not be understood as requiring that these operations be performed in the particular order shown or in sequential order. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, although several implementation details are contained in the above discussion, these should not be construed as limiting the scope of the present disclosure. Certain features described in the context of separate embodiments can also be implemented in combination in a single embodiment. Conversely, various features described in the context of a single embodiment can also be implemented in multiple embodiments separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or logical actions of methods, it should be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or actions described above. Rather, the specific features and actions described above are merely example forms of implementing the claims.

Claims (15)

  1. An image processing method, comprising:
    obtaining an original image to be processed;
    inputting the original image into a first image processing model;
    processing the original image by the first image processing model to generate a target image;
    wherein the first image processing model and a second image processing model are generated through online alternating training, supervision information during the training of the first image processing model comprises at least part of images generated by the second image processing model during its training, and a model scale of the first image processing model is smaller than a model scale of the second image processing model;
    outputting the target image.
  2. The method according to claim 1, wherein the second image processing model is trained on true-label data pairs during its training and generates pseudo-label images from unlabeled samples, the pseudo-label images being generated based on the following steps:
    in a process of adversarial training of the second image processing model and a discriminator based on the true-label data pairs, obtaining unlabeled samples as input to the second image processing model;
    generating, by the second image processing model, candidate pseudo-label images from the unlabeled samples;
    screening the candidate pseudo-label images by the discriminator to obtain final pseudo-label images.
  3. The method according to claim 2, wherein, in the process of adversarial training of the second image processing model and the discriminator based on the true-label data pairs, obtaining unlabeled samples as input to the second image processing model comprises:
    after each adversarial training of the second image processing model and the discriminator on a preset number of acquired true-label data pairs, obtaining unlabeled samples as input to the second image processing model.
  4. The method according to claim 2, wherein screening the candidate pseudo-label images by the discriminator comprises:
    evaluating realism of the candidate pseudo-label images by the discriminator to obtain evaluation results;
    screening the candidate pseudo-label images according to a preset evaluation criterion and the evaluation results.
  5. The method according to claim 2, wherein the adversarial training of the second image processing model and the discriminator based on the true-label data pairs comprises:
    obtaining a labeled sample in a true-label data pair as input to the second image processing model;
    generating, by the second image processing model, a first image from the labeled sample;
    obtaining, in the true-label data pair, a true-label image corresponding to the labeled sample;
    discriminating, by the discriminator, whether the first image and the true-label image are of a same type;
    training the second image processing model with a goal of the discriminator judging them to be of the same type;
    training the discriminator with a goal of the discriminator judging them to be of different types.
  6. The method according to claim 5, wherein the process of adversarial training of the second image processing model and the discriminator based on the true-label data pairs further comprises:
    determining a reconstruction loss from the first image and the true-label image;
    training the second image processing model according to the reconstruction loss.
  7. The method according to claim 2, wherein, when the first image processing model uses the pseudo-label image as supervision information, the first image processing model is trained based on the following steps:
    obtaining an unlabeled sample corresponding to the pseudo-label image as input to the first image processing model;
    generating, by the first image processing model, a second image from the unlabeled sample;
    determining a distillation loss from the pseudo-label image and the second image;
    training the first image processing model according to the distillation loss.
  8. The method according to claim 7, wherein determining a distillation loss from the pseudo-label image and the second image comprises:
    determining a perceptual loss from feature images in a process of the second image processing model generating the pseudo-label image and feature images in a process of the first image processing model generating the second image;
    using the perceptual loss as the distillation loss.
  9. The method according to claim 8, wherein the perceptual loss comprises at least one of the following: a feature reconstruction loss and a style reconstruction loss.
  10. The method according to claim 7, wherein the training of the first image processing model further comprises:
    determining a total variation loss from the second image;
    wherein training the first image processing model according to the distillation loss comprises: training the first image processing model according to the distillation loss and the total variation loss.
  11. The method according to claim 1, wherein the images generated by the second image processing model during its training further comprise:
    a first image generated by the second image processing model from a labeled sample in a true-label data pair;
    when the first image processing model uses the first image as supervision information, the first image processing model is trained based on the following steps:
    obtaining the labeled sample corresponding to the first image as input to the first image processing model;
    generating, by the first image processing model, a third image from the labeled sample;
    determining a distillation loss from the first image and the third image;
    training the first image processing model according to the distillation loss.
  12. The method according to claim 1, wherein the first image processing model is applied to lightweight terminal devices.
  13. An image processing apparatus, comprising:
    an image acquisition module configured to obtain an original image to be processed;
    an input module configured to input the original image into a first image processing model;
    a generation module configured to process the original image by the first image processing model to generate a target image, wherein the first image processing model and a second image processing model are generated through online alternating training, supervision information during the training of the first image processing model comprises at least part of images generated by the second image processing model during its training, and a model scale of the first image processing model is smaller than a model scale of the second image processing model;
    an output module configured to output the target image.
  14. An electronic device, comprising:
    one or more processors;
    a storage apparatus configured to store one or more programs,
    wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the image processing method according to any one of claims 1-12.
  15. A storage medium containing computer-executable instructions which, when executed by a computer processor, are used to perform the image processing method according to any one of claims 1-12.
PCT/CN2023/107857 2022-07-22 2023-07-18 Image processing method and apparatus, electronic device, and storage medium WO2024017230A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202210873377.2A CN115936980B (zh) 2022-07-22 2022-07-22 Image processing method and apparatus, electronic device, and storage medium
CN202210873377.2 2022-07-22

Publications (1)

Publication Number Publication Date
WO2024017230A1 true WO2024017230A1 (zh) 2024-01-25

Family

ID=86651232

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2023/107857 WO2024017230A1 (zh) 2022-07-22 2023-07-18 Image processing method and apparatus, electronic device, and storage medium

Country Status (2)

Country Link
CN (1) CN115936980B (zh)
WO (1) WO2024017230A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115936980B (zh) * 2022-07-22 2023-10-20 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034219A (zh) * 2018-07-12 2018-12-18 上海商汤智能科技有限公司 Multi-label category prediction method and apparatus for images, electronic device, and storage medium
EP3767590A1 (en) * 2019-07-19 2021-01-20 Robert Bosch GmbH Device and method for training a generative model
CN114332135A (zh) * 2022-03-10 2022-04-12 之江实验室 Semi-supervised medical image segmentation method and apparatus based on dual-model interactive learning
CN114511743A (zh) * 2022-01-29 2022-05-17 北京百度网讯科技有限公司 Detection model training and target detection method, apparatus, device, medium, and product
CN115936980A (zh) * 2022-07-22 2023-04-07 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Family Cites Families (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110796619B (zh) * 2019-10-28 2022-08-30 腾讯科技（深圳）有限公司 Image processing model training method and apparatus, electronic device, and storage medium
CN112785507A (zh) * 2019-11-07 2021-05-11 上海耕岩智能科技有限公司 Image processing method and apparatus, storage medium, and terminal
CN111832605B (zh) * 2020-05-22 2023-12-08 北京嘀嘀无限科技发展有限公司 Training method and apparatus for unsupervised image classification model, and electronic device
CN111898696B (zh) * 2020-08-10 2023-10-27 腾讯云计算（长沙）有限责任公司 Method and apparatus for generating pseudo labels and label prediction models, medium, and device
CN113298152B (zh) * 2021-05-26 2023-12-19 深圳市优必选科技股份有限公司 Model training method and apparatus, terminal device, and computer-readable storage medium
CN113449851A (zh) * 2021-07-15 2021-09-28 北京字跳网络技术有限公司 Data processing method and device
CN113705709A (zh) * 2021-09-02 2021-11-26 新疆信息产业有限责任公司 Improved semi-supervised image classification method, device, and storage medium
CN113920370A (zh) * 2021-10-25 2022-01-11 上海商汤智能科技有限公司 Model training method, target detection method, apparatus, device, and storage medium
CN114373128A (zh) * 2021-12-30 2022-04-19 山东锋士信息技术有限公司 Remote sensing monitoring method for river and lake disorders based on class-adaptive pseudo-label generation
CN114581732A (zh) * 2022-03-04 2022-06-03 北京百度网讯科技有限公司 Image processing and model training method, apparatus, device, and storage medium

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109034219A (zh) * 2018-07-12 2018-12-18 上海商汤智能科技有限公司 Multi-label category prediction method and apparatus for images, electronic device, and storage medium
EP3767590A1 (en) * 2019-07-19 2021-01-20 Robert Bosch GmbH Device and method for training a generative model
CN114511743A (zh) * 2022-01-29 2022-05-17 北京百度网讯科技有限公司 Detection model training and target detection method, apparatus, device, medium, and product
CN114332135A (zh) * 2022-03-10 2022-04-12 之江实验室 Semi-supervised medical image segmentation method and apparatus based on dual-model interactive learning
CN115936980A (zh) * 2022-07-22 2023-04-07 北京字跳网络技术有限公司 Image processing method and apparatus, electronic device, and storage medium

Also Published As

Publication number Publication date
CN115936980A (zh) 2023-04-07
CN115936980B (zh) 2023-10-20

Similar Documents

Publication Publication Date Title
WO2020155907A1 Method and apparatus for generating cartoon style conversion model
US20230394671A1 (en) Image segmentation method and apparatus, and device, and storage medium
CN111402112B Image processing method and apparatus, electronic device, and computer-readable medium
WO2022105638A1 Image degradation processing method and apparatus, storage medium, and electronic device
CN111666416B Method and apparatus for generating semantic matching model
WO2022227886A1 Super-resolution restoration network model generation method, and image super-resolution restoration method and apparatus
WO2022252881A1 Image processing method and apparatus, readable medium, and electronic device
CN110021052B Method and apparatus for generating fundus image generation model
WO2024017230A1 Image processing method and apparatus, electronic device, and storage medium
CN111738010B Method and apparatus for generating semantic matching model
WO2023217117A1 Image evaluation method and apparatus, device, storage medium, and program product
WO2023035877A1 Video recognition method and apparatus, readable medium, and electronic device
WO2021190229A1 Three-dimensional video processing method and apparatus, readable storage medium, and electronic device
CN111813889B Question information sorting method and apparatus, medium, and electronic device
WO2023035896A1 Video recognition method and apparatus, readable medium, and electronic device
CN112381717A Image processing method, model training method, apparatus, medium, and device
WO2023116138A1 Multi-task model modeling method, promotional content processing method, and related apparatuses
WO2023202543A1 Text processing method and apparatus, electronic device, and storage medium
CN117290477A Generative building knowledge question-answering method based on secondary retrieval enhancement
CN118071428A Intelligent processing system and method for multimodal monitoring data
CN113610034B Method and apparatus for recognizing person entities in video, storage medium, and electronic device
WO2024061311A1 Model training method, and image classification method and apparatus
CN110362698A Picture information generation method and apparatus, mobile terminal, and storage medium
WO2023202361A1 Video generation method and apparatus, medium, and electronic device
CN111797665B Method and apparatus for converting video

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 23842293

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 2023842293

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2023842293

Country of ref document: EP

Effective date: 20240626