CN111192206A - Method for improving image definition - Google Patents
Method for improving image definition
- Publication number
- CN111192206A (application CN201911217831.3A)
- Authority
- CN
- China
- Prior art keywords
- image
- loss function
- network
- generator
- input
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G06T5/70
- G06N3/045 — Combinations of networks (G06N3/04 Architecture, e.g. interconnection topology; G06N3/02 Neural networks; G06N3/00 Computing arrangements based on biological models)
- G06N3/084 — Backpropagation, e.g. using gradient descent (G06N3/08 Learning methods)
- G06T2207/10004 — Still image; Photographic image (G06T2207/10 Image acquisition modality; G06T2207/00 Indexing scheme for image analysis or image enhancement)
Abstract
The invention discloses a method for improving image definition, which comprises the following steps: acquiring characteristic information of an input image by using a generator network, and generating an image with the generator network according to the acquired characteristic information; transmitting the image generated by the generator network and the real image into a discriminator network, judging with the discriminator network whether the generated image is a real image, and extracting the characteristic information of the generated image and the real image; calculating the loss functions according to the extracted characteristic information, and continuously updating the parameters with the Adam algorithm until they reach their optimal values; and finishing training to generate a clear image. The method can effectively improve image definition and has extremely high application value.
Description
Technical Field
The invention belongs to the fields of computer vision and deep learning, touching on several disciplines including computer vision, digital image processing, artificial intelligence, and computer science, and particularly relates to a method for improving image definition.
Background
With the development of imaging technology, people's requirements on image definition keep rising, and high-definition equipment has already been applied on a large scale in many aspects of daily life; for example, various ultra-high-definition devices were showcased during the celebrations of the seventieth anniversary of the founding of the People's Republic of China. Improving image definition is therefore particularly important: it provides people with a better sensory experience and has broad application space in many research fields.
Image definition can be degraded for many reasons, including atmospheric conditions, illumination, and human factors, as well as the shooting mode, image enlargement, aging equipment, physical damage, and the like, so improving image definition has become an urgent need. General surveillance covers large-scale scenes, such as pedestrian flow on a square or traffic flow on a highway, whereas face recognition, license-plate recognition, and the like require high-definition images; target recognition places higher requirements on picture definition, so that text, license plates, persons, and marks in the image can be presented accurately; detailed feature recognition is mainly used in special venues such as bank counters, ATMs, and casinos, where more detailed features must be obtained on the basis of object recognition. Improving image definition is therefore significant in many respects.
However, there is no effective method for further improving the image definition in the prior art, so a new technical solution is urgently needed to solve the problem.
Disclosure of Invention
The purpose of the invention is as follows: in order to overcome the defects in the prior art, a method for improving image definition is provided, which can effectively improve image definition.
The technical scheme is as follows: in order to achieve the above object, the present invention provides a method for improving image definition, comprising the following steps:
s1: acquiring characteristic information of an input image by using a generator network, and generating an image by using the generator network according to the acquired characteristic information;
s2: respectively transmitting the image generated by the generator network and the real image into a discriminator network, judging with the discriminator network whether the generated image is a real image, and extracting the characteristic information of the generated image and the real image;
s3: calculating the loss functions according to the characteristic information extracted in step S2, and continuously updating the parameters with the Adam algorithm until they reach their optimal values;
s4: and finishing training to generate a clear image.
Further, in step S1, before the generator network acquires characteristic information of the input image, the input image is subjected to enhancement preprocessing; the enhancement preprocessing operations are performed without changing the image pixel information and include cropping, flipping, and the like.
Further, the enhancement preprocessing specifically comprises: mapping the value of each pixel of the image from 0-255 to 0-1, randomly cropping a 24x24 patch, inputting it into the model, and randomly flipping it left-right/up-down; if the input image is flipped left-right/up-down, the corresponding real high-definition image is subjected to the same operation during training.
Further, step S1 specifically comprises: in the generator network, several residual network blocks perform feature extraction on the input image stage by stage; except for the first residual block, the input of each residual block is the pixel-level sum of the input and the output of the previous residual block; finally, the input of the first residual block and the output of the last residual block are summed at pixel level, and an image with the same number of channels as the input image is obtained through dimensionality reduction as the output of the whole generator network.
Further, the pixel-level summation in step S1 is implemented by a residual network and skip connections, specifically: the feature maps carried from the shallow network into the deep network are stacked with the deep feature maps, and the subsequent convolutional layers weight the influence of the shallow and deep features on the final prediction.
Further, the judgment method of the discriminator network in step S2 is: after several convolution-activation-regularization operations, the features of the image generated by the generator network and of the real image are extracted; the generated image is then judged to be true or false through a fully connected layer and a Sigmoid activation function, and the result is applied in the discriminator loss function.
Further, the loss function in step S3 includes a generator loss function and a discriminator loss function, and the discriminator loss function is calculated as follows:

$l_{dis} = -\log D(I_T) - \log\bigl(1 - D(G(I_L))\bigr)$

where $l_{dis}$ represents the discriminator loss function, $I_T$ the real high-definition image, $I_L$ the blurred image input to the generator network, and $G(I_L)$ the image generated by the generator network. We wish the discriminator output $D(I_T)$ to be as large as possible, because the discriminator's judgment of the real image must be true, and $D(G(I_L))$ to be as small as possible, because its judgment of the generated image must be false; as they stand, the two terms have different growth tendencies and cannot be optimized together, so $1 - D(G(I_L))$ is computed instead, after which the two terms share the same optimization objective and can be optimized jointly. At this point the objective is a maximum, whereas the program implementation seeks a minimum, so the negative is taken, giving the expression above as the discriminator loss function;
the generator loss function comprises two parts, a content loss function and an adversarial loss function:
the content loss function is calculated as follows:

$l_{con} = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left| I_T^{x,y} - G(I_L)^{x,y} \right|$

where $l_{con}$ represents the content loss function based on the L1 norm, $W$ and $H$ represent the width and height of the image respectively, $I_T^{x,y}$ represents each pixel of the real high-definition image, $I_L$ represents the blurred image input to the generator network, and $G(I_L)^{x,y}$ represents each pixel of the image generated by the generator network through feature extraction and dimensionality reduction; by optimizing the L1 loss function between the generated image and the real high-definition image, the minimum Manhattan distance is solved as the optimal solution;
the calculation formula of the adversarial loss function is as follows:

$l_{adv} = -\log D(G(I_L))$

where $l_{adv}$ represents the adversarial loss function and $D(G(I_L))$ represents the discriminator's output for the image generated by the generator; the logarithm operation is used in all the calculation formulas to reduce the influence of one-sidedness and fluctuation in the data distribution, which avoids many numerical problems in the actual program implementation.
Furthermore, the Adam algorithm in step S3 introduces a second-order moment gradient correction and applies learning-rate decay during training; by calculating the derivatives of the losses with respect to the parameters to be trained in the specified computation graph, it continuously updates the parameters until they reach their optimal values.
Beneficial effects: compared with the prior art, the invention uses a GAN framework; the generator network takes the blurred image as input, with the real high-definition image used for supervision, performs pixel-level mapping through a residual network and skip connections, stacks the feature maps carried from the shallow network into the deep network with the deep feature maps, and finally uses the Adam optimization algorithm to back-propagate the gradient of the loss functions and continuously update the parameters, so that image definition can be effectively improved, and the method has a good application prospect.
Drawings
FIG. 1 is a flow chart of the method of the present invention;
FIG. 2 is a flow chart of an image enhancement pre-processing method;
FIG. 3 is a generator workflow diagram;
FIG. 4 is a flow chart of the discriminator operation;
FIG. 5 is a flow chart of the model loss function of the present invention.
Detailed Description
The invention is further elucidated with reference to the drawings and the embodiments.
As shown in fig. 1, the present invention provides a method for improving image definition, comprising the following steps:
step 1: inputting an image, and performing an enhancement preprocessing operation on the image, wherein the operation is performed under the condition that image pixel information is not changed, and the operation comprises the following steps: cutting, turning over and the like.
Step 2: acquiring characteristic information of the input image by using the generator network, where the acquired characteristic information includes both low-level and high-level characteristic information.
Step 3: the generator network generates an image according to the acquired characteristic information;
Step 4: respectively transmitting the image generated by the generator network and the real high-definition image into the discriminator network, judging with the discriminator network whether the generated image is a real image, and extracting the characteristic information of the generated image and the real image.
Step 5: calculating the loss functions according to the extracted characteristic information, and continuously updating the parameters with the Adam algorithm until they reach their optimal values;
Step 6: finishing training and generating a clear image.
As shown in fig. 2, the enhancement preprocessing in step 1 of this embodiment specifically comprises: mapping the value of each pixel of the input image from 0-255 to 0-1, randomly cropping a 24x24 patch, inputting it into the generative model for improving image definition, and flipping it, where flipping means a random left-right or up-down flip; if the input image is flipped left-right/up-down, the corresponding real high-definition image needs to undergo the same operation during training.
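The preprocessing steps above (0-255 → 0-1 normalization, random 24x24 crop, paired random flips) can be sketched as follows. This is an illustrative sketch only, not the patent's implementation; the function and parameter names are our own:

```python
import numpy as np

def enhance_preprocess(blurred, sharp, crop=24, rng=None):
    """Normalize a blurred/sharp image pair to [0, 1], take the same random
    24x24 crop from both, and apply the same random left-right/up-down flips,
    so the pair stays aligned during training (illustrative sketch)."""
    rng = rng or np.random.default_rng()
    blurred = blurred.astype(np.float32) / 255.0   # map 0-255 to 0-1
    sharp = sharp.astype(np.float32) / 255.0
    h, w = blurred.shape[:2]
    top = rng.integers(0, h - crop + 1)            # random crop origin
    left = rng.integers(0, w - crop + 1)
    blurred = blurred[top:top + crop, left:left + crop]
    sharp = sharp[top:top + crop, left:left + crop]
    if rng.random() < 0.5:                         # same left-right flip for both
        blurred, sharp = blurred[:, ::-1], sharp[:, ::-1]
    if rng.random() < 0.5:                         # same up-down flip for both
        blurred, sharp = blurred[::-1], sharp[::-1]
    return blurred, sharp
```

Because the same crop origin and flip decisions are applied to both images, the blurred input and its high-definition target stay pixel-aligned, which the losses below rely on.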
As shown in fig. 3, the specific workflow of the generator in steps 2 and 3 of this embodiment is as follows: in the generator network, several residual network blocks perform feature extraction on the input image stage by stage; except for the first residual block, the input of each residual block is the pixel-level sum of the input and the output of the previous residual block; finally, the input of the first residual block and the output of the last residual block are summed at pixel level, and an image with the same number of channels as the input image is obtained through dimensionality reduction as the output of the whole generator network. The summation is implemented by a residual network and skip connections: the feature maps carried from the shallow network into the deep network are stacked with the deep feature maps, and the convolutional layers weight the influence of the shallow and deep features on the final prediction and generate the image.
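The skip-connection wiring described above can be illustrated with a toy chain of blocks. The block bodies here are cheap elementwise stand-ins for the real trained convolutions, so only the summation structure is the point; all names are our own:

```python
import numpy as np

def residual_block(x, weight):
    """Stand-in for one residual block: a simple elementwise 'body' plus the
    identity skip, so output = x + body(x). Real blocks use convolutions."""
    return x + weight * np.tanh(x)

def generator_forward(x, weights):
    """Chain residual blocks so that, except for the first block, each block's
    input is the pixel-level sum produced by the previous block, then add the
    first block's input to the last block's output (the global skip)."""
    first_input = x
    for w in weights:
        x = residual_block(x, w)
    return first_input + x   # input of first block + output of last block
```

With all body weights set to zero, every block reduces to a pure skip and the output is exactly twice the input, which makes the wiring easy to check.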
As shown in fig. 4, the specific workflow of the discriminator in step 4 of this embodiment is as follows: in the discriminator network, the image generated by the generator and the real high-definition image are fed into the discriminator, and after several convolution-activation-regularization operations their features are extracted separately. Finally, true or false is judged through a fully connected layer (Dense) and a Sigmoid activation function, and the result is applied in the discriminator loss function.
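The final Dense + Sigmoid judgment can be sketched as follows; the convolutional feature extractor is abstracted into a given feature vector, and the weights here are arbitrary illustrations rather than trained values:

```python
import numpy as np

def sigmoid(z):
    """Sigmoid activation, mapping any real value into (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

def discriminator_head(features, w, b):
    """Fully connected layer followed by Sigmoid: maps an extracted feature
    vector to a probability in (0, 1) that the input image is real."""
    return sigmoid(features @ w + b)
```

A score above 0.5 is read as "real" and below 0.5 as "generated"; the raw probability itself is what enters the loss functions below.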
As shown in fig. 5, the loss function in step 5 mainly comprises two parts, namely a generator loss function and a discriminator loss function:
(1) The calculation formula of the discriminator loss function is as follows:

$l_{dis} = -\log D(I_T) - \log\bigl(1 - D(G(I_L))\bigr)$

where $l_{dis}$ represents the discriminator loss function, $I_T$ the real high-definition image, $I_L$ the blurred image input to the generator network, and $G(I_L)$ the image generated by the generator network. We wish the discriminator output $D(I_T)$ to be as large as possible, because the discriminator's judgment of the real image must be true, and $D(G(I_L))$ to be as small as possible, because its judgment of the generated image must be false; as they stand, the two terms have different growth tendencies and cannot be optimized together, so $1 - D(G(I_L))$ is computed instead, after which the two terms share the same optimization objective and can be optimized jointly. At this point the objective is a maximum, whereas the program implementation seeks a minimum, so the negative is taken, giving the expression above as the discriminator loss function.
(2) The generator loss function mainly comprises two parts: a content loss function and an adversarial loss function.
① The calculation formula of the content loss function is as follows:

$l_{con} = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left| I_T^{x,y} - G(I_L)^{x,y} \right|$

where $l_{con}$ represents the content loss function based on the L1 norm, $W$ and $H$ represent the width and height of the image respectively, $I_T^{x,y}$ represents each pixel of the real high-definition image, $I_L$ represents the blurred image input to the generator network, and $G(I_L)^{x,y}$ represents each pixel of the image generated by the generator network through feature extraction and dimensionality reduction. By optimizing the L1 loss function between the generated image and the real high-definition image, the minimum Manhattan distance is solved as the optimal solution;
② The calculation formula of the adversarial loss function is as follows:

$l_{adv} = -\log D(G(I_L))$

where $l_{adv}$ represents the adversarial loss function and $D(G(I_L))$ represents the discriminator's output for the image generated by the generator; the logarithm operation is used in all the calculation formulas to reduce the influence of one-sidedness and fluctuation in the data distribution, which avoids many numerical problems in the actual program implementation.
The content loss function and the adversarial loss function together form the generator loss function, whose minimum is continuously sought during optimization.
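Under the formulas above, the three losses can be written directly in code. This is a minimal sketch with scalar discriminator outputs; the function names, and in particular the weighting factor between the two generator terms, are our own assumptions (the patent only states that the two parts are combined):

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """l_dis = -log D(I_T) - log(1 - D(G(I_L))): pushes D(real) toward 1
    and D(fake) toward 0."""
    return -np.log(d_real) - np.log(1.0 - d_fake)

def content_loss(real, fake):
    """l_con: mean L1 (Manhattan) distance between the real high-definition
    image and the generated image, averaged over the W*H pixels."""
    return np.mean(np.abs(real - fake))

def adversarial_loss(d_fake):
    """l_adv = -log D(G(I_L)): pushes the generator to drive D(fake) toward 1."""
    return -np.log(d_fake)

def generator_loss(real, fake, d_fake, adv_weight=1.0):
    """Generator objective: content loss plus the (here, assumed-weighted)
    adversarial loss."""
    return content_loss(real, fake) + adv_weight * adversarial_loss(d_fake)
```

Note how a near-perfect discriminator (D(real) near 1, D(fake) near 0) drives `discriminator_loss` toward 0, while the same D(fake) near 0 makes `adversarial_loss` large, which is exactly the adversarial tension between the two networks.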
As shown in fig. 5, in step 5 the model formed by the generator loss function and the discriminator loss function is optimized with the Adam algorithm, an optimization algorithm that searches for a global optimum and introduces a second-order moment gradient correction. The exponential decay rate β1 of the first-moment estimate is set to 0.9 and the exponential decay rate β2 of the second-moment estimate to 0.999; training runs for 10000 iterations, with the learning rate decayed after five thousand iterations at a decay rate of 0.1. Within one batch, the generator is updated twice and the discriminator once, to prevent the discriminator from being over-trained. By calculating the derivatives of the loss functions with respect to the parameters to be trained in the given computation graph, the parameters are continuously updated until they are optimal.
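The Adam update with the stated hyperparameters (β1 = 0.9, β2 = 0.999) and a halfway learning-rate decay of factor 0.1 can be sketched on a toy problem; the quadratic test objective, the iteration counts, and the helper names are our own illustration, not the patent's training code:

```python
import numpy as np

def adam_step(theta, grad, m, v, t, lr, beta1=0.9, beta2=0.999, eps=1e-8):
    """One Adam update: exponential moving averages of the gradient (first
    moment) and squared gradient (second moment), with bias correction."""
    m = beta1 * m + (1 - beta1) * grad
    v = beta2 * v + (1 - beta2) * grad ** 2
    m_hat = m / (1 - beta1 ** t)      # bias-corrected first moment
    v_hat = v / (1 - beta2 ** t)      # bias-corrected second moment
    return theta - lr * m_hat / (np.sqrt(v_hat) + eps), m, v

# Toy objective: minimize loss(theta) = theta^2, whose gradient is 2*theta.
theta, m, v = 5.0, 0.0, 0.0
for t in range(1, 2001):
    lr = 0.01 if t <= 1000 else 0.01 * 0.1   # decay halfway through, factor 0.1
    theta, m, v = adam_step(theta, 2 * theta, m, v, t, lr)
```

In the patent's schedule, this update rule would be applied twice to the generator's parameters for every single application to the discriminator's parameters within a batch.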
Claims (8)
1. A method for improving image definition, characterized by comprising the following steps:
s1: acquiring characteristic information of an input image by using a generator network, and generating an image by using the generator network according to the acquired characteristic information;
s2: respectively transmitting the image generated by the generator network and the real image into a discriminator network, judging with the discriminator network whether the generated image is a real image, and extracting the characteristic information of the generated image and the real image;
s3: calculating the loss functions according to the characteristic information extracted in step S2, and continuously updating the parameters with the Adam algorithm until they reach their optimal values;
s4: and finishing training to generate a clear image.
2. The method for improving image definition according to claim 1, wherein: in step S1, before the generator network acquires characteristic information of the input image, the input image is subjected to enhancement preprocessing.
3. The method for improving image definition according to claim 2, wherein: the enhancement preprocessing specifically comprises: mapping the value of each pixel of the image from 0-255 to 0-1, randomly cropping a 24x24 patch, inputting it into the model, and randomly flipping it left-right/up-down; if the input image is flipped left-right/up-down, the corresponding real high-definition image is subjected to the same operation during training.
4. The method for improving image definition according to claim 1, wherein: step S1 specifically comprises: in the generator network, several residual network blocks perform feature extraction on the input image stage by stage; except for the first residual block, the input of each residual block is the pixel-level sum of the input and the output of the previous residual block; finally, the input of the first residual block and the output of the last residual block are summed at pixel level, and an image with the same number of channels as the input image is obtained through dimensionality reduction as the output of the whole generator network.
5. The method for improving image definition according to claim 4, wherein: the pixel-level summation in step S1 is implemented by a residual network and skip connections, specifically: the feature maps carried from the shallow network into the deep network are stacked with the deep feature maps, and the subsequent convolutional layers weight the influence of the shallow and deep features on the final prediction.
6. The method for improving image definition according to claim 1, wherein: the judgment method of the discriminator network in step S2 is: after several convolution-activation-regularization operations, the features of the image generated by the generator network and of the real image are extracted, and the generated image is judged to be true or false through a fully connected layer and a Sigmoid activation function.
7. The method for improving image definition according to claim 1, wherein: the loss function in step S3 includes a generator loss function and a discriminator loss function, and the discriminator loss function is calculated as follows:

$l_{dis} = -\log D(I_T) - \log\bigl(1 - D(G(I_L))\bigr)$

where $l_{dis}$ represents the discriminator loss function, $I_T$ represents the real image, $I_L$ represents the blurred image input to the generator network, and $G(I_L)$ represents the image generated by the generator network;
the generator loss function comprises two parts, a content loss function and an adversarial loss function:
the content loss function is calculated as follows:

$l_{con} = \frac{1}{WH}\sum_{x=1}^{W}\sum_{y=1}^{H}\left| I_T^{x,y} - G(I_L)^{x,y} \right|$

where $l_{con}$ represents the content loss function based on the L1 norm, $W$ and $H$ represent the width and height of the image respectively, $I_T^{x,y}$ represents each pixel of the real image, $I_L$ represents the blurred image input to the generator network, and $G(I_L)^{x,y}$ represents each pixel of the image generated by the generator network through feature extraction and dimensionality reduction; by optimizing the L1 loss function between the generated image and the real high-definition image, the minimum Manhattan distance is solved as the optimal solution;
the calculation formula of the adversarial loss function is as follows:

$l_{adv} = -\log D(G(I_L))$.
8. The method for improving image definition according to claim 1, wherein: the Adam algorithm in step S3 introduces a second-order moment gradient correction and applies learning-rate decay during training, and by calculating the derivatives of the losses with respect to the parameters to be trained in the specified computation graph, continuously updates the parameters until they are optimal.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911217831.3A CN111192206A (en) | 2019-12-03 | 2019-12-03 | Method for improving image definition |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111192206A true CN111192206A (en) | 2020-05-22 |
Family
ID=70710761
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201911217831.3A Pending CN111192206A (en) | 2019-12-03 | 2019-12-03 | Method for improving image definition |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111192206A (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20190138838A1 (en) * | 2017-11-09 | 2019-05-09 | Boe Technology Group Co., Ltd. | Image processing method and processing device |
CN109785258A (en) * | 2019-01-10 | 2019-05-21 | 华南理工大学 | A kind of facial image restorative procedure generating confrontation network based on more arbiters |
CN110136063A (en) * | 2019-05-13 | 2019-08-16 | 南京信息工程大学 | A kind of single image super resolution ratio reconstruction method generating confrontation network based on condition |
CN110211045A (en) * | 2019-05-29 | 2019-09-06 | 电子科技大学 | Super-resolution face image method based on SRGAN network |
Non-Patent Citations (1)
Title |
---|
高媛 (Gao Yuan) et al.: "Medical Image Super-Resolution Algorithm Based on Deep Residual Generative Adversarial Network" (《基于深度残差生成对抗网络的医学影像超分辨率算法》), Journal of Computer Applications (《计算机应用》) * |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111784721A (en) * | 2020-07-01 | 2020-10-16 | 华南师范大学 | Ultrasonic endoscopic image intelligent segmentation and quantification method and system based on deep learning |
WO2022077417A1 (en) * | 2020-10-16 | 2022-04-21 | 京东方科技集团股份有限公司 | Image processing method, image processing device and readable storage medium |
CN112435192A (en) * | 2020-11-30 | 2021-03-02 | 杭州小影创新科技股份有限公司 | Lightweight image definition enhancing method |
CN112435192B (en) * | 2020-11-30 | 2023-03-14 | 杭州小影创新科技股份有限公司 | Lightweight image definition enhancing method |
CN113379624A (en) * | 2021-05-31 | 2021-09-10 | 北京达佳互联信息技术有限公司 | Image generation method, training method, device and equipment of image generation model |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
RJ01 | Rejection of invention patent application after publication | ||
Application publication date: 20200522 |