CN116012688B - Image enhancement method for urban management evaluation system - Google Patents

Image enhancement method for urban management evaluation system

Info

Publication number
CN116012688B
CN116012688B (application CN202310301702.2A)
Authority
CN
China
Prior art keywords
module
modulation
ake
image
spatial modulation
Prior art date
Legal status
Active
Application number
CN202310301702.2A
Other languages
Chinese (zh)
Other versions
CN116012688A (en)
Inventor
肖路 (Xiao Lu)
Current Assignee
Chengdu Sunbird Data Consulting Co., Ltd.
Original Assignee
Chengdu Sunbird Data Consulting Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Chengdu Sunbird Data Consulting Co., Ltd.
Priority to CN202310301702.2A
Publication of CN116012688A
Application granted
Publication of CN116012688B
Legal status: Active

Classifications

    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 30/00: Adapting or protecting infrastructure or their operation
    • Y02A 30/60: Planning or developing urban green infrastructure

Abstract

The invention discloses an image enhancement method for a city management evaluation system, belonging to the technical fields of city management evaluation systems and image processing. The image enhancement method comprises the steps of: obtaining an image to be processed and a trained artificial neural network model; inputting the image to be processed into a shallow feature mapping layer; inputting the resulting shallow feature map into a plurality of AKE modules, each AKE module performing a feature extraction operation in sequence; modulating the deepened feature map with a superposition modulation map; and inputting the modulated deepened feature map into an enhancement module, which outputs the target image. The superposition modulation module can map modulation information well with only simple operations, improving operational efficiency. The spatial modulation mechanism can learn how feature information is lost when the resolution changes, and its pre-modulation alleviates feature disappearance and disturbance.

Description

Image enhancement method for urban management evaluation system
Technical Field
The invention belongs to the technical field of urban management evaluation systems and image enhancement, and particularly relates to an image enhancement method for an urban management evaluation system.
Background
Thanks to the popularization of smartphones, current urban management evaluation systems handle the full chain of problem collection, problem correction and post-event supervision in real time on mobile devices, and information can be uploaded or transmitted as text, pictures, video and other forms, which greatly widens the information transmission channel. With the development of technology, some advanced urban management evaluation systems have also introduced emerging technologies such as big data and artificial intelligence to process information automatically, effectively improving the intelligence level and data processing efficiency of the system. Practical operating experience shows, however, that the quality of images uploaded by different users in different environments is often uneven, and low-quality images introduce large errors into automatic information acquisition (such as target detection and image recognition), seriously impairing the operation efficiency and reliability of the urban management evaluation system.
Disclosure of Invention
In order to overcome the above defects in the prior art, the invention provides an image enhancement method for an urban management evaluation system which, by enhancing low-quality images, improves the accuracy of automatic image information acquisition and enhances the operation efficiency and reliability of the urban management evaluation system.
In order to achieve the above object, the present invention adopts the following solutions: an image enhancement method for a city management evaluation system, comprising the steps of:
s100, acquiring an image to be processed, and acquiring a trained artificial neural network model; the artificial neural network model is provided with a shallow feature mapping layer, a superposition modulation module, an enhancement module and a plurality of AKE modules, wherein the AKE modules are used for extracting deep features of an image to be processed, and a spatial modulation mechanism is arranged in the AKE modules and is used for generating a spatial modulation chart;
s200, inputting the image to be processed into the shallow feature mapping layer, and outputting a shallow feature map by the shallow feature mapping layer after performing feature extraction operation on the image to be processed;
s300, inputting the shallow feature map into a plurality of AKE modules which are connected in sequence, and sequentially performing feature extraction operation on each AKE module until the last AKE module outputs a deepened feature map, wherein the size of the deepened feature map is the same as that of the shallow feature map;
s400, extracting spatial modulation diagrams generated by the spatial modulation mechanisms in each AKE module, and inputting all the spatial modulation diagrams into the superposition modulation module, wherein the superposition modulation module generates superposition modulation diagrams;
s500, modulating the deepened feature map by using the superposition modulation map, and then inputting the modulated deepened feature map into the enhancement module, wherein the enhancement module outputs an enhanced target image, and the resolution of the target image is larger than that of the image to be processed.
Further, the shallow feature mapping layer is an ordinary convolution layer.
Further, the computational process inside the AKE module is expressed as the following mathematical model:
$$OS1 = \sigma_1\left(f_{j1}(KN)\right)$$
$$OS2 = \sigma_2\left(f_{j2}(KN)\right)$$
$$OS3 = \sigma_3\left(f_{j3}\left([OS1, OS2]\right)\right)$$
$$OS4 = \sigma_5\left(f_s\left(\sigma_4\left(f_d(OS3)\right)\right)\right)$$
$$ES = f_x(OS1, OS2, OS4)$$
$$KO = ES \times OS3$$
wherein KN represents the feature map input to the AKE module; fj1, fj2 and fj3 represent the first, second and third convolution layers, respectively, and the convolution kernel of the first convolution layer is smaller than that of the second convolution layer; fd denotes a sub-pixel convolution layer; fs denotes a stride convolution layer; σ1, σ2, σ3, σ4 and σ5 each denote a ReLU activation function; [ ] denotes the splicing (concatenation) of feature maps; × denotes the element-wise product; OS1, OS2 and OS3 represent the feature maps generated after activation by σ1, σ2 and σ3, respectively, and OS4 represents the feature map generated after activation by σ5; the feature maps OS1, OS2 and OS3 have the same size as the feature map KN; fx represents the spatial modulation mechanism; ES represents the spatial modulation map output by the spatial modulation mechanism; and KO represents the feature map output by the AKE module.
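A hedged sketch of one AKE module consistent with these definitions and with the sizes given in Embodiment 1 below. The exact inputs of fj3 and fd are not fully specified in the text, so feeding fj3 the concatenation [OS1, OS2] and fd the fused map OS3 are assumptions, as is sharing a single ReLU instance for σ1 through σ5; the SpatialModulation class is sketched in the next subsection.

```python
class AKE(nn.Module):
    def __init__(self, c=64):
        super().__init__()
        self.fj1 = nn.Conv2d(c, c, 3, padding=1)        # first conv layer (smaller 3x3 kernel)
        self.fj2 = nn.Conv2d(c, c, 5, padding=2)        # second conv layer (larger 5x5 kernel)
        self.fj3 = nn.Conv2d(2 * c, c, 3, padding=1)    # third conv layer, applied to [OS1, OS2]
        self.fd = nn.Sequential(                        # sub-pixel conv fd: U x V x 64 -> 2U x 2V x 16
            nn.Conv2d(c, c, 3, padding=1), nn.PixelShuffle(2))
        self.fs = nn.Conv2d(c // 4, c // 4, 3, stride=2, padding=1)  # stride conv fs: back to U x V x 16
        self.fx = SpatialModulation()                   # spatial modulation mechanism
        self.relu = nn.ReLU(inplace=True)               # stands in for sigma1..sigma5

    def forward(self, kn):
        os1 = self.relu(self.fj1(kn))                             # OS1
        os2 = self.relu(self.fj2(kn))                             # OS2
        os3 = self.relu(self.fj3(torch.cat([os1, os2], dim=1)))  # OS3, same size as KN
        os4 = self.relu(self.fs(self.relu(self.fd(os3))))        # OS4: resolution round trip of OS3
        es = self.fx(os1, os2, os4)                               # ES: U x V x 1 modulation map
        ko = es * os3                                             # KO: pre-modulated output
        return ko, es
```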
Further, the calculation process inside the spatial modulation mechanism is expressed as the following mathematical model:
$$EY1 = VP\left(\sigma_{x1}(OS1 + OS2)\right)$$
$$EY2 = \sigma_{x2}\left(EY1 - VP(OS4)\right)$$
$$ES = \delta_x\left(EY1 \times EY2\right)$$
wherein OS1, OS2 and OS4 represent the feature maps input into the spatial modulation mechanism; σx1 and σx2 represent ReLU activation functions; VP represents the global variance pooling operation, performed along the channel direction; EY1 represents the feature map generated by channel-direction global variance pooling of the feature map activated by σx1; EY2 represents the feature map generated after activation by σx2; × represents the element-wise product; δx represents the sigmoid function; and ES represents the spatial modulation map generated and output by the spatial modulation mechanism.
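A matching sketch of the spatial modulation mechanism. Fusing OS1 and OS2 by element-wise addition is an assumption, chosen so that the σx1-activated map keeps 64 channels as Embodiment 1 states:

```python
class SpatialModulation(nn.Module):
    def __init__(self):
        super().__init__()
        self.relu1 = nn.ReLU(inplace=True)   # sigma_x1
        self.relu2 = nn.ReLU(inplace=True)   # sigma_x2

    def forward(self, os1, os2, os4):
        fused = self.relu1(os1 + os2)        # assumed fusion of OS1 and OS2 (element-wise sum)
        # VP: global variance pooling along the channel direction, one value per pixel
        ey1 = fused.var(dim=1, keepdim=True, unbiased=False)
        vp4 = os4.var(dim=1, keepdim=True, unbiased=False)
        ey2 = self.relu2(ey1 - vp4)          # large where information was lost across the resolution change
        return torch.sigmoid(ey1 * ey2)      # ES = delta_x(EY1 x EY2), size U x V x 1
```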
Further, the calculation process inside the superposition modulation module is expressed as the following mathematical model:
$$CG = \delta_e\left(MP\left(\sigma\left(f_e\left([ES1, ES2, \ldots, ESm]\right)\right)\right)\right)$$
wherein ES1, ES2, ..., ESm represent the spatial modulation maps output by the spatial modulation mechanisms within the respective AKE modules, and ES1, ES2, ..., ESm are the inputs to the superposition modulation module; [ ] represents the splicing (concatenation) operation; fe represents a fourth convolution layer; σ represents a ReLU activation function; MP represents the global max pooling operation, performed along the spatial direction; δe represents the sigmoid function; and CG represents the superposition modulation map generated by the superposition modulation module.
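A sketch of the superposition modulation module under the same assumptions (m = 8 modulation maps, 64 channels, as in Embodiment 1):

```python
class SuperpositionModulation(nn.Module):
    def __init__(self, m=8, c=64):
        super().__init__()
        self.fe = nn.Conv2d(m, c, 3, padding=1)   # fourth conv layer: U x V x m -> U x V x 64
        self.relu = nn.ReLU(inplace=True)         # sigma

    def forward(self, mods):
        x = self.relu(self.fe(torch.cat(mods, dim=1)))  # [ES1, ..., ESm] spliced along channels
        x = x.amax(dim=(2, 3), keepdim=True)            # MP: global max pooling along the spatial direction
        return torch.sigmoid(x)                         # CG, size 1 x 1 x 64
```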
The beneficial effects of the invention are as follows:
(1) The invention creatively provides a superposition modulation module in the network model and uses the spatial modulation maps generated in each AKE module as its input. Compared with directly using the feature maps inside the AKE modules as input, much less irrelevant interference information enters the superposition modulation module, so the modulation information can be mapped well with only simple operations, which improves operational efficiency and reduces the computational load of the model. In addition, because the modulation information fed into the superposition modulation module comes from different levels, the superposition modulation module integrates hierarchical feature information to a certain extent while modulating the deepened feature map, so the model does not need a separate stage for integrating the feature maps output by all the AKE modules; the complexity of the model is thus effectively controlled while the feature extraction effect is guaranteed;
(2) The images in an urban management evaluation system are particularly diverse and complex; in a neural network model, feature maps at different depths contain different kinds of feature information, and different kinds of information have different stability during resolution changes. To give the model a stable enhancement effect on all kinds of images, the spatial modulation mechanism takes the feature maps after the convolution operations (OS1 and OS2) and the feature map after resolution scaling (OS4) as inputs, so that it learns how the various kinds of feature information at that depth are lost when the resolution changes. Taking the element-wise product of the spatial modulation map and the OS3 feature map pre-modulates the feature maps at each level, which alleviates feature disappearance and disorder in the subsequent up-sampling process;
(3) Inside the spatial modulation mechanism, EY1 is the feature map obtained by fusing the OS1 and OS2 feature maps and then applying the variance pooling operation. When the difference between EY1 and the variance-pooled OS4 is taken, the values for feature information that changes little before and after the resolution change approach 0, while the more feature information is lost before and after the resolution change, the larger the corresponding value in EY2 becomes; taking the element-wise product of EY1 and EY2 therefore enhances the lost information in equal proportion. The spatial modulation mechanism has a simple internal structure, a small computational cost and an accurate modulation effect; it alleviates feature disappearance without over-enhancing local information, effectively guaranteeing the overall enhancement effect of the image.
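As an illustrative numerical example (all values assumed): at a spatial position where EY1 = 0.8, if the variance-pooled OS4 value is 0.75 (little information lost across the resolution change), then EY2 = ReLU(0.8 - 0.75) = 0.05 and EY1 × EY2 = 0.04, so that position is left almost unmodulated; if the variance-pooled OS4 value is instead 0.2 (much information lost), then EY2 = 0.6 and EY1 × EY2 = 0.48, and that position is boosted accordingly before the sigmoid gating.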
Drawings
FIG. 1 is a visual schematic diagram of the artificial neural network model of Embodiment 1;
FIG. 2 is a visual schematic diagram of the AKE module in Embodiment 1;
FIG. 3 is a visual schematic diagram of the spatial modulation mechanism in Embodiment 1;
FIG. 4 is a visual schematic diagram of the superposition modulation module in Embodiment 1;
FIG. 5 is a schematic diagram of the enhancement module in Embodiment 1;
FIG. 6 is a visual schematic diagram of the artificial neural network model of Embodiment 2;
in the accompanying drawings:
the system comprises a 1-image to be processed, a 2-shallow feature mapping layer, a 3-AKE module, a 4-superposition modulation module, a 5-enhancement module, a 6-spatial modulation mechanism and a 7-target image.
Detailed Description
The invention is further described below with reference to the accompanying drawings:
example 1:
FIG. 1 is a visual schematic diagram of the artificial neural network model in this embodiment. The shallow feature mapping layer 2 is implemented as an ordinary convolution layer with a 3×3 convolution kernel and a step size of 1. When the size of the image 1 to be processed is U×V×3 (height×width×channels), the size of the shallow feature map output by the shallow feature mapping layer 2 is U×V×64.
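Continuing the sketch above, this layer would reduce to a single assumed PyTorch call:

```python
shallow = nn.Conv2d(3, 64, kernel_size=3, stride=1, padding=1)  # U x V x 3 -> U x V x 64
```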
The number of AKE modules 3 is 8; the AKE modules 3 are connected end to end in sequence, and each downstream AKE module 3 takes the feature map output by the upstream AKE module 3 as input. In this embodiment, as shown in FIG. 2, inside each AKE module 3 the first, second and third convolution layers are all ordinary convolution layers with a step size of 1; the convolution kernels of the first and third convolution layers are 3×3, and that of the second convolution layer is 5×5. The stride convolution layer fs has a 3×3 convolution kernel and a step size of 2. The feature maps KN, OS1, OS2, OS3 and KO all have size U×V×64; the feature map output by the sub-pixel convolution layer fd has size 2U×2V×16, and the feature map OS4 output by the stride convolution layer fs has size U×V×16. The spatial modulation map ES output by the spatial modulation mechanism 6 has size U×V×1; taking the element-wise product of the spatial modulation map and the OS3 feature map assigns weight parameters of different sizes to different spatial positions of the OS3 feature map. The deepened feature map output by the last AKE module 3 has the same size as the shallow feature map, namely U×V×64.
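Using the AKE sketch given earlier, these sizes can be checked mechanically (48×48 is an arbitrary test resolution):

```python
x = torch.randn(1, 64, 48, 48)       # B x C x U x V
ko, es = AKE(64)(x)
assert ko.shape == x.shape           # KO keeps the U x V x 64 size
assert es.shape == (1, 1, 48, 48)    # ES is U x V x 1
```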
As shown in FIG. 3, inside the spatial modulation mechanism 6, the feature map after σx1 activation has size U×V×64; after variance pooling along the channel direction, the resulting EY1 feature map has size U×V×1. Similarly, after variance pooling along the channel direction, the feature map OS4 becomes U×V×1, and the feature map EY2 generated after σx2 activation also has size U×V×1.
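The channel-direction variance pooling reduces to a one-line tensor operation, for example:

```python
feat = torch.randn(1, 64, 48, 48)                    # a U x V x 64 feature map
ey = feat.var(dim=1, keepdim=True, unbiased=False)   # one variance per pixel -> 1 x 1 x 48 x 48
```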
As shown in FIG. 4, inside the superposition modulation module 4, splicing ES1, ES2, ..., ES8 yields a feature map of size U×V×8. The fourth convolution layer fe is an ordinary convolution layer with a 3×3 convolution kernel and a step size of 1, and its output feature map has size U×V×64. Global max pooling along the spatial direction then yields a vector of size 1×1×64, and after δe activation a superposition modulation map of size 1×1×64 is obtained. Taking the element-wise product of the superposition modulation map and the deepened feature map assigns a corresponding weight parameter to each channel of the deepened feature map, thereby modulating the deepened feature map channel by channel.
The enhancement module 5 adopts an existing structure. Specifically, in this embodiment, as shown in FIG. 5, the enhancement module 5 comprises a first enhancement convolution layer, an up-sampling layer and a second enhancement convolution layer connected in sequence. The up-sampling layer is implemented as a sub-pixel convolution layer; the first and second enhancement convolution layers both have 3×3 convolution kernels and a step size of 1. The output feature map of the first enhancement convolution layer has size U×V×64S² (S is the ratio of the resolution of the target image 7 to that of the image 1 to be processed, i.e., the factor by which the artificial neural network model increases the resolution of the image 1 to be processed), the output feature map of the up-sampling layer has size SU×SV×64, and the target image 7 output by the second enhancement convolution layer has size SU×SV×3.
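A sketch of this enhancement module, assuming S = 2 and a PixelShuffle-based sub-pixel up-sampling layer:

```python
class Enhancement(nn.Module):
    def __init__(self, c=64, scale=2):
        super().__init__()
        self.conv1 = nn.Conv2d(c, c * scale ** 2, 3, padding=1)  # first enhancement conv: U x V x 64S^2
        self.up = nn.PixelShuffle(scale)                         # sub-pixel up-sampling: SU x SV x 64
        self.conv2 = nn.Conv2d(c, 3, 3, padding=1)               # second enhancement conv: SU x SV x 3

    def forward(self, x):
        return self.conv2(self.up(self.conv1(x)))                # enhanced target image
```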
The artificial neural network model of this embodiment and the existing DRLN model were each trained on the public data sets DIV2K and Urban100 (with identical parameter settings, such as epochs and learning rate, for both models), and then tested on the public data set BSD100 and on a self-built data set whose images come from actual field pictures previously uploaded to a city management evaluation system. The results are shown in the table below.
Table 1. Test results of Embodiment 1 and the DRLN model
(The table data are reproduced as an image in the original publication.)
The above test data show that the reconstruction results of the artificial neural network model of Embodiment 1 are significantly better than those of the existing advanced model on both the public and the self-built data sets, effectively demonstrating the substantive progress of the technical scheme provided by the invention.
Embodiment 2:
Embodiment 2 is based on Embodiment 1 with the superposition modulation module 4 removed; the artificial neural network model structure of Embodiment 2 is shown in FIG. 6. The other parts of the model, such as the shallow feature mapping layer 2, the AKE modules 3, the spatial modulation mechanism 6 and the enhancement module 5, remain the same as in Embodiment 1. Training and testing on the same data sets gave the following results:
table 2 example 2 test results
(The table data are reproduced as an image in the original publication.)
Comparing the data in Tables 1 and 2 shows that, without the superposition modulation module 4, the enhancement effect of the model on the image is greatly reduced, proving the prominent effect of the superposition modulation module 4.
The foregoing examples merely illustrate specific embodiments of the invention; they are described in detail but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the invention, all of which fall within the protection scope of the invention.

Claims (2)

1. An image enhancement method for an urban management evaluation system, characterized in that the method comprises the following steps:
s100, acquiring an image to be processed, and acquiring a trained artificial neural network model; the artificial neural network model is provided with a shallow feature mapping layer, a superposition modulation module, an enhancement module and a plurality of AKE modules, wherein the AKE modules are used for extracting deep features of an image to be processed, and a spatial modulation mechanism is arranged in the AKE modules and is used for generating a spatial modulation chart;
s200, inputting the image to be processed into the shallow feature mapping layer, and outputting a shallow feature map by the shallow feature mapping layer after performing feature extraction operation on the image to be processed;
s300, inputting the shallow feature map into a plurality of AKE modules which are connected in sequence, and sequentially performing feature extraction operation on each AKE module until the last AKE module outputs a deepened feature map, wherein the size of the deepened feature map is the same as that of the shallow feature map;
s400, extracting spatial modulation diagrams generated by the spatial modulation mechanisms in each AKE module, and inputting all the spatial modulation diagrams into the superposition modulation module, wherein the superposition modulation module generates superposition modulation diagrams;
s500, modulating the deepened feature map by using the superposition modulation map, and then inputting the modulated deepened feature map into the enhancement module, wherein the enhancement module outputs an enhanced target image, and the resolution of the target image is larger than that of the image to be processed;
the computational process inside AKE is expressed as a mathematical model:
$$OS1 = \sigma_1\left(f_{j1}(KN)\right)$$
$$OS2 = \sigma_2\left(f_{j2}(KN)\right)$$
$$OS3 = \sigma_3\left(f_{j3}\left([OS1, OS2]\right)\right)$$
$$OS4 = \sigma_5\left(f_s\left(\sigma_4\left(f_d(OS3)\right)\right)\right)$$
$$ES = f_x(OS1, OS2, OS4)$$
$$KO = ES \times OS3$$
wherein KN represents the feature map input to the AKE module; fj1, fj2 and fj3 represent the first, second and third convolution layers, respectively, and the convolution kernel of the first convolution layer is smaller than that of the second convolution layer; fd denotes a sub-pixel convolution layer; fs denotes a stride convolution layer; σ1, σ2, σ3, σ4 and σ5 each denote a ReLU activation function; [ ] denotes the splicing (concatenation) of feature maps; × denotes the element-wise product; OS1, OS2 and OS3 represent the feature maps generated after activation by σ1, σ2 and σ3, respectively, and OS4 represents the feature map generated after activation by σ5; the feature maps OS1, OS2 and OS3 have the same size as the feature map KN; fx represents the spatial modulation mechanism; ES represents the spatial modulation map output by the spatial modulation mechanism; KO represents the feature map output by the AKE module; and the spatial modulation mechanism takes the feature maps OS1, OS2 and OS4 as inputs;
the computation process inside the spatial modulation mechanism is expressed as a mathematical model:
$$EY1 = VP\left(\sigma_{x1}(OS1 + OS2)\right)$$
$$EY2 = \sigma_{x2}\left(EY1 - VP(OS4)\right)$$
$$ES = \delta_x\left(EY1 \times EY2\right)$$
wherein OS1, OS2 and OS4 represent the feature maps input into the spatial modulation mechanism; σx1 and σx2 represent ReLU activation functions; VP represents the global variance pooling operation, performed along the channel direction; EY1 represents the feature map generated by channel-direction global variance pooling of the feature map activated by σx1; EY2 represents the feature map generated after activation by σx2; × represents the element-wise product; δx represents the sigmoid function; and ES represents the spatial modulation map generated and output by the spatial modulation mechanism;
the calculation process inside the superposition modulation module is expressed as the following mathematical model:
$$CG = \delta_e\left(MP\left(\sigma\left(f_e\left([ES1, ES2, \ldots, ESm]\right)\right)\right)\right)$$
wherein ES1, ES2, ..., ESm represent the spatial modulation maps output by the spatial modulation mechanisms within the respective AKE modules, and ES1, ES2, ..., ESm are the inputs to the superposition modulation module; [ ] represents the splicing (concatenation) operation; fe represents a fourth convolution layer; σ represents a ReLU activation function; MP represents the global max pooling operation, performed along the spatial direction; δe represents the sigmoid function; and CG represents the superposition modulation map generated by the superposition modulation module.
2. The image enhancement method for an urban management evaluation system according to claim 1, characterized in that the shallow feature mapping layer is an ordinary convolution layer.
CN202310301702.2A 2023-03-27 2023-03-27 Image enhancement method for urban management evaluation system Active CN116012688B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310301702.2A CN116012688B (en) 2023-03-27 2023-03-27 Image enhancement method for urban management evaluation system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310301702.2A CN116012688B (en) 2023-03-27 2023-03-27 Image enhancement method for urban management evaluation system

Publications (2)

Publication Number Publication Date
CN116012688A CN116012688A (en) 2023-04-25
CN116012688B true CN116012688B (en) 2023-06-09

Family

ID=86025159

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310301702.2A Active CN116012688B (en) 2023-03-27 2023-03-27 Image enhancement method for urban management evaluation system

Country Status (1)

Country Link
CN (1) CN116012688B (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021104058A1 (en) * 2019-11-26 2021-06-03 中国科学院深圳先进技术研究院 Image segmentation method and apparatus, and terminal device
CN114820328A (en) * 2022-06-27 2022-07-29 威海职业学院(威海市技术学院) Image super-resolution reconstruction method based on convolutional neural network
CN115018711A (en) * 2022-07-15 2022-09-06 成都运荔枝科技有限公司 Image super-resolution reconstruction method for warehouse scheduling

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7280987B2 (en) * 2004-03-26 2007-10-09 Halliburton Energy Services, Inc. Genetic algorithm based selection of neural network ensemble for processing well logging data
CN115358931B (en) * 2022-10-20 2023-01-03 运易通科技有限公司 Image reconstruction method and device for warehouse logistics system
CN115546031B (en) * 2022-12-01 2023-03-24 运易通科技有限公司 Image enhancement method and device for warehouse ceiling inspection

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2021104058A1 (en) * 2019-11-26 2021-06-03 中国科学院深圳先进技术研究院 Image segmentation method and apparatus, and terminal device
CN114820328A (en) * 2022-06-27 2022-07-29 威海职业学院(威海市技术学院) Image super-resolution reconstruction method based on convolutional neural network
CN115018711A (en) * 2022-07-15 2022-09-06 成都运荔枝科技有限公司 Image super-resolution reconstruction method for warehouse scheduling

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Channel-Wise and Spatial Feature Modulation Network for Single Image Super-Resolution; Yanting Hu et al.; IEEE Transactions on Circuits and Systems for Video Technology; pp. 1-16 *
Radar signal modulation type recognition based on attention-mechanism-enhanced residual networks (基于注意力机制增强残差网络的雷达信号调制类型识别); Guo Pengcheng et al.; Acta Armamentarii (兵工学报); pp. 1-11 *
Siamese network visual tracking fusing spatio-temporal characteristics (融合时空特性的孪生网络视觉跟踪); Jiang Shan et al.; Acta Armamentarii (兵工学报); pp. 1940-1950 *

Also Published As

Publication number Publication date
CN116012688A (en) 2023-04-25

Similar Documents

Publication Publication Date Title
CN112699847B (en) Face characteristic point detection method based on deep learning
CN114119582B (en) Synthetic aperture radar image target detection method
CN108875935B (en) Natural image target material visual characteristic mapping method based on generation countermeasure network
Ding et al. Depth-aware saliency detection using convolutional neural networks
CN110120020A (en) A kind of SAR image denoising method based on multiple dimensioned empty residual error attention network
CN109389667B (en) High-efficiency global illumination drawing method based on deep learning
CN111582092B (en) Pedestrian abnormal behavior detection method based on human skeleton
CN110222760A (en) A kind of fast image processing method based on winograd algorithm
CN110675326A (en) Method for calculating ghost imaging reconstruction recovery based on U-Net network
CN110335299A (en) A kind of monocular depth estimating system implementation method based on confrontation network
Lin et al. 3D environmental perception modeling in the simulated autonomous-driving systems
CN116012688B (en) Image enhancement method for urban management evaluation system
CN111275642B (en) Low-illumination image enhancement method based on significant foreground content
CN116246184A (en) Papaver intelligent identification method and system applied to unmanned aerial vehicle aerial image
Guo et al. MDSFE: Multiscale deep stacking fusion enhancer network for visual data enhancement
CN115358981A (en) Glue defect determining method, device, equipment and storage medium
Jin et al. Semantic segmentation of remote sensing images based on dilated convolution and spatial-channel attention mechanism
Lee et al. SAF-Nets: Shape-Adaptive Filter Networks for 3D point cloud processing
Yang et al. Cascaded deep residual learning network for single image dehazing
CN112801020A (en) Pedestrian re-identification method and system based on background graying
CN108664266B (en) Portable artificial intelligence device and configuration method thereof
LU500193B1 (en) Low-illumination image enhancement method and system based on multi-expression fusion
CN115984583B (en) Data processing method, apparatus, computer device, storage medium, and program product
Li Convolutional Neural Network-Based Virtual Reality Real-Time Interactive System Design for Unity3D
Liu et al. MODE: Monocular omnidirectional depth estimation via consistent depth fusion

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant