CN117788477A - Image reconstruction method and device for automatically quantifying tea leaf curl - Google Patents

Image reconstruction method and device for automatically quantifying tea leaf curl

Info

Publication number
CN117788477A
CN117788477A
Authority
CN
China
Prior art keywords: feature map, module, feature, image, convolution
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202410212090.4A
Other languages
Chinese (zh)
Inventor
许粟
陶光灿
陈瀚
扶胜
费强
李彬
马风伟
陈海江
吴思瑶
史大娟
刘宇泽
宋林瑶
邓君仪
刘鑫
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guizhou Jianyitest Technology Co ltd
Guiyang University
Original Assignee
Guizhou Jianyitest Technology Co ltd
Guiyang University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guizhou Jianyitest Technology Co ltd, Guiyang University filed Critical Guizhou Jianyitest Technology Co ltd
Priority to CN202410212090.4A priority Critical patent/CN117788477A/en
Publication of CN117788477A publication Critical patent/CN117788477A/en
Pending legal-status Critical Current

Classifications

    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02PCLIMATE CHANGE MITIGATION TECHNOLOGIES IN THE PRODUCTION OR PROCESSING OF GOODS
    • Y02P90/00Enabling technologies with a potential contribution to greenhouse gas [GHG] emissions mitigation
    • Y02P90/30Computing systems specially adapted for manufacturing

Landscapes

  • Image Processing (AREA)

Abstract

The invention discloses an image reconstruction method and device for automatically quantifying the curling degree of tea leaves, belonging to the technical fields of tea and image processing. The reconstruction method comprises: acquiring an initial tea image and calling an image reconstruction network; performing feature extraction on the initial tea image based on a feature enhancement module to determine an indication feature map; performing feature extraction on the indication feature map based on a feature processing module to determine a matching feature map; and reconstructing the matching feature map based on a feature building module to determine a target image. The invention creatively places a feature enhancement module at the head of the network, which applies ordinary convolution and dilated convolution simultaneously so that image feature information is examined and learned from different receptive fields; guided by the second weight matrix, the different kinds of information are deeply blended, strengthening the network's ability to learn different types of image features and giving the method strong robustness.

Description

Image reconstruction method and device for automatically quantifying tea leaf curl
Technical Field
The invention belongs to the technical field of tea and image processing, and particularly relates to an image reconstruction method and device for automatically quantifying the curling degree of tea.
Background
Much information about tea quality can be obtained from the appearance of the tea leaves. To avoid the inaccuracy that manual judgment introduces, researchers have collected electronic images of tea and used image algorithms to quantify the curling degree of the leaves automatically (for example, patent CN115655144A). Because these methods all perform pixel-level edge detection, skeleton detection, texture feature extraction and the like on the tea image, practical application shows that image quality determines the accuracy of the quantification result to a great extent. At the same time, the appearance features of dry tea are diverse, complex and fine-grained, so high-quality images depend on professional equipment and personnel, which makes image acquisition costly.
Disclosure of Invention
To address these problems, the invention provides an image reconstruction method and device for automatically quantifying the curling degree of tea leaves. The method processes the tea image to improve its quality, thereby reducing the dependence on dedicated image acquisition hardware and trained personnel.
In order to achieve the above object, the present invention adopts the following solutions: an image reconstruction method for automatically quantifying tea leaf curl, comprising the steps of:
acquiring an initial tea image, and calling an image reconstruction network, the image reconstruction network being provided in sequence with a feature enhancement module, a feature processing module and a feature building module; performing feature extraction on the initial tea image based on the feature enhancement module to determine an indication feature map; performing feature extraction on the indication feature map based on the feature processing module to determine a matching feature map; and reconstructing the matching feature map based on the feature building module to determine a target image. The feature enhancement module is provided with an enhanced convolution layer and an enhanced dilated convolution layer, and the feature maps generated by the convolution operations of the enhanced convolution layer and the enhanced dilated convolution layer are integrated to obtain the indication feature map.
Further, the process by which the feature enhancement module extracts features from the initial tea image comprises the following steps:
step one, performing a convolution operation on the initial tea image with the enhanced convolution layer to determine a first path feature map, and performing a convolution operation on the initial tea image with the enhanced dilated convolution layer to determine a second path feature map;
step two, integrating the first path feature map with the second path feature map to generate a third path feature map;
step three, activating the third path feature map with a first enhancement activation function to determine a fourth path feature map;
step four, taking the difference between the first path feature map and the second path feature map to determine a fifth path feature map;
step five, performing a global variance pooling operation on the fifth path feature map in the channel direction to generate a first weight matrix;
step six, activating the first weight matrix with a second enhancement activation function to determine a second weight matrix;
step seven, performing an element-wise product operation between the second weight matrix and the fourth path feature map to determine the indication feature map.
Further, the enhanced convolution layer uses a 5×5 convolution kernel, and the enhanced dilated convolution layer uses a 3×3 convolution kernel with a dilation rate of 2.
Further, the first path feature map and the second path feature map are added to generate a third path feature map.
Further, the first enhanced activation function is a PReLU function, and the second enhanced activation function is a sigmoid function.
Further, the second weight matrix is multiplied element-wise with the matching feature map to obtain a fine-tuned matching feature map, and the fine-tuned matching feature map is then reconstructed by the feature building module to determine the target image.
Further, the feature processing module comprises a plurality of complex feature extraction units connected in series, and the operation process inside the complex feature extraction units is expressed as the following mathematical model:
wherein F_in denotes the feature map input to the complex feature extraction unit; Conv_1 and Conv_2 denote the first convolution module and the second convolution module, respectively; F_1 denotes the feature map determined based on the first convolution module, and F_2 denotes the feature map determined based on the second convolution module; the receptive field of the first convolution module is smaller than the receptive field of the second convolution module. G_1 denotes the first parameter generation module and G_2 the second parameter generation module; ⊙ denotes the element-wise product operation; Fuse_1 denotes the first fusion module and Fuse_2 the second fusion module, and U_1 and U_2 respectively denote the feature maps determined based on the first fusion module and the second fusion module. G_3 denotes the third parameter generation module; V denotes the feature map obtained as the element-wise product of the third parameter and a fusion feature map; FC denotes a fully connected layer; σ denotes an internal activation function, and w denotes the weight vector output by the internal activation function. G_4 denotes the fourth parameter generation module, and F_out denotes the feature map determined by the complex feature extraction unit.
Further, at least one of the following conditions is satisfied:
condition one: the first parameter generation module internally comprises a first global pooling layer and a first branch activation function arranged in sequence, the first global pooling layer being used to perform global max pooling on the feature map in the channel direction;
condition two: the second parameter generation module internally comprises a second global pooling layer and a second branch activation function arranged in sequence, the second global pooling layer being used to perform global max pooling on the feature map in the channel direction;
condition three: the third parameter generation module internally comprises a third global pooling layer and a third branch activation function arranged in sequence, the third global pooling layer being used to perform global average pooling on the feature map in the spatial direction.
Further, the internal calculation process of the fourth parameter generation module comprises:
concatenating the first parameter output by the first parameter generation module with the second parameter output by the second parameter generation module to obtain a first process parameter;
performing an element-wise product operation between the weight vector w and the first process parameter to obtain a second process parameter;
performing a global max pooling operation on the second process parameter in the channel direction to obtain a third process parameter;
and activating the third process parameter with a fourth branch activation function to obtain a fourth parameter, which serves as the output of the fourth parameter generation module.
The invention also provides an image reconstruction device for automatically quantifying tea leaf curl, comprising a processor and a memory, the memory storing a computer program and the processor executing the above method by loading the computer program.
The beneficial effects of the invention are as follows:
(1) The invention reconstructs the tea image with computer vision techniques, enhancing the resolution of the tea image and the sharpness of its edge and texture details. Tests show that after an image shot with an ordinary smartphone is reconstructed and enhanced, the quantification result comes very close to that of images shot by professional equipment and personnel, which reduces the cost of quantitative tea quality evaluation and improves its efficiency;
(2) The invention creatively places a feature enhancement module at the head of the network, which applies ordinary convolution and dilated convolution simultaneously so that image feature information is examined and learned from different receptive fields; under the guidance of the second weight matrix, the different kinds of information are deeply blended, enabling the subsequent feature processing module to capture more spatial and contextual mapping relations and strengthening the network's ability to learn different types of image features;
(3) Because the visual characteristics of the acquired tea images vary greatly across spatial positions, combining features from different positions in the conventional way suffers from cross-regional interference and spatial variation, and the subsequent quantitative classification always shows a high misjudgment rate. The invention therefore purposefully designs the complex feature extraction unit, which not only provides a dual cross feature integration mechanism but also uses simple transformation operations inside the unit to build a bridge for information exchange between the spatial and channel dimensions, so that information is dynamically exchanged across different receptive fields and different dimensions as it propagates forward through the network. This flexible mode of information transfer largely avoids the rejection and interference between different features caused by rigid misalignment of image information, and the robustness of the model is especially prominent when the image resolution is raised by high-magnification reconstruction.
Drawings
FIG. 1 is a flow chart of an image reconstruction method provided by the invention;
Detailed Description
The invention is further described below with reference to the accompanying drawings:
example 1: the present embodiment provides an image reconstruction method for automatically quantifying the curl degree of tea, as shown in fig. 1, comprising the following steps:
acquiring an initial tea image, and calling an image reconstruction network; the image reconstruction network is provided in sequence with a feature enhancement module, a feature processing module and a feature building module;
extracting features of the initial tea image based on the feature enhancement module to determine an indication feature map; performing feature extraction on the indication feature map based on the feature processing module to determine a matching feature map; and reconstructing the matching feature map based on the feature building module to determine a target image. The resolution of the target image is greater than the resolution of the initial tea image. A minimal sketch of this three-stage pipeline follows.
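The following sketch assumes a PyTorch implementation; the three sub-modules are placeholders fleshed out in the sketches given with the individual modules below, and the assumption that the enhancement module also returns the second weight matrix is specific to this sketch.

```python
# A minimal sketch, assuming PyTorch; the three modules are placeholders.
import torch.nn as nn

class ImageReconstructionNetwork(nn.Module):
    def __init__(self, enhance: nn.Module, process: nn.Module, build: nn.Module):
        super().__init__()
        self.enhance = enhance   # feature enhancement module
        self.process = process   # feature processing module
        self.build = build       # feature building module

    def forward(self, x):
        # Assumption of this sketch: the enhancement module returns the
        # indication feature map together with the second weight matrix.
        indication, w2 = self.enhance(x)
        matching = self.process(indication)   # matching feature map
        # Optional fine-tuning with w2, as in some embodiments below.
        return self.build(w2 * matching)      # higher-resolution target image
```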
Specifically, in this embodiment, the process of feature extraction performed by the feature enhancement module on the initial tea image includes the following steps:
step one, performing a convolution operation on the initial tea image with the enhanced convolution layer to determine a first path feature map, and performing a convolution operation on the initial tea image with the enhanced dilated convolution layer to determine a second path feature map; the enhanced convolution layer is a conventional convolution layer with a 5×5 kernel and a stride of 1, and the enhanced dilated convolution layer uses a 3×3 kernel with a dilation rate of 2 and a stride of 1.
Step two, adding the first path characteristic diagram and the second path characteristic diagram to generate a third path characteristic diagram;
step three, after the third path feature map is activated by the first enhanced activation function (PReLU function), a fourth path feature map is determined;
step four, the first path characteristic diagram and the second path characteristic diagram are subjected to difference, and a fifth path characteristic diagram is determined;
step five, carrying out global variance pooling operation (respectively calculating variance values of all feature values on each spatial position of the fifth path feature map) on the fifth path feature map in the channel direction, and compressing the number of channels of the fifth path feature map to be 1 in the pooling process to generate a first weight matrix; the width and the height of the first weight matrix are respectively equal to those of the fifth path characteristic diagram;
step six, after the first weight matrix is activated by a second enhancement activation function (sigmoid function), determining a second weight matrix;
and step seven, performing element corresponding product operation on the second weight matrix and the fourth path feature map, and determining the indication feature map.
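A minimal PyTorch sketch of these seven steps follows; the channel width and padding choices are assumptions not fixed by the text, and the second weight matrix is returned alongside the indication feature map so it can be reused for the fine-tuning described below.

```python
import torch
import torch.nn as nn

class FeatureEnhancementModule(nn.Module):
    def __init__(self, in_channels: int = 3, channels: int = 64):
        super().__init__()
        # Step one: ordinary 5x5 convolution and 3x3 dilated convolution
        # (dilation rate 2), both with stride 1 and size-preserving padding.
        self.conv = nn.Conv2d(in_channels, channels, 5, stride=1, padding=2)
        self.dilated = nn.Conv2d(in_channels, channels, 3, stride=1,
                                 padding=2, dilation=2)
        self.prelu = nn.PReLU()   # first enhancement activation function

    def forward(self, x):
        f1 = self.conv(x)         # first path feature map
        f2 = self.dilated(x)      # second path feature map
        f3 = f1 + f2              # step two: integration by addition
        f4 = self.prelu(f3)       # step three
        f5 = f1 - f2              # step four: difference
        # Step five: global variance pooling along the channel direction,
        # i.e. the variance of all channel values at each spatial position.
        w1 = f5.var(dim=1, keepdim=True)          # shape (N, 1, H, W)
        w2 = torch.sigmoid(w1)    # step six: second weight matrix
        indication = w2 * f4      # step seven: element-wise product
        # w2 is also returned so that, as in some embodiments, it can later
        # fine-tune the matching feature map before reconstruction.
        return indication, w2
```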
In some embodiments, the second weight matrix is multiplied element-wise with the matching feature map to obtain a fine-tuned matching feature map, and the feature building module then reconstructs the fine-tuned matching feature map to determine the target image. In this way the network fuses imaged spatial details with abstract semantic information, captures long-range dependencies among image features more effectively, gains the ability to interact across feature levels and to dynamically fine-tune spatial information, and learns the diverse and complex features of tea images better.
The feature processing module can be implemented with any existing feature extraction method or module that extracts image features effectively. As an example, in this embodiment the feature processing module comprises six complex feature extraction units connected in series, and the internal operation of each complex feature extraction unit is expressed by the following mathematical model:
wherein F_in denotes the feature map input to the complex feature extraction unit; Conv_1 and Conv_2 denote the first convolution module and the second convolution module, respectively. The first convolution module comprises a first conventional convolution layer (3×3 kernel, stride 1) and a first ReLU activation layer arranged in series; the second convolution module comprises a second conventional convolution layer (5×5 kernel, stride 1) and a second ReLU activation layer arranged in series. F_1 denotes the feature map determined based on the first convolution module, and F_2 denotes the feature map determined based on the second convolution module.
G_1 denotes the first parameter generation module, which takes the F_1 feature map as input and outputs the first parameter p_1; G_2 denotes the second parameter generation module, which takes the F_2 feature map as input and outputs the second parameter p_2. ⊙ denotes the element-wise product operation; Fuse_1 denotes the first fusion module and Fuse_2 the second fusion module, and U_1 and U_2 respectively denote the feature maps they determine; U_1 and U_2 are both the same size as the F_in feature map. The first and second fusion modules may be implemented with any existing method/module for fusing feature maps; specifically, in this embodiment the first fusion module comprises a concatenation operation layer, a fusion convolution layer (3×3 kernel, stride 1) and a first fusion activation layer (a ReLU function) arranged in series. The second fusion module operates as follows: its two inputs are first added to obtain a semi-fused feature map, which is then activated by a second fusion activation layer (a tanh function) to obtain the U_2 feature map.
G_3 denotes the third parameter generation module, which takes the U_1 feature map as input and outputs the third parameter p_3; the third parameter is a vector whose length equals the number of channels of the U_1 feature map. V denotes the feature map obtained as the element-wise product of the third parameter and the U_2 feature map. FC denotes a fully connected layer that takes the third parameter as input and outputs a vector of length 2; σ denotes an internal activation function, and w denotes the weight vector output by the internal activation function, which has length 2.
G_4 denotes the fourth parameter generation module, which takes the first parameter, the second parameter and the weight vector w as input; p_4 denotes the fourth parameter output by the fourth parameter generation module, and F_out denotes the feature map determined by the complex feature extraction unit. For each complex feature extraction unit, the input F_in feature map and the output F_out feature map are identical in size. Since several complex feature extraction units are arranged in series, the first unit takes the indication feature map as its F_in feature map; in the subsequent calculation, the F_out feature map output by one unit serves as the F_in feature map of the next, until the F_out feature map of the last unit serves as the matching feature map. The matching feature map is also the output of the feature processing module.
In some embodiments, the first parameter generation module internally comprises a first global pooling layer and a first branch activation function (a sigmoid function) arranged in sequence; the first global pooling layer performs global max pooling on the F_1 feature map in the channel direction (computing the maximum of all feature values at each spatial position of the F_1 feature map), and the first parameter is a two-dimensional matrix. The second parameter generation module comprises a second global pooling layer and a second branch activation function (a sigmoid function) arranged in sequence; the second global pooling layer performs global max pooling on the F_2 feature map in the channel direction (computing the maximum of all feature values at each spatial position of the F_2 feature map), and the second parameter is a two-dimensional matrix. The third parameter generation module comprises a third global pooling layer and a third branch activation function (a sigmoid function) arranged in sequence; the third global pooling layer performs global average pooling on the U_1 feature map in the spatial direction (computing the average of all feature values on each channel of the U_1 feature map), and the third parameter is a one-dimensional vector.
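In code, these three parameter generators reduce to a pooling step followed by a sigmoid; a minimal sketch, assuming PyTorch and (N, C, H, W) tensor layout:

```python
import torch

def channel_max_parameter(f: torch.Tensor) -> torch.Tensor:
    """G_1 / G_2: global max pooling in the channel direction + sigmoid.
    Returns one value per spatial position, i.e. an H x W matrix per sample."""
    return torch.sigmoid(f.max(dim=1, keepdim=True).values)   # (N, 1, H, W)

def spatial_avg_parameter(f: torch.Tensor) -> torch.Tensor:
    """G_3: global average pooling in the spatial direction + sigmoid.
    Returns one value per channel, i.e. a length-C vector per sample."""
    return torch.sigmoid(f.mean(dim=(2, 3)))                  # (N, C)
```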
The internal calculation process of the fourth parameter generation module comprises the following steps:
concatenating the first parameter output by the first parameter generation module with the second parameter output by the second parameter generation module to obtain a first process parameter;
performing an element-wise product operation between the weight vector w and the first process parameter to obtain a second process parameter;
performing a global max pooling operation on the second process parameter in the channel direction to obtain a third process parameter;
activating the third process parameter with a fourth branch activation function (a sigmoid function) to obtain the fourth parameter, a two-dimensional matrix whose width and height respectively equal the width and height of the feature maps inside the unit.
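The following PyTorch sketch assembles one complex feature extraction unit from the pieces above. Because the governing formulas themselves are not reproduced in this text, the cross-integration wiring, the softmax choice for the internal activation function, and the final combination of branch outputs are assumptions marked in the comments; only the convolution sizes, poolings and named activations are taken from the embodiment.

```python
import torch
import torch.nn as nn

class ComplexFeatureExtractionUnit(nn.Module):
    def __init__(self, channels: int = 64):
        super().__init__()
        self.conv1 = nn.Sequential(  # first convolution module: 3x3 + ReLU
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU())
        self.conv2 = nn.Sequential(  # second convolution module: 5x5 + ReLU
            nn.Conv2d(channels, channels, 5, padding=2), nn.ReLU())
        self.fuse1 = nn.Sequential(  # first fusion module: concat->conv->ReLU
            nn.Conv2d(2 * channels, channels, 3, padding=1), nn.ReLU())
        self.fc = nn.Linear(channels, 2)  # fully connected layer FC

    def forward(self, x):
        f1, f2 = self.conv1(x), self.conv2(x)
        # G_1 / G_2: channel-direction global max pooling + sigmoid.
        p1 = torch.sigmoid(f1.max(dim=1, keepdim=True).values)
        p2 = torch.sigmoid(f2.max(dim=1, keepdim=True).values)
        # Assumption: dual cross integration, each branch modulated by the
        # other branch's parameter before the fusion modules.
        u1 = self.fuse1(torch.cat([f1 * p2, f2 * p1], dim=1))
        u2 = torch.tanh(f1 * p2 + f2 * p1)   # second fusion module: add+tanh
        # G_3: spatial global average pooling + sigmoid -> channel vector.
        p3 = torch.sigmoid(u1.mean(dim=(2, 3)))          # (N, C)
        v = u2 * p3[:, :, None, None]   # product of p3 with a fusion map
        w = torch.softmax(self.fc(p3), dim=1)   # assumption: softmax as sigma
        # G_4 per the steps above: concatenate p1/p2, weight by w,
        # channel-direction global max pooling, sigmoid -> H x W matrix p4.
        stacked = torch.cat([p1, p2], dim=1)             # (N, 2, H, W)
        p4 = torch.sigmoid(
            (w[:, :, None, None] * stacked).max(dim=1, keepdim=True).values)
        return v * p4   # assumption: output combines v with the spatial map

# Six units in series form the feature processing module of this embodiment.
feature_processing_module = nn.Sequential(
    *[ComplexFeatureExtractionUnit() for _ in range(6)])
```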
The feature building module can be implemented with any existing method/module capable of image super-resolution reconstruction. As an example, the feature building module of this embodiment internally comprises, arranged in series, a first basic convolution layer (an ordinary convolution layer with a 3×3 kernel), a ReLU activation function, a sub-pixel convolution layer, and a second basic convolution layer (an ordinary convolution layer with a 3×3 kernel).
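A minimal sketch of this upsampling stack, assuming PyTorch; the channel width, the ×2 upscale factor and the output channel count are assumptions, and the sub-pixel convolution layer is realized here as a channel-expanding 3×3 convolution followed by a pixel shuffle.

```python
import torch.nn as nn

def make_feature_building_module(channels: int = 64, scale: int = 2,
                                 out_channels: int = 3) -> nn.Sequential:
    return nn.Sequential(
        nn.Conv2d(channels, channels, 3, padding=1),   # first basic conv, 3x3
        nn.ReLU(),
        # Sub-pixel convolution layer: expand channels, then rearrange them
        # into a scale-times larger spatial grid.
        nn.Conv2d(channels, channels * scale ** 2, 3, padding=1),
        nn.PixelShuffle(scale),
        nn.Conv2d(channels, out_channels, 3, padding=1),  # second basic conv
    )
```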
Tea image reconstruction comparison experiments were carried out on the same test data set with the method provided by the invention and several existing models; the experimental data are shown in Table 1. Peak signal-to-noise ratio (PSNR) and structural similarity (SSIM) are common indicators for quantitatively evaluating the quality of reconstructed images.
Table 1: example 1 results of comparative experiments with existing models
Test results show that after a tea image shot with an ordinary smartphone is reconstructed and enhanced by the method provided by the invention, the accuracy of tea curl quantification improves by 14.2%, whereas reconstruction with the other existing methods improves accuracy by only 2.6% to 6.9%, far below the method provided by the invention.
The foregoing examples merely illustrate specific embodiments of the invention; they are described in detail but are not to be construed as limiting the scope of the invention. It should be noted that those skilled in the art can make several variations and improvements without departing from the concept of the invention, and these all fall within the protection scope of the invention.

Claims (10)

1. An image reconstruction method for automatically quantifying the curling degree of tea leaves, characterized by comprising the following steps:
acquiring an initial tea image, and calling an image reconstruction network; the image reconstruction network is provided in sequence with a feature enhancement module, a feature processing module and a feature building module;
extracting features of the initial tea image based on the feature enhancement module to determine an indication feature map; performing feature extraction on the indication feature map based on the feature processing module to determine a matching feature map; reconstructing the matching feature map based on the feature building module to determine a target image;
wherein the feature enhancement module is provided with an enhanced convolution layer and an enhanced dilated convolution layer, and the feature maps generated by the convolution operations of the enhanced convolution layer and the enhanced dilated convolution layer are integrated to obtain the indication feature map.
2. The method according to claim 1, characterized in that: the process by which the feature enhancement module extracts features from the initial tea image comprises the following steps:
step one, performing a convolution operation on the initial tea image with the enhanced convolution layer to determine a first path feature map, and performing a convolution operation on the initial tea image with the enhanced dilated convolution layer to determine a second path feature map;
step two, integrating the first path feature map with the second path feature map to generate a third path feature map;
step three, activating the third path feature map with a first enhancement activation function to determine a fourth path feature map;
step four, taking the difference between the first path feature map and the second path feature map to determine a fifth path feature map;
step five, performing a global variance pooling operation on the fifth path feature map in the channel direction to generate a first weight matrix;
step six, activating the first weight matrix with a second enhancement activation function to determine a second weight matrix;
step seven, performing an element-wise product operation between the second weight matrix and the fourth path feature map to determine the indication feature map.
3. The method according to claim 2, characterized in that: the enhanced convolution layer uses a 5×5 convolution kernel, and the enhanced dilated convolution layer uses a 3×3 convolution kernel with a dilation rate of 2.
4. The method according to claim 2, characterized in that: the first path feature map and the second path feature map are added to generate the third path feature map.
5. The method according to claim 2, characterized in that: the first enhanced activation function is a PReLU function, and the second enhanced activation function is a sigmoid function.
6. The method according to claim 2, characterized in that: the second weight matrix is multiplied element-wise with the matching feature map to obtain a fine-tuned matching feature map, and the fine-tuned matching feature map is then reconstructed by the feature building module to determine the target image.
7. The method according to claim 1, characterized in that: the feature processing module comprises a plurality of complex feature extraction units connected in series, and the internal operation of each complex feature extraction unit is expressed by the following mathematical model:
wherein F_in denotes the feature map input to the complex feature extraction unit; Conv_1 and Conv_2 denote a first convolution module and a second convolution module, respectively; F_1 denotes the feature map determined based on the first convolution module, and F_2 denotes the feature map determined based on the second convolution module;
G_1 denotes a first parameter generation module and G_2 a second parameter generation module; ⊙ denotes the element-wise product operation; Fuse_1 denotes a first fusion module and Fuse_2 a second fusion module; U_1 and U_2 respectively denote the feature maps determined based on the first fusion module and the second fusion module;
G_3 denotes a third parameter generation module; V denotes the feature map obtained as the element-wise product of the third parameter and a fusion feature map; FC denotes a fully connected layer; σ denotes an internal activation function, and w denotes the weight vector output by the internal activation function;
G_4 denotes a fourth parameter generation module, and F_out denotes the feature map determined by the complex feature extraction unit.
8. The method according to claim 7, characterized in that at least one of the following conditions is satisfied:
condition one: the first parameter generation module internally comprises a first global pooling layer and a first branch activation function arranged in sequence, the first global pooling layer being used to perform global max pooling on the feature map in the channel direction;
condition two: the second parameter generation module internally comprises a second global pooling layer and a second branch activation function arranged in sequence, the second global pooling layer being used to perform global max pooling on the feature map in the channel direction;
condition three: the third parameter generation module internally comprises a third global pooling layer and a third branch activation function arranged in sequence, the third global pooling layer being used to perform global average pooling on the feature map in the spatial direction.
9. The method according to claim 8, characterized in that: the internal calculation process of the fourth parameter generation module comprises:
concatenating the first parameter output by the first parameter generation module with the second parameter output by the second parameter generation module to obtain a first process parameter;
performing an element-wise product operation between the weight vector w and the first process parameter to obtain a second process parameter;
performing a global max pooling operation on the second process parameter in the channel direction to obtain a third process parameter;
and activating the third process parameter with a fourth branch activation function to obtain a fourth parameter, which serves as the output of the fourth parameter generation module.
10. An image reconstruction device for automatically quantifying tea leaf curl, comprising a processor and a memory, the memory storing a computer program, characterized in that: the processor is configured to perform the method of any one of claims 1 to 9 by loading the computer program.
CN202410212090.4A 2024-02-27 2024-02-27 Image reconstruction method and device for automatically quantifying tea leaf curl Pending CN117788477A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202410212090.4A CN117788477A (en) 2024-02-27 2024-02-27 Image reconstruction method and device for automatically quantifying tea leaf curl

Publications (1)

Publication Number Publication Date
CN117788477A true CN117788477A (en) 2024-03-29

Family

ID=90380120


Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112991173A (en) * 2021-03-12 2021-06-18 西安电子科技大学 Single-frame image super-resolution reconstruction method based on dual-channel feature migration network
CN113362223A (en) * 2021-05-25 2021-09-07 重庆邮电大学 Image super-resolution reconstruction method based on attention mechanism and two-channel network
CN113781308A (en) * 2021-05-19 2021-12-10 马明才 Image super-resolution reconstruction method and device, storage medium and electronic equipment
US20220188982A1 (en) * 2019-09-27 2022-06-16 Shenzhen Sensetime Technology Co., Ltd. Image reconstruction method and device, electronic device, and storage medium
US20220351333A1 (en) * 2020-11-16 2022-11-03 Boe Technology Group Co., Ltd. Image reconstruction method, electronic device and computer-readable storage medium
WO2023273336A1 (en) * 2021-06-30 2023-01-05 之江实验室 Pet image region of interest enhanced reconstruction method based on multi-task learning constraint
CN117523275A (en) * 2023-11-06 2024-02-06 腾讯科技(深圳)有限公司 Attribute recognition method and attribute recognition model training method based on artificial intelligence


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination