CN112330666B - Image processing method, system, device and medium based on improved twin network - Google Patents
- Publication number
- CN112330666B CN112330666B CN202011347571.4A CN202011347571A CN112330666B CN 112330666 B CN112330666 B CN 112330666B CN 202011347571 A CN202011347571 A CN 202011347571A CN 112330666 B CN112330666 B CN 112330666B
- Authority
- CN
- China
- Prior art keywords
- image
- network
- definition
- fully
- sublayer
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Links
- 238000003672 processing method Methods 0.000 title claims abstract description 16
- 238000011156 evaluation Methods 0.000 claims abstract description 47
- 238000000034 method Methods 0.000 claims abstract description 25
- 238000012545 processing Methods 0.000 claims abstract description 21
- 238000007781 pre-processing Methods 0.000 claims abstract description 11
- 238000001914 filtration Methods 0.000 claims abstract description 6
- 239000013598 vector Substances 0.000 claims description 36
- 230000006870 function Effects 0.000 claims description 27
- 238000004590 computer program Methods 0.000 claims description 16
- 230000008569 process Effects 0.000 claims description 8
- 238000009499 grossing Methods 0.000 claims description 5
- 238000005516 engineering process Methods 0.000 abstract description 3
- 238000012986 modification Methods 0.000 description 5
- 230000004048 modification Effects 0.000 description 5
- 238000010586 diagram Methods 0.000 description 4
- 230000008859 change Effects 0.000 description 3
- 238000004458 analytical method Methods 0.000 description 2
- 238000010606 normalization Methods 0.000 description 2
- 230000004075 alteration Effects 0.000 description 1
- 238000004364 calculation method Methods 0.000 description 1
- 230000003247 decreasing effect Effects 0.000 description 1
- 238000013135 deep learning Methods 0.000 description 1
- 230000000694 effects Effects 0.000 description 1
- 238000000605 extraction Methods 0.000 description 1
- 238000005259 measurement Methods 0.000 description 1
- 230000003287 optical effect Effects 0.000 description 1
- 230000000717 retained effect Effects 0.000 description 1
- 239000007787 solid Substances 0.000 description 1
- 230000003595 spectral effect Effects 0.000 description 1
- 238000012549 training Methods 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30168—Image quality inspection
Abstract
The invention discloses an image processing method, system, device and medium based on an improved twin network, relating to the field of image processing. The method comprises the following steps: preprocessing an original image to obtain an image I1; filtering image I1 to extract its low-frequency information, obtaining an image I2; and inputting image I1 and image I2 into an image sharpness determination model, which outputs a sharpness evaluation predicted value and a sharpness judgment predicted value of the original image. By combining the twin network with image processing technology, the invention obtains the sharpness evaluation predicted value and the sharpness judgment predicted value of an image simultaneously while extracting image features automatically.
Description
Technical Field
The present invention relates to the field of image processing, and in particular, to an image processing method, system, device, and medium based on an improved twin network.
Background
The degree of blur in an image directly affects the recognition and analysis of its content, so blur judgment is a prerequisite for subsequent tasks. Most existing image sharpness evaluation methods are built on statistics of image edges and on overall information entropy. In general, the more detail information an image retains and the sharper its gray-level transitions, the sharper the image is considered to be. On this basis, the commonly used image sharpness evaluation functions fall mainly into gradient functions, spectral functions, and entropy functions.
In recent years, deep learning based on supervised learning has continually achieved new breakthroughs in the field of computer vision.
Existing image processing techniques can only obtain a sharpness evaluation index for an image (for example, by computing a sharpness index with a traditional evaluation function) or give a sharp/blurred classification; they cannot obtain the sharpness evaluation predicted value and the sharpness judgment predicted value of the image at the same time.
Disclosure of Invention
The invention provides an image processing method, a system, a device and a medium based on an improved twin network.
To achieve the above object, the present invention provides an image processing method based on an improved twin network, the method comprising:
preprocessing an original image to obtain an image I1;
filtering image I1 to extract its low-frequency information, obtaining an image I2;
inputting image I1 and image I2 into an image sharpness determination model, which outputs a sharpness evaluation predicted value and a sharpness judgment predicted value of the original image;
wherein the image sharpness determination model processes image I1 and image I2 as follows:
a twin network is provided in the image sharpness determination model; the twin network contains a network 1 and a network 2 that have the same structure and share parameters;
image I1 is input into network 1 to obtain a feature vector u, and image I2 is input into network 2 to obtain a feature vector v;
the fully connected layer of the twin network comprises a first fully connected sublayer fc1, a second fully connected sublayer fc2 and a third fully connected sublayer fc3; the output of fc1 is input into both fc2 and fc3; fc2 outputs the sharpness evaluation predicted value of the image, and fc3 outputs the sharpness judgment predicted value of the image; at fc1, the four vectors u, v, |u-v| and u×v are combined into one vector as the input vector of fc1.
The invention uses an improved twin network to extract image features automatically while obtaining the sharpness evaluation predicted value and the sharpness judgment predicted value of the image at the same time.
The main differences between the invention and the prior art are:
1. the input of the network is constructed reasonably, combining the twin network with image sharpness analysis;
2. the image sharpness evaluation predicted value and the image sharpness judgment predicted value are obtained simultaneously.
Preferably, preprocessing the original image specifically means normalizing its size to a preset size. The input image may have any size or aspect ratio; scale normalization converts all images to a uniform size to facilitate modeling.
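The scale-normalization step can be sketched with nearest-neighbour sampling in plain NumPy; the 224×224 default matches the later data example, but the function name and NumPy-only implementation are illustrative assumptions, not the patent's method (a production system would more likely use a library resizer such as OpenCV's).

```python
import numpy as np

def resize_nearest(img: np.ndarray, size=(224, 224)) -> np.ndarray:
    # Nearest-neighbour scale normalization: map each output pixel back
    # to the source pixel at the same relative position.
    h, w = img.shape[:2]
    rows = (np.arange(size[0]) * h // size[0]).clip(0, h - 1)
    cols = (np.arange(size[1]) * w // size[1]).clip(0, w - 1)
    return img[np.ix_(rows, cols)]

small = np.arange(12.0).reshape(3, 4)
print(resize_nearest(small, size=(6, 8)).shape)  # (6, 8)
```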
Preferably, the method processes image I1 with a low-pass filter to obtain image I2. A sharp image is rich in high-frequency components and contains more detail information; a blurred image has fewer high-frequency components and less detail. The low-pass filter retains the low-frequency information to a certain extent, and the difference between the image before and after filtering characterizes the change in its low-frequency content.
Preferably, in the method, the first fully-connected sublayer fc1The input vector of (a) is:
fc1_input=concat(u,v,|u-v|,u×v)。
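A minimal sketch of this feature combination, assuming u×v denotes the element-wise product of the two branch embeddings (one plausible reading of the patent's notation):

```python
import numpy as np

def fc1_input(u: np.ndarray, v: np.ndarray) -> np.ndarray:
    # concat(u, v, |u - v|, u * v): the absolute difference and the
    # element-wise product act as similarity features between branches.
    return np.concatenate([u, v, np.abs(u - v), u * v])

u = np.array([1.0, 2.0])
v = np.array([0.5, 4.0])
print(fc1_input(u, v))  # an 8-dimensional vector (4 x the embedding size)
```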
Preferably, in the method, the loss function of the image sharpness determination model is:
Loss = L1(m, m_gt) + α × CE(n, n_gt)
where m_gt denotes the sharpness evaluation ground-truth value of the image and n_gt the sharpness judgment ground-truth value; L1 denotes the L1-norm loss and CE the cross-entropy loss; α is a balance factor, m the sharpness evaluation predicted value of the image, and n the sharpness judgment predicted value of the image.
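A numeric sketch of this multitask loss, assuming the judgment head n is a two-class (sharp/blurred) output scored with softmax cross-entropy; the function name and logit layout are illustrative:

```python
import numpy as np

def multitask_loss(m: float, m_gt: float,
                   n_logits: np.ndarray, n_gt: int,
                   alpha: float = 1.0) -> float:
    # Loss = L1(m, m_gt) + alpha * CE(n, n_gt)
    l1 = abs(m - m_gt)                       # L1-norm loss on the score
    z = n_logits - np.max(n_logits)          # numerically stable softmax
    log_probs = z - np.log(np.sum(np.exp(z)))
    ce = -log_probs[n_gt]                    # cross-entropy on the class
    return float(l1 + alpha * ce)

# Perfect score prediction, maximally uncertain classifier: loss = ln 2
print(multitask_loss(2.0, 2.0, np.array([0.0, 0.0]), 1))
```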
Preferably, in the method, the method further comprises calculating a sharpness evaluation true value of the image using a sharpness evaluation function.
The present invention also provides an image processing system based on an improved twin network, the system comprising:
a preprocessing unit for preprocessing an original image to obtain an image I1;
a smoothing unit for smoothing image I1 to obtain an image I2;
an image sharpness determination model for processing the input images I1 and I2 and outputting a sharpness evaluation predicted value and a sharpness judgment predicted value of the original image;
wherein the image sharpness determination model processes image I1 and image I2 as follows:
a twin network is provided in the image sharpness determination model; the twin network contains a network 1 and a network 2 that have the same structure and share parameters;
image I1 is input into network 1 to obtain a feature vector u, and image I2 is input into network 2 to obtain a feature vector v;
the fully connected layer of the twin network comprises a first fully connected sublayer fc1, a second fully connected sublayer fc2 and a third fully connected sublayer fc3; the output of fc1 is input into both fc2 and fc3; fc2 outputs the sharpness evaluation predicted value of the image, and fc3 outputs the sharpness judgment predicted value of the image; at fc1, the four vectors u, v, |u-v| and u×v are combined into one vector as the input vector of fc1.
Preferably, the input vector of the first fully connected sublayer fc1 in the system is:
fc1_input = concat(u, v, |u-v|, u×v);
the loss function of the image sharpness determination model is:
Loss = L1(m, m_gt) + α × CE(n, n_gt)
where m_gt denotes the sharpness evaluation ground-truth value of the image and n_gt the sharpness judgment ground-truth value; L1 denotes the L1-norm loss and CE the cross-entropy loss; α is a balance factor, m the sharpness evaluation predicted value of the image, and n the sharpness judgment predicted value of the image.
The invention also provides an image processing device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the improved twin network based image processing method when executing the computer program.
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the improved twin network based image processing method.
The one or more technical solutions provided by the invention have at least the following technical effects or advantages:
by designing a multitask twin network, the prediction result comprises both the image sharpness evaluation predicted value and the image sharpness judgment predicted value;
at the fully connected layer of the network, taking the two metric terms |u-v| and u×v into account increases the expressive capacity of the network.
Drawings
The accompanying drawings, which are included to provide a further understanding of the embodiments of the invention and are incorporated in and constitute a part of this specification, illustrate embodiments of the invention and together with the description serve to explain the principles of the invention;
FIG. 1 is a schematic structural diagram of an image sharpness determination model;
fig. 2 is a schematic composition diagram of an image processing system based on an improved twin network.
Detailed Description
In order that the above objects, features and advantages of the present invention can be more clearly understood, a more particular description of the invention will be rendered by reference to the appended drawings. It should be noted that the embodiments of the present invention and features of the embodiments may be combined with each other without conflicting with each other.
In the following description, numerous specific details are set forth in order to provide a thorough understanding of the present invention; however, the invention may be practiced in ways other than those specifically described, and the scope of the invention is therefore not limited by the specific embodiments disclosed below.
It should be understood that the terms "a" and "an" do not limit quantity: an element described in the singular in one embodiment may be plural in another embodiment.
Example one
The embodiment of the invention mainly addresses image blur judgment, starting from a practical observation: a sharp image contains a large amount of high-frequency information, so it loses more components after passing through a low-pass filter, and the structural similarity between the filtered and unfiltered versions is small; for a blurred image the opposite is true. The images before and after low-pass filtering are used as inputs, an improved twin network extracts the image features, and the sharpness evaluation predicted value and the sharpness judgment predicted value of the image are obtained at the same time. This is realized by an image sharpness determination model; FIG. 1 is a structural schematic diagram of the model. The procedure is as follows:
The size of the original image is normalized to M×N, obtaining an image I1. The input image may have any size or aspect ratio; scale normalization converts all images to a uniform size to facilitate modeling. The normalized size can be adjusted according to actual needs and is not specifically limited here.
Image I1 is processed with a low-pass filter to obtain a corresponding image I2. A sharp image is rich in high-frequency components and contains more detail information; a blurred image has fewer high-frequency components and less detail. The low-pass filter retains the low-frequency information to a certain extent, and the difference between the image before and after filtering characterizes the change in its low-frequency content. In practical applications, other devices or technical means with functions similar to a low-pass filter may also be used; the invention is not specifically limited in this respect.
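The low-pass step can be sketched as a separable Gaussian filter in plain NumPy; the 7×7 kernel matches the later data example, while the sigma value and edge-replication padding are illustrative assumptions:

```python
import numpy as np

def gaussian_lowpass(img: np.ndarray, ksize: int = 7, sigma: float = 1.5) -> np.ndarray:
    # Build a normalized 1-D Gaussian kernel and apply it separably
    # (rows then columns); replicate edges so the output keeps its size.
    ax = np.arange(ksize) - ksize // 2
    k = np.exp(-(ax ** 2) / (2.0 * sigma ** 2))
    k /= k.sum()
    pad = ksize // 2
    padded = np.pad(img.astype(float), pad, mode="edge")
    padded = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, padded)
    padded = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, padded)
    return padded[pad:-pad, pad:-pad]

# Smoothing removes high-frequency content, so the variance drops.
noise = np.random.default_rng(0).standard_normal((32, 32))
print(gaussian_lowpass(noise).var() < noise.var())  # True
```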
A twin network with shared parameters is designed: the twin network contains a network 1 and a network 2 that have the same structure and share parameters. I1 is input into network 1 to obtain a feature vector u, and I2 is input into network 2 to obtain a feature vector v.
The fully connected layer of the twin network comprises a first fully connected sublayer fc1, a second fully connected sublayer fc2 and a third fully connected sublayer fc3. The output of fc1 is input into both fc2 and fc3; fc2 outputs the sharpness evaluation predicted value of the image, and fc3 outputs the sharpness judgment predicted value of the image.
At the fully connected (FC) layer of the twin network, not only the vectors u and v but also the two metric terms |u-v| and u×v are taken into account; the four vectors are combined into one large vector as the input vector of the first fully connected sublayer fc1:
fc1_input = concat(u, v, |u-v|, u×v)
The second fully connected sublayer fc2 and the third fully connected sublayer fc3 are two parallel sublayers whose outputs are denoted m and n, where m represents the sharpness evaluation predicted value of the image and n represents the sharpness judgment predicted value of the image. A multitask loss is constructed:
Loss = L1(m, m_gt) + α × CE(n, n_gt)
where m_gt denotes the sharpness evaluation ground-truth value of the image and n_gt the sharpness judgment ground-truth value; L1 denotes the L1-norm loss, CE the cross-entropy loss, and α a balance factor.
Sharpness evaluation functions usable in this embodiment include the Tenengrad, Laplacian, SMD and Brenner functions, any of which can be used to calculate m_gt. In practical applications other sharpness evaluation functions may also be adopted; the invention does not limit the specific choice of sharpness evaluation function.
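One common concrete form of the Laplacian evaluation function named here is the variance of the Laplacian response; this NumPy sketch (the function name and the 4-neighbour kernel are illustrative choices) returns a higher score for sharper images:

```python
import numpy as np

def laplacian_sharpness(img: np.ndarray) -> float:
    # Convolve with the 4-neighbour Laplacian kernel (valid region only)
    # and return the variance of the response as the sharpness score.
    k = np.array([[0.0,  1.0, 0.0],
                  [1.0, -4.0, 1.0],
                  [0.0,  1.0, 0.0]])
    h, w = img.shape
    out = np.zeros((h - 2, w - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + h - 2, j:j + w - 2]
    return float(out.var())

checker = (np.indices((16, 16)).sum(axis=0) % 2).astype(float)  # sharp pattern
flat = np.full((16, 16), 0.5)                                   # no detail
print(laplacian_sharpness(checker) > laplacian_sharpness(flat))  # True
```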
The image sharpness evaluation predicted value m and the image sharpness judgment predicted value n are output.
The method is described below with reference to specific data examples:
given an image, its size is normalized to M × N-224 × 224 size, resulting in image I1;
Using a Gaussian low pass filter pair I of size 7x71Processing to obtain image I2;
When a twin network is designed, a shared ResNet-18 network feature extraction structure is utilized, and an image definition evaluation function corresponding to m is calculated by adopting a Laplacian evaluation function to obtain mgt;
In the training phase, alpha in the Loss function is 1;
and outputting the image definition evaluation predicted value m and the image definition judgment predicted value n by the network.
Example two
Referring to fig. 2, fig. 2 is a schematic diagram of a modified twin network based image processing system, the system comprising:
a preprocessing unit for preprocessing an original image to obtain an image I1;
a smoothing unit for smoothing image I1 to obtain an image I2;
an image sharpness determination model for processing the input images I1 and I2 and outputting a sharpness evaluation predicted value and a sharpness judgment predicted value of the original image;
wherein the image sharpness determination model processes image I1 and image I2 as follows:
a twin network is provided in the image sharpness determination model; the twin network contains a network 1 and a network 2 that have the same structure and share parameters;
image I1 is input into network 1 to obtain a feature vector u, and image I2 is input into network 2 to obtain a feature vector v;
the fully connected layer of the twin network comprises a first fully connected sublayer fc1, a second fully connected sublayer fc2 and a third fully connected sublayer fc3; the output of fc1 is input into both fc2 and fc3; fc2 outputs the sharpness evaluation predicted value of the image, and fc3 outputs the sharpness judgment predicted value of the image; at fc1, the four vectors u, v, |u-v| and u×v are combined into one vector as the input vector of fc1.
In the second embodiment of the invention, the input vector of the first fully connected sublayer fc1 in the system is:
fc1_input = concat(u, v, |u-v|, u×v);
the loss function of the image sharpness determination model is:
Loss = L1(m, m_gt) + α × CE(n, n_gt)
where m_gt denotes the sharpness evaluation ground-truth value of the image and n_gt the sharpness judgment ground-truth value; L1 denotes the L1-norm loss and CE the cross-entropy loss; α is a balance factor, m the sharpness evaluation predicted value of the image, and n the sharpness judgment predicted value of the image.
EXAMPLE III
The invention also provides an image processing device, which comprises a memory, a processor and a computer program stored in the memory and capable of running on the processor, wherein the processor realizes the steps of the improved twin network based image processing method when executing the computer program.
The processor may be a central processing unit (CPU), another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or the processor may be any conventional processor.
The memory may be used to store the computer programs and/or modules, and the processor implements the various functions of the image processing apparatus by running or executing the programs and/or modules and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system, an application program required for at least one function (such as a sound playing function or an image playing function), and the like. Further, the memory may include high-speed random access memory and may also include non-volatile memory, such as a hard disk, a plug-in hard disk, a smart memory card, a secure digital card, a flash memory card, at least one magnetic disk storage device, a flash memory device, or other solid-state storage device.
Example four
The present invention also provides a computer readable storage medium storing a computer program which, when executed by a processor, implements the steps of the improved twin network based image processing method.
If the image processing apparatus is implemented in the form of a software functional unit and sold or used as a stand-alone product, it may be stored in a computer-readable storage medium. Based on this understanding, all or part of the flow of the methods of the embodiments of the present invention may also be completed by a computer program stored in a computer-readable storage medium; when the computer program is executed by a processor, the steps of the above method embodiments can be implemented. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, some intermediate form, and so on. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB disk, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory, a random access memory, an electrical carrier signal, a telecommunications signal, a software distribution medium, and the like. It should be noted that the content of the computer-readable medium may be appropriately increased or decreased as required by legislation and patent practice in the jurisdiction.
While preferred embodiments of the present invention have been described, additional variations and modifications in those embodiments may occur to those skilled in the art once they learn of the basic inventive concepts. Therefore, it is intended that the appended claims be interpreted as including preferred embodiments and all such alterations and modifications as fall within the scope of the invention.
It will be apparent to those skilled in the art that various changes and modifications may be made in the present invention without departing from the spirit and scope of the invention. Thus, if such modifications and variations of the present invention fall within the scope of the claims of the present invention and their equivalents, the present invention is also intended to include such modifications and variations.
Claims (10)
1. An image processing method based on an improved twin network, the method comprising:
preprocessing an original image to obtain an image I1;
filtering image I1 to extract its low-frequency information, obtaining an image I2;
inputting image I1 and image I2 into an image sharpness determination model, the image sharpness determination model outputting a sharpness evaluation predicted value and a sharpness judgment predicted value of the original image;
wherein the image sharpness determination model processes image I1 and image I2 as follows:
a twin network is provided in the image sharpness determination model; the twin network contains a network 1 and a network 2, the network 1 and the network 2 having the same structure and sharing parameters;
image I1 is input into network 1 to obtain a feature vector u, and image I2 is input into network 2 to obtain a feature vector v;
the fully connected layer of the twin network comprises a first fully connected sublayer fc1, a second fully connected sublayer fc2 and a third fully connected sublayer fc3; the output of the first fully connected sublayer fc1 is input into both the second fully connected sublayer fc2 and the third fully connected sublayer fc3; the second fully connected sublayer fc2 outputs the sharpness evaluation predicted value of the image, and the third fully connected sublayer fc3 outputs the sharpness judgment predicted value of the image; wherein, at the first fully connected sublayer fc1, the four vectors u, v, |u-v| and u×v are combined into one vector as the input vector of the first fully connected sublayer fc1.
2. The image processing method based on an improved twin network according to claim 1, wherein the input vector of the first fully connected sublayer fc1 is:
fc1_input = concat(u, v, |u-v|, u×v).
3. The image processing method based on an improved twin network according to claim 1, wherein the loss function of the image sharpness determination model is:
Loss = L1(m, m_gt) + α × CE(n, n_gt)
where m_gt denotes the sharpness evaluation ground-truth value of the image and n_gt the sharpness judgment ground-truth value; L1 denotes the L1-norm loss and CE the cross-entropy loss; α is a balance factor, m the sharpness evaluation predicted value of the image, and n the sharpness judgment predicted value of the image.
4. The image processing method based on an improved twin network according to claim 1, further comprising calculating a sharpness evaluation ground-truth value of the image using a sharpness evaluation function.
5. The image processing method based on an improved twin network according to claim 1, wherein the preprocessing of the original image specifically comprises normalizing the size of the original image to a preset size.
6. The image processing method based on an improved twin network according to claim 1, wherein image I1 is processed with a low-pass filter to obtain image I2.
7. An image processing system based on an improved twin network, the system comprising:
a preprocessing unit for preprocessing an original image to obtain an image I1;
a smoothing unit for filtering image I1 to extract its low-frequency information, obtaining an image I2;
an image sharpness determination model for processing the input images I1 and I2 and outputting a sharpness evaluation predicted value and a sharpness judgment predicted value of the original image;
wherein the image sharpness determination model processes image I1 and image I2 as follows:
a twin network is provided in the image sharpness determination model; the twin network contains a network 1 and a network 2, the network 1 and the network 2 having the same structure and sharing parameters;
image I1 is input into network 1 to obtain a feature vector u, and image I2 is input into network 2 to obtain a feature vector v;
the fully connected layer of the twin network comprises a first fully connected sublayer fc1, a second fully connected sublayer fc2 and a third fully connected sublayer fc3; the output of the first fully connected sublayer fc1 is input into both the second fully connected sublayer fc2 and the third fully connected sublayer fc3; the second fully connected sublayer fc2 outputs the sharpness evaluation predicted value of the image, and the third fully connected sublayer fc3 outputs the sharpness judgment predicted value of the image; wherein, at the first fully connected sublayer fc1, the four vectors u, v, |u-v| and u×v are combined into one vector as the input vector of the first fully connected sublayer fc1.
8. The image processing system based on an improved twin network according to claim 7, wherein the input vector of the first fully connected sublayer fc1 is:
fc1_input = concat(u, v, |u-v|, u×v);
the loss function of the image sharpness determination model is:
Loss = L1(m, m_gt) + α × CE(n, n_gt)
where m_gt denotes the sharpness evaluation ground-truth value of the image and n_gt the sharpness judgment ground-truth value; L1 denotes the L1-norm loss and CE the cross-entropy loss; α is a balance factor, m the sharpness evaluation predicted value of the image, and n the sharpness judgment predicted value of the image.
9. An image processing apparatus comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the improved twin network based image processing method according to any one of claims 1 to 6 when executing the computer program.
10. A computer-readable storage medium, in which a computer program is stored, which, when being executed by a processor, carries out the steps of the method for image processing based on an improved twin network according to any one of claims 1 to 6.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202011347571.4A CN112330666B (en) | 2020-11-26 | 2020-11-26 | Image processing method, system, device and medium based on improved twin network |
Publications (2)
Publication Number | Publication Date |
---|---|
CN112330666A CN112330666A (en) | 2021-02-05 |
CN112330666B true CN112330666B (en) | 2022-04-29 |
Family
ID=74308830
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202011347571.4A Active CN112330666B (en) | 2020-11-26 | 2020-11-26 | Image processing method, system, device and medium based on improved twin network |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN112330666B (en) |
Citations (17)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102036077A (en) * | 2010-12-31 | 2011-04-27 | 太原理工大学 | Device for transmitting high-definition video in local area network (LAN) |
CN104318221A (en) * | 2014-11-05 | 2015-01-28 | 中南大学 | Facial expression recognition method based on ELM |
CN106355195A (en) * | 2016-08-22 | 2017-01-25 | 中国科学院深圳先进技术研究院 | The system and method used to measure image resolution value |
CN107358596A (en) * | 2017-04-11 | 2017-11-17 | 阿里巴巴集团控股有限公司 | A kind of car damage identification method based on image, device, electronic equipment and system |
CN107506717A (en) * | 2017-08-17 | 2017-12-22 | 南京东方网信网络科技有限公司 | Without the face identification method based on depth conversion study in constraint scene |
CN108492275A (en) * | 2018-01-24 | 2018-09-04 | 浙江科技学院 | Based on deep neural network without with reference to stereo image quality evaluation method |
CN108492290A (en) * | 2018-03-19 | 2018-09-04 | 携程计算机技术(上海)有限公司 | Image evaluation method and system |
CN108573276A (en) * | 2018-03-12 | 2018-09-25 | 浙江大学 | A kind of change detecting method based on high-resolution remote sensing image |
CN108830197A (en) * | 2018-05-31 | 2018-11-16 | 平安医疗科技有限公司 | Image processing method, device, computer equipment and storage medium |
CN108898579A (en) * | 2018-05-30 | 2018-11-27 | 腾讯科技(深圳)有限公司 | A kind of image definition recognition methods, device and storage medium |
CN108986075A (en) * | 2018-06-13 | 2018-12-11 | 浙江大华技术股份有限公司 | A kind of judgment method and device of preferred image |
CN110222792A (en) * | 2019-06-20 | 2019-09-10 | 杭州电子科技大学 | A kind of label defects detection algorithm based on twin network |
CN110533097A (en) * | 2019-08-27 | 2019-12-03 | 腾讯科技(深圳)有限公司 | A kind of image definition recognition methods, device, electronic equipment and storage medium |
CN110533631A (en) * | 2019-07-15 | 2019-12-03 | 西安电子科技大学 | SAR image change detection based on the twin network of pyramid pondization |
CN111260594A (en) * | 2019-12-22 | 2020-06-09 | 天津大学 | Unsupervised multi-modal image fusion method |
CN111754474A (en) * | 2020-06-17 | 2020-10-09 | 上海眼控科技股份有限公司 | Visibility identification method and device based on image definition |
CN111798414A (en) * | 2020-06-12 | 2020-10-20 | 北京阅视智能技术有限责任公司 | Method, device and equipment for determining definition of microscopic image and storage medium |
Family Cites Families (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9965717B2 (en) * | 2015-11-13 | 2018-05-08 | Adobe Systems Incorporated | Learning image representation by distilling from multi-task networks |
US10740596B2 (en) * | 2016-11-08 | 2020-08-11 | Nec Corporation | Video security system using a Siamese reconstruction convolutional neural network for pose-invariant face recognition |
2020-11-26: CN application CN202011347571.4A filed (patent CN112330666B, status: Active)
Non-Patent Citations (3)
Title |
---|
Towards high performance video object detection; Zhu X; Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition; 2018-12-31; pp. 7210-7218 *
Image fusion based on Siamese convolutional neural networks; Yang Xue et al.; Computer Systems & Applications; 2020-05-15 (No. 05); pp. 198-203 *
No-reference image sharpness evaluation based on contrast sensitivity; Fan Yuanyuan et al.; Optics and Precision Engineering; 2011-10-15 (No. 10); pp. 183-191 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110399929B (en) | Fundus image classification method, fundus image classification apparatus, and computer-readable storage medium | |
Wang et al. | Blur image identification with ensemble convolution neural networks | |
CN109543548A (en) | A kind of face identification method, device and storage medium | |
CN110023989B (en) | Sketch image generation method and device | |
CN111950723A (en) | Neural network model training method, image processing method, device and terminal equipment | |
Wang et al. | Multifocus image fusion using convolutional neural networks in the discrete wavelet transform domain | |
CN109785246B (en) | Noise reduction method, device and equipment for non-local mean filtering | |
CN110148117B (en) | Power equipment defect identification method and device based on power image and storage medium | |
CN110675334A (en) | Image enhancement method and device | |
CN109492640A (en) | Licence plate recognition method, device and computer readable storage medium | |
CN115631112B (en) | Building contour correction method and device based on deep learning | |
CN107908998A (en) | Quick Response Code coding/decoding method, device, terminal device and computer-readable recording medium | |
CN110796624B (en) | Image generation method and device and electronic equipment | |
Ding et al. | Smoothing identification for digital image forensics | |
Guan et al. | NCDCN: multi-focus image fusion via nest connection and dilated convolution network | |
CN111597845A (en) | Two-dimensional code detection method, device and equipment and readable storage medium | |
CN110717394A (en) | Training method and device of face recognition model, electronic equipment and storage medium | |
CN112330666B (en) | Image processing method, system, device and medium based on improved twin network | |
Tan et al. | Local context attention for salient object segmentation | |
CN111311610A (en) | Image segmentation method and terminal equipment | |
CN110728692A (en) | Image edge detection method based on Scharr operator improvement | |
CN115689947A (en) | Image sharpening method, system, electronic device and storage medium | |
CN111368602A (en) | Face image blurring degree evaluation method and device, readable storage medium and equipment | |
CN113205102B (en) | Vehicle mark identification method based on memristor neural network | |
CN107945137A (en) | Method for detecting human face, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication ||
SE01 | Entry into force of request for substantive examination ||
CB02 | Change of applicant information ||
Address after: 610042 No. 270, Floor 2, No. 8, Jinxiu Street, Wuhou District, Chengdu, Sichuan
Applicant after: Chengdu shuzhilian Technology Co.,Ltd.
Address before: No. 2, Floor 4, Building 1, Jule Road Crossing, Section 1, West 1st Ring Road, Wuhou District, Chengdu City, Sichuan Province 610041
Applicant before: CHENGDU SHUZHILIAN TECHNOLOGY Co.,Ltd. |
|
GR01 | Patent grant | ||
GR01 | Patent grant |