CN109727246A - Comparative learning image quality evaluation method based on twin network - Google Patents

Comparative learning image quality evaluation method based on twin network

Info

Publication number
CN109727246A
Authority
CN
China
Prior art keywords
image
quality
network
image block
block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201910077607.2A
Other languages
Chinese (zh)
Other versions
CN109727246B (en)
Inventor
牛玉贞
吴建斌
郭文忠
黄栋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fuzhou University
Original Assignee
Fuzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fuzhou University filed Critical Fuzhou University
Priority to CN201910077607.2A priority Critical patent/CN109727246B/en
Publication of CN109727246A publication Critical patent/CN109727246A/en
Application granted granted Critical
Publication of CN109727246B publication Critical patent/CN109727246B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Landscapes

  • Image Analysis (AREA)

Abstract

The invention relates to a comparative learning image quality evaluation method based on a twin (Siamese) network. First, local contrast normalization is applied to the training images, which are then divided into image blocks, and image pairs are generated. Second, the structure of a twin convolutional neural network is designed, and an image quality evaluation model is trained with the designed network. Finally, the image to be evaluated is divided into image blocks and image pairs are generated. The trained model predicts the relative quality of all generated image pairs, a quality ranking of all images is obtained, and a quality score for each image is derived from the ranking. The method converts the image quality evaluation problem into a quality comparison problem between image blocks; by comparing image blocks pairwise and aggregating the comparison results of each image against all the others, a quality score is obtained for each image, and no-reference image quality evaluation performance can be significantly improved.

Description

A comparative learning image quality evaluation method based on a twin network
Technical field
The present invention relates to the fields of image and video processing and computer vision, and in particular to a comparative learning image quality evaluation method based on a twin network.
Background art
Digital images are particularly important in today's highly informatized world, but distortion frequently arises in everyday use, for example during image acquisition, compression, and transmission. To make better use of digital images, image quality evaluation has therefore become particularly important. With the development of convolutional neural networks, many researchers have begun to use them for no-reference image quality assessment, and many CNN-based no-reference image quality evaluation algorithms have been proposed. For example, Kang et al. applied a shallow convolutional neural network to no-reference image evaluation, obtaining a certain improvement over earlier no-reference models based on hand-crafted feature extraction. Hui et al. proposed extracting features with a pre-trained ResNet; instead of directly learning the image quality score, they fine-tuned the network to learn a probabilistic representation of the distorted image. Bosse et al. proposed a no-reference image quality evaluation method based on a deeper convolutional neural network; they also adapted the network to handle the full-reference image quality evaluation task. Their model is patch-based and does not consider the effect of the uneven spatial distribution of image quality. Kim et al. used the local scores of a full-reference image quality evaluation algorithm as labels to pre-train the model and then fine-tuned it with subjective evaluation scores, so its performance depends on the performance of the chosen full-reference metric. Ma et al. proposed training a deep no-reference image quality evaluation model with a large number of image pairs; the premise of the algorithm is that the distortion type and distortion level of the distorted images must be known, yet in practical no-reference applications the distortion type and level are difficult to obtain.
Although no-reference quality assessment models trained with convolutional networks perform much better than methods based on hand-crafted features, challenges remain. One of them is the lack of training samples. Previous CNN-based no-reference image quality evaluation methods mainly address this problem in two ways. The first divides the image into blocks and uses the score of the complete image as the label of every block; however, the quality of different parts of an image differs, so labeling different blocks with the whole-image score is inaccurate. The second labels images with a full-reference quality evaluation method; the drawback of this approach is that the performance of the algorithm directly depends on the performance of the full-reference metric.
Summary of the invention
The purpose of the present invention is to provide a comparative learning image quality evaluation method based on a twin network, which helps to improve no-reference image quality evaluation performance.
To achieve the above object, the technical scheme of the present invention is a comparative learning image quality evaluation method based on a twin network, comprising the following steps:
Step S1: apply local contrast normalization to the images to be trained, then divide them into image blocks and generate image pairs;
Step S2: design the structure of the twin convolutional neural network and use the designed network to train an image quality evaluation model;
Step S3: divide the image to be tested into image blocks and generate image pairs; use the trained model to predict the relative quality of all generated image pairs, obtain the quality ranking of all images, and derive the quality score of each image from the ranking.
In an embodiment of the present invention, step S1 is implemented as follows:
Step S11: first apply local contrast normalization to the images to be trained. Given an intensity image I(i, j), the normalized value is computed with the following formula:
where C is a constant that prevents the denominator from being zero, K and L define the normalization window size, and ω_{k,l} is a 2D circularly symmetric Gaussian weighting function;
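The formula itself is not reproduced in this text rendering. A standard local contrast normalization consistent with the variables defined above (an assumed reconstruction, not the patent's own typesetting) is:

\hat{I}(i,j) = \frac{I(i,j) - \mu(i,j)}{\sigma(i,j) + C}

\mu(i,j) = \sum_{k=-K}^{K} \sum_{l=-L}^{L} \omega_{k,l} \, I(i+k, j+l)

\sigma(i,j) = \sqrt{ \sum_{k=-K}^{K} \sum_{l=-L}^{L} \omega_{k,l} \, \bigl( I(i+k, j+l) - \mu(i,j) \bigr)^{2} }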
Step S12: divide all locally contrast-normalized images into image blocks of size h × w, sort all image blocks by the standard deviation of each block, and take the middle n blocks as training data;
Step S13: pairwise combine the image blocks selected from all training images to generate image pairs. The pairing follows these principles: 1) image blocks from the same image are not combined; 2) if image block A and image block B have already formed a pair, B is not combined with A again, to avoid data redundancy; 3) a pair is formed only when the quality score difference between the two blocks exceeds a predetermined threshold, otherwise no pair is formed.
In an embodiment of the present invention, step S2 is implemented as follows:
Step S21: design the structure of a twin convolutional neural network. The network consists of two sub-networks: sub-network I and sub-network II. Sub-network I consists of two identical branches that share weights; each branch is composed of N stacked convolutional structures, and the task of sub-network I is to extract the features of the two input image blocks. Sub-network II consists of M fully connected layers; the features extracted by sub-network I are fused, the fused features are used as the input of sub-network II, and sub-network II judges the relative quality of the two input images from the fused features;
Step S22: the twin convolutional neural network uses the N stacked convolutional structures to abstract and learn the image information, then extracts image features through two fully connected layers, which are fed to a classification network for optimization of the quality evaluation score. The task of the classification network is to distinguish which of the two input image blocks has better quality, i.e. the final output of the classification network is the probability that each input image block is the better one, and the image block with the larger probability is judged to be of better quality than the one with the smaller probability;
Step S23: in the training stage, cross entropy is used as the loss function, with the following formula:
where N is the number of image pairs; p is a two-dimensional vector representing the ground-truth quality relationship of the two images; p̂ is also a two-dimensional vector whose first component is the probability that the first image is of better quality than the second and, conversely, whose second component is the probability that the second image is better than the first.
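The loss formula is likewise not reproduced here. A two-class cross-entropy consistent with the description, written under the assumption that p = (p_1, p_2) is the ground-truth vector and p̂ = (p̂_1, p̂_2) the predicted probabilities for the i-th image pair, would be:

L = -\frac{1}{N} \sum_{i=1}^{N} \left( p_{1}^{(i)} \log \hat{p}_{1}^{(i)} + p_{2}^{(i)} \log \hat{p}_{2}^{(i)} \right)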
In an embodiment of the present invention, step S3 is implemented as follows:
Step S31: first apply local contrast normalization to the image under test, then divide it into image blocks of size h × w; sort all image blocks by the standard deviation of each block and take the n blocks ranked in the middle as test data;
Step S32: compare the image blocks pairwise according to the following rules: 1) blocks from the same test image are not compared with each other; 2) each image block is compared with all other image blocks in the test set except the blocks of its own image;
Step S33: obtain the relative score of each image by counting the results of comparing that image with the other images. The final image quality evaluation score of image A is calculated with the following formula:
where P_{A,B} denotes the result of comparing image A with image B: P_{A,B} = 1 means the quality of A is better than that of B, and otherwise the quality of B is better than that of A; N is the number of comparisons each image takes part in; assuming the test set consists of T test images and n image blocks are chosen from each image, then N = (T − 1) × n; S_A denotes the score of image A.
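The score formula is also not reproduced here. An aggregation consistent with the variable definitions (an assumed reconstruction in which the score of image A is its fraction of won block-level comparisons) would be:

S_{A} = \frac{1}{N} \sum_{B} P_{A,B}

where the sum runs over the N block-level comparisons between image A and the other test images.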
Compared with the prior art, the invention has the following beneficial effects: it is applicable to image quality evaluation under multiple distortion types and different distortion levels, and the computed quality score is close to human subjective evaluation scores. The method applies local contrast normalization to the images to be trained, divides them into image blocks, and generates image pairs; it designs the structure of a twin convolutional neural network and uses the designed network to train an image quality evaluation model; it divides the image to be tested into image blocks and generates image pairs; it then predicts the relative quality of all generated image pairs with the trained model, obtains the quality ranking of all images, and derives the quality score of each image from the ranking. The invention comprehensively considers the connection between the image quality score and the distortion type, has a stronger ability to express the distortion information of an image, and can significantly improve no-reference image quality evaluation performance.
Brief description of the drawings
Fig. 1 is a flowchart of the implementation of the method of the present invention.
Fig. 2 is a structural diagram of the convolutional neural network model in an embodiment of the present invention.
Specific embodiment
The technical solution of the present invention is described in detail below with reference to the accompanying drawings.
The present invention provides a comparative learning image quality evaluation method based on a twin network which, as shown in Fig. 1, comprises the following steps:
Step S1: apply local contrast normalization to the images to be trained, then divide them into image blocks and generate image pairs.
Step S11: first apply local contrast normalization to all distorted images. Given an intensity image I(i, j), the normalized value is computed with the following formula:
where C is a constant that prevents the denominator from being zero, K and L define the normalization window size, and ω_{k,l} is a 2D circularly symmetric Gaussian weighting function;
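As an illustration of this step, the following Python sketch implements a local contrast normalization of the kind described above. It is a minimal sketch that assumes the standard MSCN-style formula reconstructed earlier; the Gaussian width, truncation, and constant C are illustrative values, not ones specified by the patent.

import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast_normalize(image, sigma=7 / 6, truncate=3.0, C=1.0):
    """Subtract a Gaussian-weighted local mean and divide by the Gaussian-weighted
    local standard deviation plus C (to keep the denominator away from zero)."""
    image = image.astype(np.float64)
    mu = gaussian_filter(image, sigma, truncate=truncate)           # local mean
    var = gaussian_filter(image * image, sigma, truncate=truncate) - mu * mu
    sigma_local = np.sqrt(np.abs(var))                              # local standard deviation
    return (image - mu) / (sigma_local + C)

A grayscale image loaded as a NumPy array can be passed directly; a color image would first be converted to luminance.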
Step S12: divide all locally contrast-normalized images into image blocks of size h × w, sort all image blocks by the standard deviation (σ) of each block, and take the middle n blocks as training data.
Step S13: pairwise combine the image blocks selected from all training images to generate image pairs. The pairing follows these principles: 1) image blocks from the same image are not combined; 2) if image block A and image block B have already formed a pair, B is not combined with A again, to avoid data redundancy; 3) since the quality difference between blocks of nearly equal quality is small and increases the difficulty of comparative learning, a pair is formed only when the quality score difference between the two blocks exceeds a certain threshold, otherwise no pair is formed.
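A minimal Python sketch of the block selection (step S12) and pair generation (step S13) follows. The values of h, w, n and the threshold, the use of a per-image score as the label of its blocks, and the convention that a higher score means better quality are illustrative assumptions, not values fixed by the patent.

import itertools
import numpy as np

def select_blocks(image, h=64, w=64, n=32):
    """Cut an image into non-overlapping h x w blocks, sort them by standard
    deviation, and keep the n blocks in the middle of the ranking."""
    H, W = image.shape[:2]
    blocks = [image[i:i + h, j:j + w]
              for i in range(0, H - h + 1, h)
              for j in range(0, W - w + 1, w)]
    blocks.sort(key=lambda b: float(np.std(b)))
    mid = len(blocks) // 2
    return blocks[max(0, mid - n // 2): mid + n // 2]

def generate_pairs(blocks_per_image, scores, threshold=0.2):
    """blocks_per_image: one list of blocks per training image; scores: one
    quality score per image, used here as the label of its blocks; a pair is
    kept only if the score difference exceeds `threshold`."""
    indexed = [(i, b) for i, blocks in enumerate(blocks_per_image) for b in blocks]
    pairs = []
    for (ia, ba), (ib, bb) in itertools.combinations(indexed, 2):
        if ia == ib:                                   # rule 1: same source image
            continue
        if abs(scores[ia] - scores[ib]) <= threshold:  # rule 3: quality too close
            continue
        label = 1 if scores[ia] > scores[ib] else 0    # 1: first block is better
        pairs.append((ba, bb, label))                  # rule 2: combinations() never emits (B, A)
    return pairs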
Step S2: design the structure of the twin convolutional neural network and use the designed network to train the image quality evaluation model.
Step S21: design a twin network structure for training the image quality evaluation model. The network is a twin network composed of two branches with identical structure; each branch consists of 5 stacked convolutional structures and three fully connected layers, and the twin network is used for image quality evaluation. The two branches share the same structure. In each branch, the first two stacked convolutional structures each consist of 2 convolutional layers with 3 × 3 kernels followed by a 2 × 2 pooling layer with stride 1, and the last three stacked convolutional structures each consist of 3 convolutional layers with 3 × 3 kernels followed by a 2 × 2 pooling layer with stride 2. All convolutional layers use a stride of 1 with padding so that the input and output sizes of each convolutional layer remain the same. The 5 stacked convolutional structures of the deep convolutional network therefore comprise 13 convolutional layers and 5 pooling layers in total, and every convolutional layer is composed of convolution, batch normalization (BN), and a ReLU non-linear mapping.
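A sketch in PyTorch of one branch as described above. The kernel sizes, pooling strides, and the 13-convolution / 5-pooling layout follow the text; the channel widths, the fully connected sizes, and the single-channel 64 × 64 input are illustrative assumptions.

import torch
import torch.nn as nn

def conv_block(in_ch, out_ch, n_convs, pool_stride):
    """n_convs 3x3 convolutions (stride 1, padding 1), each with BN and ReLU,
    followed by a single 2x2 max-pooling layer."""
    layers = []
    for i in range(n_convs):
        layers += [nn.Conv2d(in_ch if i == 0 else out_ch, out_ch, kernel_size=3,
                             stride=1, padding=1),
                   nn.BatchNorm2d(out_ch),
                   nn.ReLU(inplace=True)]
    layers.append(nn.MaxPool2d(kernel_size=2, stride=pool_stride))
    return nn.Sequential(*layers)

class Branch(nn.Module):
    """One branch: 5 stacked convolutional structures (13 convolutions, 5 pools)
    followed by three fully connected layers, for 64x64 single-channel blocks."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            conv_block(1,   32, n_convs=2, pool_stride=1),   # block 1: 64 -> 63
            conv_block(32,  64, n_convs=2, pool_stride=1),   # block 2: 63 -> 62
            conv_block(64, 128, n_convs=3, pool_stride=2),   # block 3: 62 -> 31
            conv_block(128, 256, n_convs=3, pool_stride=2),  # block 4: 31 -> 15
            conv_block(256, 256, n_convs=3, pool_stride=2),  # block 5: 15 -> 7
        )
        self.fc = nn.Sequential(                             # three fully connected layers
            nn.Flatten(),
            nn.Linear(256 * 7 * 7, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 512), nn.ReLU(inplace=True),
            nn.Linear(512, 128),
        )

    def forward(self, x):
        return self.fc(self.features(x))

Weight sharing between the two branches can be achieved simply by instantiating a single Branch and applying it to both inputs.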
Step S22: the twin network uses the 5 stacked convolutional structures to abstract and learn the image distortion information, then extracts image features through two fully connected layers, which are fed to a classification network for optimization of the quality evaluation score. The classification network consists of a fully connected layer with two nodes and a softmax classification layer; the two nodes correspond to the relative quality of the image pair, i.e. the final output of the classification network is the probability that each of the two input image blocks is the better one, and the image block with the larger probability is judged to be of better quality than the one with the smaller probability.
Step S23: in the training stage, cross entropy is used as the loss function, with the following formula:
where N is the number of image pairs; p is a two-dimensional vector representing the ground-truth quality relationship of the two images; p̂ is also a two-dimensional vector whose first component is the probability that the first image is of better quality than the second and, conversely, whose second component is the probability that the second image is better than the first.
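A sketch of the fusion, the two-node softmax head, and one cross-entropy training step, reusing the Branch module from the sketch above. The concatenation-based fusion and the label convention are illustrative assumptions; the patent does not specify how the two branch features are fused.

import torch
import torch.nn as nn

class TwinComparator(nn.Module):
    """Two shared-weight branches, concatenation-based fusion, and a two-node head."""
    def __init__(self, branch):
        super().__init__()
        self.branch = branch                       # a single Branch instance -> shared weights
        self.head = nn.Linear(2 * 128, 2)          # two nodes, one per "this block is better"

    def forward(self, x1, x2):
        f1, f2 = self.branch(x1), self.branch(x2)
        return self.head(torch.cat([f1, f2], dim=1))    # logits; softmax is applied in the loss

def train_step(model, optimizer, x1, x2, labels):
    """labels: 1 if the first block of the pair is better, 0 otherwise
    (matching the convention of the pair-generation sketch above)."""
    optimizer.zero_grad()
    logits = model(x1, x2)
    loss = nn.functional.cross_entropy(logits, labels)  # softmax + cross entropy
    loss.backward()
    optimizer.step()
    return loss.item()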
Step S3: divide the image under test into image blocks and generate image pairs. Use the trained model to predict the relative quality of all generated image pairs, obtain the quality ranking of all images, and derive the quality score of each image from the ranking.
Step S31: first apply local contrast normalization to all distorted images, then divide them into image blocks of size 64 × 64. Sort all image blocks by the standard deviation (σ) of each block and take the n blocks ranked in the middle as test data.
Step S32: compare the image blocks pairwise according to the following rules: 1) blocks from the same distorted image are not compared with each other; 2) each image block is compared with all other image blocks in the test set except the blocks of its own image;
Step S33: obtain the relative score of each image by counting the results of comparing that image with the other images. The final image quality evaluation score of image A is calculated with the following formula:
where P_{A,B} denotes the result of comparing image A with image B: P_{A,B} = 1 means the quality of A is better than that of B, and otherwise the quality of B is better than that of A; N is the number of comparisons each image takes part in; assuming the test set consists of T test images and n image blocks are chosen from each image, then N = (T − 1) × n; S_A denotes the score of image A. Considering that in practical applications there may be very few test images, a fixed set of test images is provided; at test time the image under test only needs to be compared with the provided images. In that case, N = T × n.
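A sketch of the test-time aggregation described in steps S31 to S33. The exact count of comparisons N is ambiguous in this translation, so the score is simply taken here as the fraction of block-level comparisons that image A wins; batching is omitted for clarity.

import torch

@torch.no_grad()
def score_images(model, blocks_per_image):
    """blocks_per_image: one tensor of shape (n, 1, 64, 64) per test image.
    Returns one score per image: the fraction of block comparisons it wins."""
    model.eval()
    scores = []
    for a, blocks_a in enumerate(blocks_per_image):
        wins, comparisons = 0, 0
        for b, blocks_b in enumerate(blocks_per_image):
            if a == b:                               # blocks of the same image are not compared
                continue
            for block_a in blocks_a:
                for block_b in blocks_b:
                    logits = model(block_a[None], block_b[None])
                    wins += int(logits.argmax(dim=1).item() == 1)   # class 1: first block better
                    comparisons += 1
        scores.append(wins / comparisons)
    return scores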
The above are preferred embodiments of the present invention; any changes made according to the technical solution of the present invention that do not go beyond the scope of the technical solution in the function and effect produced fall within the protection scope of the present invention.

Claims (4)

1. A comparative learning image quality evaluation method based on a twin network, characterized by comprising the following steps:
Step S1: apply local contrast normalization to the images to be trained, then divide them into image blocks and generate image pairs;
Step S2: design the structure of the twin convolutional neural network and use the designed network to train an image quality evaluation model;
Step S3: divide the image to be tested into image blocks and generate image pairs; use the trained model to predict the relative quality of all generated image pairs, obtain the quality ranking of all images, and derive the quality score of each image from the ranking.
2. The comparative learning image quality evaluation method based on a twin network according to claim 1, characterized in that step S1 is implemented as follows:
Step S11: first apply local contrast normalization to the images to be trained; given an intensity image I(i, j), the normalized value is calculated with the following formula:
where C is a constant that prevents the denominator from being zero, K and L define the normalization window size, and ω_{k,l} is a 2D circularly symmetric Gaussian weighting function;
Step S12: divide all locally contrast-normalized images into image blocks of size h × w, sort all image blocks by the standard deviation of each block, and take the middle n blocks as training data;
Step S13: pairwise combine the image blocks selected from all training images to generate image pairs; the pairing follows these principles: 1) image blocks from the same image are not combined; 2) if image block A and image block B have already formed a pair, B is not combined with A again, to avoid data redundancy; 3) a pair is formed only when the quality score difference between the two blocks exceeds a predetermined threshold, otherwise no pair is formed.
3. The comparative learning image quality evaluation method based on a twin network according to claim 1, characterized in that step S2 is implemented as follows:
Step S21: design the structure of a twin convolutional neural network; the network consists of two sub-networks: sub-network I and sub-network II; sub-network I consists of two identical branches that share weights, each branch being composed of N stacked convolutional structures, and the task of sub-network I is to extract the features of the two input image blocks; sub-network II consists of M fully connected layers; the features extracted by sub-network I are fused, the fused features are used as the input of sub-network II, and sub-network II judges the relative quality of the two input images from the fused features;
Step S22: the twin convolutional neural network uses the N stacked convolutional structures to abstract and learn the image information, then extracts image features through two fully connected layers, which are fed to a classification network for optimization of the quality evaluation score; the task of the classification network is to distinguish which of the two input image blocks has better quality, i.e. the final output of the classification network is the probability that each input image block is the better one, and the image block with the larger probability is judged to be of better quality than the one with the smaller probability;
Step S23: in the training stage, cross entropy is used as the loss function, with the following formula:
where N is the number of image pairs; p is a two-dimensional vector representing the quality relationship of the two images; p̂ is also a two-dimensional vector whose first component is the probability that the first image is of better quality than the second and, conversely, whose second component is the probability that the second image is better than the first.
4. The comparative learning image quality evaluation method based on a twin network according to claim 1, characterized in that step S3 is implemented as follows:
Step S31: first apply local contrast normalization to the image under test, then divide it into image blocks of size h × w; sort all image blocks by the standard deviation of each block and take the n blocks ranked in the middle as test data;
Step S32: compare the image blocks pairwise according to the following rules: 1) blocks from the same test image are not compared with each other; 2) each image block is compared with all other image blocks in the test set except the blocks of its own image;
Step S33: obtain the relative score of each image by counting the results of comparing that image with the other images; the final image quality evaluation score of image A is calculated with the following formula:
where P_{A,B} denotes the result of comparing image A with image B: P_{A,B} = 1 means the quality of A is better than that of B, and otherwise the quality of B is better than that of A; N is the number of comparisons each image takes part in; assuming the test set consists of T test images and n image blocks are chosen from each image, then N = (T − 1) × n; S_A denotes the score of image A.
CN201910077607.2A 2019-01-26 2019-01-26 Comparative learning image quality evaluation method based on twin network Active CN109727246B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910077607.2A CN109727246B (en) 2019-01-26 2019-01-26 Comparative learning image quality evaluation method based on twin network

Publications (2)

Publication Number Publication Date
CN109727246A (en) 2019-05-07
CN109727246B (en) 2022-05-13

Family

ID=66300930

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910077607.2A Active CN109727246B (en) 2019-01-26 2019-01-26 Comparative learning image quality evaluation method based on twin network

Country Status (1)

Country Link
CN (1) CN109727246B (en)


Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107993250A (en) * 2017-09-12 2018-05-04 北京飞搜科技有限公司 A kind of fast multi-target pedestrian tracking and analysis method and its intelligent apparatus
CN108510485A (en) * 2018-03-27 2018-09-07 福州大学 It is a kind of based on convolutional neural networks without reference image method for evaluating quality
CN109215028A (en) * 2018-11-06 2019-01-15 福州大学 A kind of multiple-objection optimization image quality measure method based on convolutional neural networks

Cited By (24)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210522A (en) * 2019-05-10 2019-09-06 无线生活(北京)信息技术有限公司 The training method and device of picture quality Fraction Model
CN110111326B (en) * 2019-05-15 2021-01-15 西安科技大学 Reconstructed image quality evaluation method based on ERT system
CN110111326A (en) * 2019-05-15 2019-08-09 西安科技大学 Reconstructed image quality evaluation method based on ERT system
CN110245625A (en) * 2019-06-19 2019-09-17 山东浪潮人工智能研究院有限公司 A kind of field giant panda recognition methods and system based on twin neural network
CN110245625B (en) * 2019-06-19 2021-04-13 浪潮集团有限公司 Twin neural network-based wild panda identification method and system
CN110807757A (en) * 2019-08-14 2020-02-18 腾讯科技(深圳)有限公司 Image quality evaluation method and device based on artificial intelligence and computer equipment
CN110807757B (en) * 2019-08-14 2023-07-25 腾讯科技(深圳)有限公司 Image quality evaluation method and device based on artificial intelligence and computer equipment
CN110533097A (en) * 2019-08-27 2019-12-03 腾讯科技(深圳)有限公司 A kind of image definition recognition methods, device, electronic equipment and storage medium
CN110533097B (en) * 2019-08-27 2023-01-06 腾讯科技(深圳)有限公司 Image definition recognition method and device, electronic equipment and storage medium
CN110781928A (en) * 2019-10-11 2020-02-11 西安工程大学 Image similarity learning method for extracting multi-resolution features of image
CN111127435A (en) * 2019-12-25 2020-05-08 福州大学 No-reference image quality evaluation method based on double-current convolutional neural network
CN111640099A (en) * 2020-05-29 2020-09-08 北京金山云网络技术有限公司 Method and device for determining image quality, electronic equipment and storage medium
CN111709920A (en) * 2020-06-01 2020-09-25 深圳市深视创新科技有限公司 Template defect detection method
CN111583259B (en) * 2020-06-04 2022-07-22 南昌航空大学 Document image quality evaluation method
CN111583259A (en) * 2020-06-04 2020-08-25 南昌航空大学 Document image quality evaluation method
CN112163609A (en) * 2020-09-22 2021-01-01 武汉科技大学 Image block similarity calculation method based on deep learning
CN112613533B (en) * 2020-12-01 2022-08-09 南京南瑞信息通信科技有限公司 Image segmentation quality evaluation network system and method based on ordering constraint
CN112613533A (en) * 2020-12-01 2021-04-06 南京南瑞信息通信科技有限公司 Image segmentation quality evaluation network system, method and system based on ordering constraint
CN112819015A (en) * 2021-02-04 2021-05-18 西南科技大学 Image quality evaluation method based on feature fusion
CN113554597A (en) * 2021-06-23 2021-10-26 清华大学 Image quality evaluation method and device based on electroencephalogram characteristics
CN113554597B (en) * 2021-06-23 2024-02-02 清华大学 Image quality evaluation method and device based on electroencephalogram characteristics
WO2023217117A1 (en) * 2022-05-13 2023-11-16 北京字跳网络技术有限公司 Image assessment method and apparatus, and device, storage medium and program product
CN116128798A (en) * 2022-11-17 2023-05-16 台州金泰精锻科技股份有限公司 Finish forging process for bell-shaped shell forged surface teeth
CN116128798B (en) * 2022-11-17 2024-02-27 台州金泰精锻科技股份有限公司 Finish forging method for bell-shaped shell forging face teeth

Also Published As

Publication number Publication date
CN109727246B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN109727246A (en) Comparative learning image quality evaluation method based on twin network
CN110533631B (en) SAR image change detection method based on pyramid pooling twin network
CN108090902B (en) Non-reference image quality objective evaluation method based on multi-scale generation countermeasure network
Li et al. No-reference image quality assessment with deep convolutional neural networks
CN109086799A (en) A kind of crop leaf disease recognition method based on improvement convolutional neural networks model AlexNet
CN110135459B (en) Zero sample classification method based on double-triple depth measurement learning network
CN106127741B (en) Non-reference picture quality appraisement method based on improvement natural scene statistical model
CN109215028A (en) A kind of multiple-objection optimization image quality measure method based on convolutional neural networks
CN111079594B (en) Video action classification and identification method based on double-flow cooperative network
CN108256482A (en) A kind of face age estimation method that Distributed learning is carried out based on convolutional neural networks
CN110147745A (en) A kind of key frame of video detection method and device
CN111582397A (en) CNN-RNN image emotion analysis method based on attention mechanism
CN109961434A (en) Non-reference picture quality appraisement method towards the decaying of level semanteme
CN108960142B (en) Pedestrian re-identification method based on global feature loss function
CN106127234B (en) Non-reference picture quality appraisement method based on characteristics dictionary
CN111028203B (en) CNN blind image quality evaluation method based on significance
CN111429402A (en) Image quality evaluation method for fusing advanced visual perception features and depth features
CN113255895A (en) Graph neural network representation learning-based structure graph alignment method and multi-graph joint data mining method
CN108259893B (en) Virtual reality video quality evaluation method based on double-current convolutional neural network
CN114612714A (en) Curriculum learning-based non-reference image quality evaluation method
CN110674925A (en) No-reference VR video quality evaluation method based on 3D convolutional neural network
CN111008570B (en) Video understanding method based on compression-excitation pseudo-three-dimensional network
CN111144462A (en) Unknown individual identification method and device for radar signals
CN113411566A (en) No-reference video quality evaluation method based on deep learning
CN117516937A (en) Rolling bearing unknown fault detection method based on multi-mode feature fusion enhancement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant