CN110351548A - Stereo image quality evaluation method based on deep learning and disparity map weighting guidance - Google Patents
Stereo image quality evaluation method based on deep learning and disparity map weighting guidance
- Publication number
- Publication number: CN110351548A (application number CN201910568557.8A)
- Authority
- CN
- China
- Prior art keywords
- image
- branch
- stereo
- anaglyph
- blending
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N17/00—Diagnosis, testing or measuring for television systems or their details
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
Abstract
The present invention discloses a stereo image quality evaluation method based on deep learning and disparity map weighted guidance, comprising the following steps: S1, building a dual-branch neural network from the independent left and right viewpoint images of a stereo image, the dual-branch network comprising a fused-image branch and a disparity-image branch; S2, extracting image features from the fused-image branch and the disparity-image branch respectively; S3, introducing an SE module for the first time to weight the image features of the disparity-image branch and the fused-image branch, thereby completing the correction of the features in the fused-image branch. The method predicts quality more accurately and improves the efficiency of stereo image quality assessment.
Description
Technical field
The invention belongs to the field of image processing and relates to the application of deep learning in stereo image quality evaluation, in particular to a stereo image quality evaluation method based on deep learning and disparity map weighted guidance.
Background art
In recent years, with the continuous development of 3D technology, research on stereo images has received increasing attention. Because stereo images may suffer distortion during transmission, their quality is degraded, and this degradation directly affects the viewer's visual perception of the stereo image. How to evaluate stereo image quality effectively has therefore become one of the key problems in stereo image processing and computer vision. Against this background, the present invention proposes a stereo image quality evaluation model based on deep learning and disparity map weighted guidance.
Existing stereo image quality evaluation algorithms can be divided into three kinds according to their degree of dependence on a reference image: full-reference, reduced-reference and no-reference. Full-reference algorithms predict the quality of a distorted image using structural similarity or other indices computed between the reference and distorted images, while reduced-reference algorithms do not require complete pixel-level information about the reference image and thus depend on it less. No-reference algorithms predict the final quality score without any information from the reference image. In practical applications an undistorted reference image is usually difficult to obtain, so research on no-reference stereo image quality evaluation attracts the most attention.
No-reference stereo image quality evaluation methods generally fall into three categories: feature-extraction methods [1-2], sparse-representation methods [3-4] and deep-learning methods [5-8]. Feature-extraction methods usually extract certain statistical features from the stereo image in the traditional way and then use a machine learning algorithm to predict the quality score. Sparse-representation methods typically build a dictionary to represent the statistical features sparsely, and have some advantage in computational complexity. Both classes rely on features designed by humans; because our understanding of the human visual system and natural scene statistics is still incomplete, such algorithms are limited in practice. With the rapid development of artificial intelligence, deep-learning methods have in recent years entered the field of stereo image quality evaluation one after another; by replacing hand-crafted feature extraction with neural networks, they remove the limitation of manually designed features and usually achieve better performance.
The design of the invention is inspired by the human binocular vision mechanism, i.e. binocular fusion and binocular rivalry in the brain. A fused image correlates with the binocular vision mechanism better than the independent left and right viewpoint images, so the fused image is chosen as the input of one network branch. Some information is inevitably lost when the left and right viewpoint images are merged, so the disparity map is used to compensate the fused image, i.e. the disparity map is the input of the other network branch. Furthermore, the features extracted from the fused image by the convolutional neural network have different degrees of importance, so it is necessary to weight the extracted features differently. We therefore apply an improved squeeze-and-excitation module (SE module) to enhance the representation ability of the network: the disparity map serves as one input of the SE module and guides the weighting of the feature maps obtained by the fused-image branch, thereby recalibrating the fused-image feature maps. Since both the fused-image branch and the disparity-map branch contribute to quality prediction, the two branches are finally connected to obtain the final predicted score.
The invention proposes a stereo image quality evaluation model based on deep learning and disparity map weighted guidance. First, reflecting how humans view stereo images, the independent binocular viewpoint images are merged to obtain a fused image, and a disparity map is obtained by a stereo matching algorithm; the fused image and the disparity map serve as the inputs of the two branches of the neural network, and features are learned by convolutional neural networks. Second, based on the fact that the features of the fused image have different degrees of importance, the features extracted from the disparity map are used as the input of the improved SE module to recalibrate the feature maps of the fused image.
Summary of the invention
To solve the problems of the prior art, the invention takes the human binocular vision mechanism as its design basis and, based on the fact that the features extracted by a neural network have different degrees of importance, establishes an effective and reasonable stereo image quality evaluation model based on deep learning and disparity map weighted guidance. The model predicts quality more accurately without relying on the original reference image; it can replace subjective evaluation to a certain extent, improves the efficiency of stereo image quality evaluation, and lays a foundation for follow-up work.
In view of the problems existing in the prior art, the invention adopts the following technical scheme:
A stereo image quality evaluation method based on deep learning and disparity map weighted guidance comprises the following steps:
S1, building a dual-branch neural network from the independent left and right viewpoint images of a stereo image, the dual-branch network comprising a fused-image branch and a disparity-image branch;
S2, performing first-stage feature extraction on the fused-image branch and the disparity-image branch respectively;
S3, introducing an SE module for the first time to weight the image features of the disparity-image branch and the fused-image branch, thereby completing the correction of the features in the fused-image branch;
S4, further extracting the first-stage features of the disparity-image branch and the corrected fused-image branch features, i.e. completing the second-stage feature extraction;
S5, introducing an SE module for the second time to weight the second-stage features of the disparity-image branch against the corrected features of the fused-image branch, completing the second-stage correction;
S6, connecting the features finally extracted by the two branches to complete the quality evaluation of the stereo image.
In steps S3 and S5 the weighted correction of the fused-image feature maps is realized by a modified SE module: on the basis of the original SE module structure a new input is introduced, i.e. the feature maps of the disparity-image branch serve as an additional input of the modified SE module and guide the learning of the weights that correct the fused-image branch feature maps.
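As a minimal sketch (not the patented implementation), the data flow of the modified SE module can be written in plain Python: `fused_feats` and `disparity_feats` are hypothetical C × H × W feature maps stored as nested lists, and the two fully connected layers are plain matrix-vector products whose weights would in practice be learned by back-propagation.

```python
import math

def global_avg_pool(feat):
    # per-channel descriptor: average of each H x W feature map
    return [sum(sum(row) for row in ch) / (len(ch) * len(ch[0])) for ch in feat]

def fc(vec, weights):
    # plain fully connected layer; weights has shape out x in
    return [sum(w * v for w, v in zip(row, vec)) for row in weights]

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def modified_se(fused_feats, disparity_feats, w_reduce, w_expand):
    """Recalibrate fused-image feature maps using the disparity branch.

    Unlike the original SE module, the channel descriptor is computed from
    the disparity-branch features, then squeezed (reduction factor r),
    ReLU-ed, expanded back to C channels, and passed through a sigmoid to
    give per-channel weights in (0, 1) that scale the fused-image features.
    """
    z = global_avg_pool(disparity_feats)             # squeeze (from disparity)
    s = [max(0.0, v) for v in fc(z, w_reduce)]       # FC with reduction + ReLU
    weights = [sigmoid(v) for v in fc(s, w_expand)]  # FC back to C + sigmoid
    # excite: scale each fused-image channel by its learned weight
    return [[[w * v for v in row] for row in ch]
            for w, ch in zip(weights, fused_feats)]
```

With C = 2 channels and reduction factor r = 2, `w_reduce` would be 1 × 2 and `w_expand` 2 × 1; the shapes here are illustrative only.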
Beneficial effects
The dual-column dense convolutional neural network with the improved SE module proposed by the invention is designed on the basis of the binocular vision mechanism. Taking into account the fact that the features extracted by a convolutional neural network have different degrees of importance, an effective means of weighting the different features is applied. Experimental results show that the proposed method performs excellently in stereo image quality evaluation.
The stereo image quality evaluation model based on deep learning and disparity map weighted guidance of the invention was tested on a public stereo image database; the predicted quality scores obtained in the experiments are very close to the standard subjective scores, and both the correlation and the stability are superior to most current stereo image quality evaluation algorithms.
Brief description of the drawings
Fig. 1 is the overall framework of the network used by the invention;
Fig. 2 is the structure of the SE module of the invention;
Fig. 3 is the structure of the 3-layer dense block of the invention.
Specific embodiments
The invention is tested on the public LIVE stereo image database. The LIVE database consists of two separate parts, Phase I and Phase II; the stereo images are presented as pairs of left and right viewpoint images of size 360 × 640. Phase I contains 20 reference image pairs and 365 distorted image pairs, mainly with symmetric distortion, i.e. the left and right viewpoint images have approximately equal distortion levels. Phase II contains 8 reference image pairs and 360 distorted image pairs, including both symmetric and asymmetric distortion; in the asymmetric case the distortion levels of the left and right viewpoint images differ greatly. The LIVE database contains five distortion types: Gaussian blur, JP2K compression, JPEG compression, Rayleigh fast fading and additive white Gaussian noise.
The technical method is described in detail below.
The invention takes the human binocular vision mechanism as its design basis, i.e. the brain perceives stereo images through binocular fusion and binocular rivalry, and it exploits the fact that the features extracted by a neural network have different degrees of importance; on this basis it proposes a stereo image quality evaluation model based on deep learning and disparity map weighted guidance. A fused image and a disparity map are first obtained from the independent left and right viewpoint images by dedicated algorithms, and the basic dual-column neural network framework is built on them. An improved SE module is then added: the features extracted by the disparity-map branch network guide the weighting of the features extracted by the fused-image branch network, making the training of the fused-image branch more efficient. Finally the two branch networks are connected to complete the final prediction of stereo image quality. The detailed process is shown in Fig. 1.
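To make the two network inputs concrete, here is a deliberately simplified stand-in for the dedicated algorithms (the patent's actual binocular fusion model and stereo matching algorithm are more elaborate): pixel-wise averaging as the fused image, and a one-dimensional absolute-difference match as the disparity map. All function names are illustrative.

```python
def fuse(left, right):
    # toy binocular "fusion": pixel-wise average of the two viewpoint images
    return [[(a + b) / 2.0 for a, b in zip(lr, rr)]
            for lr, rr in zip(left, right)]

def disparity_map(left, right, max_disp=3):
    # toy stereo matching: for each pixel choose the horizontal shift d that
    # minimizes the absolute intensity difference |L(x) - R(x - d)|
    disp = []
    for lr, rr in zip(left, right):
        row = []
        for x, lv in enumerate(lr):
            best_d, best_cost = 0, float("inf")
            for d in range(0, min(max_disp, x) + 1):
                cost = abs(lv - rr[x - d])
                if cost < best_cost:
                    best_d, best_cost = d, cost
            row.append(best_d)
        disp.append(row)
    return disp
```

A real system would use a window-based or global matching cost rather than a single-pixel difference; the sketch only shows where the fused image and the disparity map come from before they enter the two branches.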
The specific steps are as follows:
1. Dual-column neural network framework:
The dual-column neural network framework of the invention takes the fused image and the disparity map as the inputs of its two branch networks; both are obtained from the left and right viewpoint images of the same stereo image by specific algorithms. The fused image is computed from a binocular fusion model that accounts for binocular rivalry, binocular fusion and the multi-channel character of vision; the disparity map is obtained by a stereo matching algorithm. The network architecture is built around 3-layer dense blocks, which strengthen the back-propagation of features and promote feature reuse. As shown in Fig. 1, each branch network contains two convolution modules and two 3-layer dense blocks; a convolution module consists of a batch normalization (BN) layer, a convolutional layer, a ReLU activation function and a pooling layer, and a 3-layer dense block contains two convolutional layers. The first convolution module and the first dense block of each branch perform the first-stage feature extraction on the fused image and the disparity image; the second convolution module and the second dense block perform the second-stage feature extraction.
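The dense connectivity of a 3-layer dense block can be sketched as follows; a trivial channel-averaging function stands in for a real convolutional layer (an illustrative assumption, not the network's actual convolution):

```python
def conv(feats, n_out):
    # stand-in for a convolutional layer: collapses the input channels to
    # their mean and repeats it n_out times (a real layer learns filters)
    c, h, w = len(feats), len(feats[0]), len(feats[0][0])
    avg = [[sum(feats[k][i][j] for k in range(c)) / c for j in range(w)]
           for i in range(h)]
    return [avg for _ in range(n_out)]

def dense_block(x):
    # 3-layer dense block with two convolutional layers: each layer sees the
    # concatenation of the block input and all preceding outputs, which
    # shortens back-propagation paths and promotes feature reuse
    y1 = conv(x, 2)        # layer 1 input: x
    y2 = conv(x + y1, 2)   # layer 2 input: concat(x, y1)
    return x + y1 + y2     # block output: concat(x, y1, y2)
```

The key point is the concatenations: the block input reaches every later layer directly, which is what gives dense blocks their short gradient paths.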
2. Recalibrating the fused-image feature maps with disparity-map features:
Considering that the features extracted by the neural network have different degrees of importance, an SE module is chosen to weight the different features of the image. The SE module is introduced twice in the invention: the first time after the fused image and the disparity image complete the first-stage feature extraction, and the second time after the two branch networks complete the second-stage feature extraction. The original SE module structure is shown in Fig. 2(a). Instead of letting the fused-image feature maps recalibrate themselves, we improve the original SE module as shown in Fig. 2(b): the features extracted by the disparity-map branch network serve as one input of the SE module and guide the weighting of the fused-image feature maps, completing the recalibration of the feature maps. The operations in the blue dashed box are called the SE channel; it consists of a global pooling operation, given by formula (1), a fully connected layer with reduction factor r, a ReLU unit, and a fully connected layer with amplification factor r. Finally a sigmoid function generates weights between 0 and 1 for the feature maps of the fused image.
Here H × W is the size of a feature map and f(x, y) is its value at coordinate (x, y).
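Formula (1), rendered as an image in the original, is the standard SE global average pooling; reconstructed from the definitions of H × W and f(x, y) given above, it reads:

```latex
z = \frac{1}{H \times W} \sum_{x=1}^{H} \sum_{y=1}^{W} f(x, y) \qquad (1)
```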
3. Final prediction of the stereo image score:
The fused-image branch network and the disparity-map branch network each learn features of the stereo image, and both contribute to quality prediction. The disparity-map branch compensates the fused-image branch, and combining the two makes the prediction of the quality score more reliable. Therefore, at the end of the neural network the fused-image branch and the disparity-map branch are connected by channel concatenation ('Concat'), completing the compensation of the fused image by the disparity map. The final quality score is then predicted by a fully connected block, whose structure is similar to a convolution module except that it uses fully connected layers instead of convolutional layers. We use the Euclidean distance between the predicted and subjective scores as the loss function of the network.
During training, the loss function is minimized by the back-propagation algorithm to obtain the optimal network parameters.
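The loss formula is rendered as an image in the original; a Euclidean (L2) loss of the kind described, together with the gradient that back-propagation would use, can be sketched as follows (the 1/N normalization is our assumption, since the patent does not show the exact expression):

```python
def euclidean_loss(pred, target):
    # mean squared Euclidean distance between predicted quality scores and
    # subjective ground-truth scores over a batch of n stereo images
    n = len(pred)
    return sum((p - t) ** 2 for p, t in zip(pred, target)) / n

def loss_grad(pred, target):
    # dL/dpred, the error signal that back-propagation feeds into the network
    n = len(pred)
    return [2.0 * (p - t) / n for p, t in zip(pred, target)]
```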
4. Stereo image quality evaluation results and analysis:
The experiments of the invention are carried out on the public LIVE stereo image database described above. Table 1 presents our experimental results together with those of 12 existing stereo quality evaluation algorithms with good performance; the comparison shows that the proposed stereo image quality evaluation algorithm outperforms most existing stereo image quality evaluation algorithms.
Table 1. Performance on the LIVE database
Table 2 lists the results of the three evaluation indices under different distortion types. Our method clearly performs excellently on Phase I; although it does not achieve the best performance on Phase II, it is still better than several algorithms. This shows that our algorithm adapts to stereo images with different distortion types and predicts quality scores accurately and efficiently.
Table 2. Performance for different distortion types on the LIVE database
To further demonstrate the performance advantage of the proposed method, we conducted corresponding comparison (ablation) experiments, with results shown in Table 3. Configuration (1) uses only the fused-image branch network, whose feature maps are recalibrated by themselves; configuration (2) adds the disparity-map branch to (1), but the disparity features do not guide the feature maps of the fused-image network and the branches are only connected at the end of the network; configuration (3) uses the disparity-map branch only to recalibrate the feature maps of the fused-image branch, without combining the two branches at the end. The experimental results in Table 3 show that the proposed stereo image quality evaluation model based on deep learning and disparity map weighted guidance achieves superior performance.
Table 3. Comparison experiment results
It should be pointed out that those of ordinary skill in the art can make various modifications and improvements without departing from the concept of the invention, and all of these fall within the protection scope of the invention. The protection scope of the patent should therefore be determined by the appended claims.
Claims (2)
1. A stereo image quality evaluation method based on deep learning and disparity map weighted guidance, characterized by comprising the following steps:
S1, building a dual-branch neural network from the independent left and right viewpoint images of a stereo image, the dual-branch network comprising a fused-image branch and a disparity-image branch;
S2, performing first-stage feature extraction on the fused-image branch and the disparity-image branch respectively;
S3, introducing an SE module for the first time to weight the image features of the disparity-image branch and the fused-image branch, thereby completing the correction of the features in the fused-image branch;
S4, further extracting the first-stage features of the disparity-image branch and the corrected fused-image branch features, i.e. completing the second-stage feature extraction;
S5, introducing an SE module for the second time to weight the second-stage features of the disparity-image branch against the corrected features of the fused-image branch, completing the second-stage correction;
S6, connecting the features finally extracted by the two branches to complete the quality evaluation of the stereo image.
2. The stereo image quality evaluation method based on deep learning and disparity map weighted guidance according to claim 1, characterized in that in steps S3 and S5 the weighted correction of the fused-image feature maps is realized by a modified SE module: on the basis of the original SE module structure a new input is introduced, i.e. the feature maps of the disparity-image branch serve as an additional input of the modified SE module and guide the learning of the weights that correct the fused-image branch feature maps.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910568557.8A CN110351548B (en) | 2019-06-27 | 2019-06-27 | Stereo image quality evaluation method guided by deep learning and disparity map weighting |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110351548A true CN110351548A (en) | 2019-10-18 |
CN110351548B CN110351548B (en) | 2020-12-11 |
Family
ID=68176883
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910568557.8A Expired - Fee Related CN110351548B (en) | 2019-06-27 | 2019-06-27 | Stereo image quality evaluation method guided by deep learning and disparity map weighting |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110351548B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110944165A (en) * | 2019-11-13 | 2020-03-31 | 宁波大学 | Stereoscopic image visual comfort level improving method combining perceived depth quality |
CN111667058A (en) * | 2020-06-23 | 2020-09-15 | 新疆爱华盈通信息技术有限公司 | Dynamic selection method of multi-scale characteristic channel of convolutional neural network |
CN111950655A (en) * | 2020-08-25 | 2020-11-17 | 福州大学 | Image aesthetic quality evaluation method based on multi-domain knowledge driving |
CN113810676A (en) * | 2020-06-16 | 2021-12-17 | 佳能株式会社 | Image processing apparatus, method, system, medium, and method of manufacturing learning model |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120008855A1 (en) * | 2010-07-08 | 2012-01-12 | Ryusuke Hirai | Stereoscopic image generation apparatus and method |
KR20150037668A (en) * | 2013-09-30 | 2015-04-08 | 시스벨 테크놀로지 에스.알.엘. | Method and device for edge shape enforcement for visual enhancement of depth image based rendering of a three-dimensional video stream |
CN109345502A (en) * | 2018-08-06 | 2019-02-15 | 浙江大学 | A kind of stereo image quality evaluation method based on disparity map stereochemical structure information extraction |
CN109714592A (en) * | 2019-01-31 | 2019-05-03 | 天津大学 | Stereo image quality evaluation method based on binocular fusion network |
Cited By (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110944165A (en) * | 2019-11-13 | 2020-03-31 | 宁波大学 | Stereoscopic image visual comfort level improving method combining perceived depth quality |
CN113810676A (en) * | 2020-06-16 | 2021-12-17 | 佳能株式会社 | Image processing apparatus, method, system, medium, and method of manufacturing learning model |
CN111667058A (en) * | 2020-06-23 | 2020-09-15 | 新疆爱华盈通信息技术有限公司 | Dynamic selection method of multi-scale characteristic channel of convolutional neural network |
CN111950655A (en) * | 2020-08-25 | 2020-11-17 | 福州大学 | Image aesthetic quality evaluation method based on multi-domain knowledge driving |
CN111950655B (en) * | 2020-08-25 | 2022-06-14 | 福州大学 | Image aesthetic quality evaluation method based on multi-domain knowledge driving |
Also Published As
Publication number | Publication date |
---|---|
CN110351548B (en) | 2020-12-11 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||
GR01 | Patent grant | ||
CF01 | Termination of patent right due to non-payment of annual fee |
Granted publication date: 20201211 |