CN107483920A - A panoramic video assessment method and system based on multi-layer quality factors - Google Patents

A panoramic video assessment method and system based on multi-layer quality factors

Info

Publication number
CN107483920A
CN107483920A (application CN201710683578.5A; granted as CN107483920B)
Authority
CN
China
Prior art keywords
video
quality factor
interest
matrix
Prior art date
Legal status (assumed; not a legal conclusion)
Granted
Application number
CN201710683578.5A
Other languages
Chinese (zh)
Other versions
CN107483920B (en)
Inventor
王晶
杨舒
费泽松
张博
Current Assignee (listed assignees may be inaccurate)
Beijing Institute of Technology BIT
Original Assignee
Beijing Institute of Technology BIT
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Beijing Institute of Technology BIT filed Critical Beijing Institute of Technology BIT
Priority to CN201710683578.5A priority Critical patent/CN107483920B/en
Publication of CN107483920A publication Critical patent/CN107483920A/en
Application granted granted Critical
Publication of CN107483920B publication Critical patent/CN107483920B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/222Studio circuitry; Studio devices; Studio equipment
    • H04N5/262Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects ; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265Mixing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N2017/008Diagnosis, testing or measuring for television systems or their details for television teletext

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • General Health & Medical Sciences (AREA)
  • Image Processing (AREA)
  • Compression Or Coding Systems Of Tv Signals (AREA)

Abstract

The present invention relates to a panoramic video assessment method and system based on multi-layer quality factors, belonging to the field of multimedia technology. The invention takes as input a lossless panoramic video and a damaged video of the same content, and outputs a quality assessment result for the damaged video, realizing automatic assessment of the damaged video. The idea is to compute multiple quality factors based on regions of interest at multiple levels, to address the problem that important regions in a panoramic video strongly influence perceived video quality; the multi-layer quality factors are then combined by a fusion model whose parameters can be learned from subjective data, to account for the subjective preferences of panoramic video users. The method is well suited to panoramic video quality evaluation: because it considers and fuses the influence of multi-level user regions of interest on video quality, the quality estimate obtained for the damaged video agrees more closely with the results of subjective experiments, making it better suited to automatic evaluation of panoramic video quality.

Description

A panoramic video assessment method and system based on multi-layer quality factors
Technical field
The present invention relates to a panoramic video quality evaluation method, and more particularly to a panoramic video assessment method and system based on multi-layer quality factors, belonging to the field of multimedia technology.
Background technology
With the development of virtual reality (VR) technology, ordinary planar video is gradually being replaced by 360-degree panoramic video. A panoramic video provides, for a fixed observation point, a view that can be freely navigated over a 360-degree horizontal range and a 180-degree vertical range, giving VR users a stronger sense of immersion and a more lifelike experience. With the spread of this new multimedia service, the quality of experience of panoramic video is of great significance for the development of key technologies in virtual reality systems and for the optimization of transmission networks. Panoramic video quality evaluation is challenging, however: compared with ordinary planar video, the experience of a panoramic video viewer is affected by more factors, including psychological and physiological factors and subjective factors such as regions of interest. Traditional video quality evaluation methods cannot accurately reflect the quality of panoramic video. Research on assessment methods and systems for panoramic video is therefore of great significance for the development and popularization of VR technology.
The content of the invention
The purpose of the present invention is to assess panoramic video quality in virtual reality systems. To this end, a panoramic video assessment method and system based on multi-layer quality factors are proposed. The system takes as input a lossless panoramic video and a damaged video of the same content, and outputs a quality assessment result for the damaged video, realizing automatic assessment of the damaged video.
The idea of the invention is to compute multiple quality factors based on regions of interest at multiple levels, to address the problem that important regions in a panoramic video strongly influence perceived quality; the multi-layer quality factors are then combined by a fusion model whose parameters can be learned from subjective data, to account for the subjective preferences of panoramic video users.
The purpose of the present invention is achieved by the following technical scheme: a panoramic video assessment method and system based on multi-layer quality factors, comprising a panoramic video assessment method based on multi-layer quality factors and a panoramic video assessment system based on multi-layer quality factors.
The panoramic video assessment method based on multi-layer quality factors is hereinafter referred to as "this method"; the panoramic video assessment system based on multi-layer quality factors is referred to as "the system".
The system comprises a panoramic video input module, a region-of-interest extraction module, a multi-layer quality factor computation module, a temporal processing module and a multi-layer quality factor fusion module.
The modules of the system are connected as follows:
the panoramic video input module is connected to the region-of-interest extraction module; the region-of-interest extraction module is connected to the multi-layer quality factor computation module; the multi-layer quality factor computation module is connected to the temporal processing module; and the temporal processing module is connected to the multi-layer quality factor fusion module.
The functions of the modules are as follows:
the panoramic video input module decodes the input video files to obtain pairs of panoramic frame images; the region-of-interest extraction module extracts the multi-layer region-of-interest matrices of a panoramic image; the multi-layer quality factor computation module computes the quality factors of a panoramic image from the region-of-interest matrices; the temporal processing module computes the quality factors of the panoramic video from the per-frame quality factors; and the multi-layer quality factor fusion module fuses the quality factors of the panoramic video to obtain the automatic assessment result of the damaged video.
A panoramic video assessment method based on multi-layer quality factors comprises the following steps:
Step 1: the panoramic video input module performs video processing and decoding on a pair of panoramic video source files input to the system, obtaining pairs of panoramic frame images;
wherein the pair of input panoramic video source files consists of a lossless reference video S' and a damaged video S with the same content as the reference video; the damage in the damaged video S includes artificially introduced damage caused mainly by blurring, noise addition and encoding, and also damage arising in network transmission caused mainly by packet loss and bit errors;
wherein the lossless reference video is also simply called the reference video;
Step 1.1: judge whether the pair of panoramic video source files input to the system have the same resolution, frame rate and duration, and the same mapping format (mainly equirectangular mapping, cube mapping and pyramid mapping), and act according to the result:
1.1A: if the pair of panoramic video source files input to the system have the same resolution, frame rate and duration, and the same mapping format, skip to step 1.2;
1.1B: if the pair of panoramic video source files input to the system do not have the same resolution, frame rate and duration, and the same mapping format, perform video processing on the damaged video in the panoramic video input module, mainly pixel interpolation, frame duplication and mapping transformation, so that the damaged video has the same resolution, frame rate and duration as the reference video, and the same mapping format;
Step 1.2: using a decoding tool such as ffmpeg, perform decoding according to the coding format of the pair of panoramic video source files input to the system, decoding each panoramic video into a sequence of frame images to obtain the pairs of panoramic frame images; if the number of frames of each source file is N, N pairs of panoramic frame images are obtained, comprising N reference frame images from the reference video and N damaged frame images from the damaged video, each panoramic frame image having width W and height H;
Step 2: the region-of-interest extraction module performs region-of-interest extraction on the panoramic frame images output by step 1 using image processing and computer vision algorithms, and outputs the multi-layer region-of-interest matrix sets;
specifically, region-of-interest (ROI) extraction is performed on the reference frame image I' of each panoramic frame image pair output by step 1;
wherein the multi-layer region-of-interest matrix sets are the union of the low-level set {M_l^1, M_l^2, …, M_l^{n_l}}, the middle-level set {M_m^1, M_m^2, …, M_m^{n_m}}, the high-level set {M_h^1, M_h^2, …, M_h^{n_h}}, the temporal-level set {M_t^1, M_t^2, …, M_t^{n_t}} and the mapping-level matrix M_p; each M is a two-dimensional matrix of size H × W, i.e. one region-of-interest matrix of the image I'; the elements of M take values in [0,1], and a larger value of M(i,j), the element in row i and column j, indicates that the pixel I'(i,j) at the corresponding position of the reference frame image I' is more likely to be noticed by the viewer and has a larger influence on video quality; the subscripts l, m, h, t, p of M indicate that the matrix is obtained by a low-level, middle-level, high-level, temporal-level or mapping-level region-of-interest extraction method respectively, and the superscripts 1, 2, …, n indicate that the matrix is obtained by the n-th method of that level, where n_l, n_m, n_h, n_t are integers greater than or equal to 1; that is, the low, middle, high and temporal levels may each use one or more methods to obtain one or more region-of-interest matrices, while the mapping level obtains its region-of-interest matrix from only one method;
the matrix counts above are for a single reference frame image I'; for the N reference frame images of the N panoramic frame image pairs output by step 1, the number of region-of-interest matrices output by step 2 is (n_l + n_m + n_h + n_t + 1) × N;
The multi-layer region-of-interest matrices are produced by steps 2.1 to 2.5 respectively, as follows:
Step 2.1: compute the low-level regions of interest of the reference frame image using pixel-level image processing methods, outputting the low-level region-of-interest matrix set {M_l^1, …, M_l^{n_l}};
wherein the pixel-level image processing methods are mainly based on color contrast and edge detection;
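As a concrete illustration of step 2.1, the sketch below computes a toy low-level ROI matrix from global color contrast. The patent does not fix a particular contrast formula, so the contrast-to-frame-mean rule here is an assumption; only the output contract matches the text (an H × W matrix with values in [0, 1], larger where pixels stand out).

```python
import numpy as np

def low_level_roi(frame):
    """Toy low-level ROI matrix: global color-contrast saliency.

    `frame` is an H x W grayscale float array in [0, 255]. Each pixel's
    saliency is its absolute contrast to the mean intensity of the frame,
    normalized to [0, 1] as required of the matrices M in step 2.
    """
    contrast = np.abs(frame.astype(np.float64) - frame.mean())
    peak = contrast.max()
    return contrast / peak if peak > 0 else np.zeros_like(contrast)

# Example: a bright square on a dark background is the most salient region.
frame = np.zeros((8, 8))
frame[2:4, 2:4] = 200.0
M_l = low_level_roi(frame)
```

A real implementation would typically combine several such cues (e.g. an edge-detection map alongside the contrast map), giving one matrix per method as the text allows.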
Step 2.2: compute the middle-level regions of interest of the reference frame image using superpixel processing methods, outputting the middle-level region-of-interest matrix set {M_m^1, …, M_m^{n_m}};
wherein the superpixel processing methods are mainly based on ranking the saliency of superpixel blocks;
Step 2.3: compute the high-level regions of interest of the reference frame image using computer vision methods; these are usually the regions viewers readily attend to, mainly people, animals and vehicles; output the high-level region-of-interest matrix set {M_h^1, …, M_h^{n_h}};
wherein the computer vision methods are mainly based on target segmentation and semantic segmentation;
Step 2.4: compute the temporal-level regions of interest from two adjacent reference frames using image processing methods; these are usually the moving objects that viewers readily attend to; output the temporal-level region-of-interest matrix set {M_t^1, …, M_t^{n_t}};
wherein the image processing methods are mainly based on optical flow estimation and motion estimation;
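A minimal stand-in for step 2.4, assuming a simple frame-difference motion map rather than the optical-flow or motion-estimation methods the text names; it only demonstrates the intended output: a [0, 1] matrix that highlights pixels that move between adjacent reference frames.

```python
import numpy as np

def temporal_roi(prev_frame, next_frame):
    """Toy temporal-level ROI matrix from two adjacent reference frames.

    Uses the normalized absolute frame difference as a stand-in for
    optical-flow / motion-estimation saliency: pixels that change between
    frames (moving objects) get values close to 1.
    """
    diff = np.abs(next_frame.astype(np.float64) - prev_frame.astype(np.float64))
    peak = diff.max()
    return diff / peak if peak > 0 else np.zeros_like(diff)

prev = np.zeros((4, 6))
nxt = np.zeros((4, 6))
nxt[1, 1:4] = 150.0      # an "object" appearing between the two frames
M_t = temporal_roi(prev, nxt)
```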
Step 2.5: according to the mapping format of the pair of panoramic video source files input to the system, select the corresponding weight matrix and output it as the mapping-level region-of-interest matrix M_p;
for the equirectangular mapping format, the polar weights of the corresponding weight matrix are smaller than the equatorial weights; for the pyramid mapping format, the base-face weights of the corresponding weight matrix are larger than the side-face weights;
wherein the mapping-level region-of-interest matrix output by step 2.5 depends only on the video mapping format, not on the frame image itself: once the mapping format of the input is determined, the region-of-interest matrix is identical for every frame;
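For the equirectangular case of step 2.5, a weight matrix with the stated property can be sketched as below. The patent's exact weighting formula is not reproduced on this page, so the cosine-of-latitude rule here is an assumption (it is a common choice for equirectangular panoramas); what matters is that equatorial rows weigh more than polar rows and that the matrix depends only on the mapping format.

```python
import math

def equirect_weight_matrix(height, width):
    """Mapping-level ROI matrix for an equirectangular panorama.

    Each row's weight is the cosine of its latitude, so rows near the
    equator (image center) weigh close to 1 and rows near the poles
    (top/bottom edges) weigh close to 0. The matrix is content-independent,
    matching the text: one matrix serves every frame of the video.
    """
    rows = []
    for i in range(height):
        # latitude of row i, from near +pi/2 (top) to near -pi/2 (bottom)
        lat = (0.5 - (i + 0.5) / height) * math.pi
        rows.append([math.cos(lat)] * width)
    return rows

M_p = equirect_weight_matrix(8, 16)
```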
Step 3: the multi-layer quality factor computation module, using a quality evaluation algorithm and the multi-layer region-of-interest matrix sets output by step 2, computes the weighted differences of the panoramic frame image pairs output by step 1, and outputs the multi-layer quality factor sets of the N frame image pairs;
wherein a multi-layer quality factor set is the union of the low-level set {f_l^1, …, f_l^{n_l}}, the middle-level set {f_m^1, …, f_m^{n_m}}, the high-level set {f_h^1, …, f_h^{n_h}}, the temporal-level set {f_t^1, …, f_t^{n_t}} and the mapping-level quality factor f_p, where each f is a number greater than 0 whose subscripts and superscripts correspond to those of the matrices M in step 2, indicating that the quality factor is obtained from the corresponding region-of-interest matrix; the processing is completed by the following steps:
Step 3.1: from the panoramic frame image pairs output by step 1 and the low-level, middle-level, high-level, temporal and mapping region-of-interest matrices output by step 2, form N groups in frame order, each group comprising one lossless panorama, one damaged panorama and the multi-layer region-of-interest matrix set;
Step 3.2: compute the quality difference matrix D between the lossless and damaged panoramas using a pixel-difference assessment method; D is a two-dimensional matrix of size H × W, where D(i,j) represents the color/luminance difference between the lossless and damaged panoramas at pixel position (i,j), and can be computed, for example, by the Euclidean distance method;
Step 3.3: multiply each region-of-interest matrix M element-wise with the difference matrix D to obtain the set of weighted difference matrices;
Step 3.4: using a traditional objective image quality assessment method, map each weighted difference matrix to a multi-layer quality factor of the damaged image, obtaining the multi-layer quality factor set of the frame;
wherein the traditional objective image quality assessment methods are mainly based on MSE, PSNR and SSIM;
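Steps 3.2 to 3.4 can be sketched end to end for the PSNR case. The element-wise weighting D(i,j) = (I(i,j) − I'(i,j))² · M(i,j) follows formula (2) of the embodiment; the normalization by the sum of ROI weights is an assumption of this sketch, since the patent does not spell out that detail here.

```python
import numpy as np

def weighted_psnr(ref, dmg, roi, max_val=255.0):
    """Map one ROI-weighted difference matrix to a PSNR-style quality factor.

    D(i,j) = (dmg - ref)^2 weighted element-wise by the ROI matrix, then
    converted to PSNR via an ROI-weighted mean squared error.
    """
    ref = ref.astype(np.float64)
    dmg = dmg.astype(np.float64)
    d = (dmg - ref) ** 2 * roi            # weighted difference matrix (step 3.3)
    wmse = d.sum() / roi.sum()            # ROI-weighted mean squared error
    if wmse == 0:
        return float("inf")               # identical images
    return 10.0 * np.log10(max_val ** 2 / wmse)

ref = np.full((4, 4), 100.0)
dmg = ref + 10.0                          # uniform error of 10 per pixel
roi = np.ones((4, 4))                     # trivial ROI: all pixels equal
f = weighted_psnr(ref, dmg, roi)          # wMSE = 100 -> about 28.13 dB
```

Running this once per region-of-interest matrix in a group yields that frame's multi-layer quality factor set.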
Step 4: the temporal processing module takes as input the N groups of per-frame multi-layer quality factor sets obtained in step 3 and fuses them into one group according to a temporal processing method, outputting the multi-layer quality factor set {F_l^1, …, F_l^{n_l}, F_m^1, …, F_m^{n_m}, F_h^1, …, F_h^{n_h}, F_t^1, …, F_t^{n_t}, F_p} of video S;
wherein the temporal processing methods are mainly based on averaging and weighted averaging;
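The plain-averaging variant of step 4 reduces to averaging each factor over the frames; the dict keys below (`"f_l1"`, `"f_p"`) are illustrative names, not the patent's notation.

```python
def temporal_average(per_frame_factors):
    """Fuse per-frame multi-layer quality factor sets into one set (step 4).

    `per_frame_factors` is a list of N dicts, one per frame, all with the
    same keys. Plain temporal averaging, one of the two processing methods
    named in the text, is applied independently per factor.
    """
    n = len(per_frame_factors)
    keys = per_frame_factors[0].keys()
    return {k: sum(f[k] for f in per_frame_factors) / n for k in keys}

frames = [{"f_l1": 30.0, "f_p": 28.0},
          {"f_l1": 34.0, "f_p": 30.0}]
video_factors = temporal_average(frames)   # {"f_l1": 32.0, "f_p": 29.0}
```

A weighted average (the other named method) would simply replace the uniform 1/n weights with per-frame weights.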
Step 5: the multi-layer quality factor fusion module takes as input the multi-layer quality factors obtained in step 4 and fuses them with a fusion model into a single quality evaluation result Q; the output Q is the quality evaluation result of video S;
wherein the fusion models are mainly based on linear regression, nonlinear regression and neural network models;
the parameters of the fusion model can be obtained by empirical design, or trained by machine learning; the machine-learning approach is mainly completed by the following steps: first design a BP neural network structure, then train the network parameters on training data so that the fused result of the quality factors approaches the subjective score;
wherein the training data consist of the quality scores of some panoramic videos obtained by subjective experiments, together with the corresponding video quality factors obtained by steps 1 to 4;
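The forward pass of such a BP network, matching the 6-10-1 shape of the embodiment, can be sketched as below. The weights here are random placeholders standing in for the trained parameters, so the resulting score is meaningless except for its range; only the structure (6 inputs, 10 hidden nodes, 1 output in [0, 1]) follows the text.

```python
import math
import random

def mlp_forward(factors, w1, b1, w2, b2):
    """Forward pass of a small BP-style fusion network (6 -> 10 -> 1).

    `factors` are the multi-layer quality factors from step 4. A tanh
    hidden layer feeds a single sigmoid output, keeping the fused quality
    score in (0, 1). Training (back-propagation on subjective scores) is
    omitted from this sketch.
    """
    hidden = [math.tanh(sum(w * x for w, x in zip(row, factors)) + b)
              for row, b in zip(w1, b1)]
    z = sum(w * h for w, h in zip(w2, hidden)) + b2
    return 1.0 / (1.0 + math.exp(-z))     # sigmoid -> score in (0, 1)

random.seed(0)
n_in, n_hid = 6, 10                        # 6 quality factors, 10 hidden nodes
w1 = [[random.uniform(-1, 1) for _ in range(n_in)] for _ in range(n_hid)]
b1 = [0.0] * n_hid
w2 = [random.uniform(-1, 1) for _ in range(n_hid)]
b2 = 0.0
score = mlp_forward([0.5] * n_in, w1, b1, w2, b2)
```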
Thus, through steps 1 to 5, this method, namely a panoramic video assessment method based on multi-layer quality factors, is completed.
Beneficial effects
Compared with the prior art, the panoramic video assessment method and system based on multi-layer quality factors of the present invention have the following advantages:
the method is well suited to panoramic video quality evaluation: compared with existing ordinary video quality evaluation methods and existing panoramic video quality evaluation methods, the method of the invention considers and fuses the influence of multi-level user regions of interest on video quality, so the quality estimate obtained for the damaged video agrees more closely with the results of subjective experiments, making it better suited to automatic evaluation of panoramic video quality.
Brief description of the drawings
Fig. 1 is a module diagram of the panoramic video quality evaluation system based on multi-layer quality factors of the present invention;
Fig. 2 shows the 5th panoramic frame image and its multi-layer region-of-interest maps in a specific embodiment of the panoramic video assessment method and system based on multi-layer quality factors of the present invention;
Fig. 3 is a structure diagram of the multi-layer quality factor fusion module in a specific embodiment of the panoramic video assessment method and system based on multi-layer quality factors of the present invention.
Embodiments
The present invention is described in detail below with reference to the drawings and embodiments, while also describing the technical problems solved by the technical scheme of the invention and its beneficial effects. It should be pointed out that the described embodiments are intended only to facilitate understanding of the present invention and do not limit it in any way.
Embodiment 1
This embodiment illustrates the method and system of the present invention with two 4K-resolution videos: a lossless panoramic video concert.mp4 and a damaged panoramic video concert_3M.mp4.
Fig. 1 is the module diagram of the panoramic video quality evaluation system based on multi-layer quality factors of the present invention.
As can be seen from Fig. 1, the system decodes the reference video and the damaged video in the panoramic video input module and feeds them into the region-of-interest extraction module, which extracts the low-level, middle-level, high-level, temporal-level and mapping-level regions of interest; based on these region-of-interest matrices, the multi-layer quality factor computation module computes the low-level, middle-level, high-level, temporal-level and mapping-level quality factor sets of the panoramic image pairs; these quality factors are then fed into the temporal processing module to obtain the multi-layer quality factor set of the panoramic video; finally, the multi-layer quality factor fusion module fuses these quality factors into a single quality score, i.e. the automatic assessment result of the damaged video.
Processing the two 4K videos of this embodiment, the lossless panoramic video concert.mp4 and the damaged panoramic video concert_3M.mp4, with the panoramic video assessment method based on multi-layer quality factors supported by the system, comprises the following steps:
Step A: the panoramic video input module decodes the pair of input panoramic video source files; both videos are equirectangular panoramic videos of 10 seconds duration, frame rate 30 fps and resolution 4096×2048; the damaged video is obtained from the lossless video by H.264 compression, the bitrate of the lossless video being 50 Mbps and that of the damaged video 3 Mbps; after decoding, 300 pairs of panoramas are obtained, each image 4096 pixels wide and 2048 pixels high; Fig. 2(A) shows the 5th panoramic frame of the lossless video.
Step B: the region-of-interest extraction module performs region-of-interest extraction on the 300 lossless images; this is completed by the following steps:
Step B.1: using a color-contrast saliency method, compute the 300 low-level region-of-interest matrices M_l^1 of the 300 images, each of size 2048 × 4096; the result for the 5th frame, mapped to image space (values in [0,1] multiplied by 256), is shown in Fig. 2(B); in addition, add another low-level region-of-interest matrix M_l^2, an all-ones matrix of size 2048 × 4096;
Step B.2: divide the image into superpixels, then use two superpixel-saliency ranking methods to compute the middle-level region-of-interest matrices M_m^1 and M_m^2 of the reference frame image; mapped to image space they are shown in Fig. 2(C, D);
Step B.3: using a fully convolutional neural network, perform target semantic segmentation on the reference frame image and use the resulting mask as the high-level region-of-interest matrix M_h, mapped to a binary image as in Fig. 2(E); elements equal to 1 belong to target regions, mainly people, animals and vehicles, and elements equal to 0 belong to the background;
Step B.4: inter-frame motion information is not used in this embodiment, so the temporal-level region-of-interest matrix M_t of this embodiment is a zero matrix;
Step B.5: according to the mapping format of the input video, which is equirectangular, select the corresponding weight matrix M_p; mapped to [0, 255] it is shown in Fig. 2(F); the value of each element of the matrix is determined by its latitude, as in formula (1);
Step B.6: steps B.1 to B.5 of this embodiment yield the 6 region-of-interest matrices of each frame image, 1800 matrices in total.
Step C: the multi-layer quality factor computation module; this example uses the PSNR quality evaluation algorithm; based on the multi-layer region-of-interest matrix sets output by step B, compute the weighted difference matrix sets of the 300 frame image pairs and output the multi-layer quality factor sets; this is completed by the following steps:
Step C.1: from the panoramic image pairs output by step A and the region-of-interest matrices output by step B, form 300 groups in frame order, each group comprising one lossless panorama, one damaged panorama and 6 region-of-interest matrices;
Step C.2: compute the weighted difference matrix between the pixels of the two images as in formula (2), where I(i,j), I'(i,j) and M(i,j) are the values of the corresponding elements in the damaged image, the lossless image and the weighting matrix; if the image has three channels, the weighted difference matrix is computed per channel;
D(i, j) = (I(i, j) − I'(i, j))² × M(i, j)    (2)
Step C.3: compute the quality factor set from the weighted difference matrices as in formula (3); this embodiment uses the PSNR computation; for three-channel images, the average of the three per-channel quality factors is taken as the quality factor of the damaged image;
Step C.4: steps C.1 to C.3 yield 6 quality factors for each damaged frame image; the 300 resulting sets are the output of this module.
Step D: the temporal processing module takes as input the 300 multi-layer quality factor sets obtained in step C; this example uses temporal averaging, i.e. the quality factors at corresponding positions in the sets are averaged as in formula (4), where x and y denote the level index of the quality factor and the index of the region-of-interest method within that level; the module outputs the multi-layer quality factor set of the damaged video concert_3M.mp4.
Step E: the multi-layer quality factor fusion module takes as input the multi-layer quality factor set obtained in step D and fuses it with a BP neural network, obtaining the final quality evaluation score Q(I, I') of video concert_3M.mp4.
Step E.1: the BP neural network used is shown in Fig. 3; the network has 6 input nodes, connected to the 6 quality factors obtained in step D, 10 hidden nodes and 1 output node, and outputs a quality evaluation result in the range [0,1];
Step E.2: the parameters of the fusion model are trained on panoramic video data that do not include the test video concert_3M.mp4.
In this example, the quality assessment values obtained by fusing the 6 multi-layer quality factors are more linearly correlated with the subjective results than any single-factor result. As shown in Table 1, removing the quality factors of each level in turn yields a Spearman rank-order correlation coefficient (SROCC) with the subjective scores that is smaller than the SROCC obtained using the quality factors of all levels. The values in the table were obtained by training the BP network parameters on 288 damaged videos derived from 12 original videos, then testing on 96 damaged videos derived from another 4 original videos; a larger SROCC indicates a better automatic evaluation method.
Table 1: the full multi-layer quality factors compared with removal of the quality factors of one level
The specific description above further explains the purpose, technical scheme and beneficial effects of the invention. It should be understood that the foregoing is only a specific embodiment of the present invention and is not intended to limit its scope of protection; any modification, equivalent substitution, improvement and the like made within the spirit and principles of the present invention shall be included within the scope of protection of the present invention.

Claims (6)

1. A panoramic video assessment system based on multi-layer quality factors, referred to as the system, characterized in that: multiple quality factors are computed based on regions of interest at multiple levels, to address the problem that important regions in a panoramic video strongly influence video quality; the multi-layer quality factors are then fused by a fusion model whose parameters can be learned from subjective data, to account for the subjective preferences of panoramic video users;
the system comprises a panoramic video input module, a region-of-interest extraction module, a multi-layer quality factor computation module, a temporal processing module and a multi-layer quality factor fusion module;
the modules of the system are connected as follows:
the panoramic video input module is connected to the region-of-interest extraction module; the region-of-interest extraction module is connected to the multi-layer quality factor computation module; the multi-layer quality factor computation module is connected to the temporal processing module; and the temporal processing module is connected to the multi-layer quality factor fusion module;
the functions of the modules are as follows:
the panoramic video input module decodes the input video files to obtain pairs of panoramic frame images; the region-of-interest extraction module extracts the multi-layer region-of-interest matrices of a panoramic image; the multi-layer quality factor computation module computes the quality factors of a panoramic image from the region-of-interest matrices; the temporal processing module computes the quality factors of the panoramic video from the per-frame quality factors; and the multi-layer quality factor fusion module fuses the quality factors of the panoramic video to obtain the automatic assessment result of the damaged video.
A kind of 2. panoramic video appraisal procedure based on multi-layer quality factor, it is characterised in that:Comprise the following steps:
Step 1:Panoramic video input module carries out Video processing and decoding to a pair of panoramic video source files for inputting the system Processing, obtains panorama two field picture pair;
In step 1, panoramic video in a pair of panoramic video source files of input for one section lossless reference video S ' and one section and Reference video content identical damages video S, and what the damage damaged in video S included being artificially introduced obscures plus make an uproar and encode Based on processing caused by damage, also including in network transmission process as packet loss and error code for it is main the reason for caused by damage;
Wherein, lossless reference video is also referred to as reference video;
Obtained panorama two field picture is to comprising the reference frame image obtained by reference video, and the damage frame obtained by damage video Image;
Step 2: the region-of-interest extraction module performs region-of-interest extraction on the panoramic frame image pairs output by step 1, using image processing and computer vision algorithms, and outputs the multi-layer region-of-interest matrix sets;
Step 3: the multi-layer quality factor computing module uses quality assessment algorithms to compute, based on the multi-layer region-of-interest matrix sets output by step 2, the weighted differences of the panoramic frame image pairs output by step 1, and outputs the multi-layer quality factor sets of the N groups of frame image pairs;
Step 4: the time-domain processing module takes as input the N groups of image multi-layer quality factor sets obtained in step 3 and, using a time-domain processing method, fuses them into one group, outputting the multi-layer quality factor set of the damaged video S:
{F_l^1, F_l^2, ..., F_l^(n_l), F_m^1, F_m^2, ..., F_m^(n_m), F_h^1, F_h^2, ..., F_h^(n_h), F_t^1, F_t^2, ..., F_t^(n_t), F_p};
Wherein, the time-domain processing method is based mainly on averaging and weighted averaging;
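The temporal pooling of step 4 (a mean or weighted mean over the N per-frame factors) can be sketched as follows; `temporal_pool` is a hypothetical helper name, and NumPy is assumed:

```python
import numpy as np

def temporal_pool(frame_factors, weights=None):
    """Fuse N per-frame quality factors into one per-video factor.

    frame_factors: array of shape (N,) -- one factor per frame.
    weights: optional array of shape (N,); None -> plain mean.
    """
    frame_factors = np.asarray(frame_factors, dtype=float)
    if weights is None:
        return float(frame_factors.mean())  # simple temporal mean
    weights = np.asarray(weights, dtype=float)
    return float(np.average(frame_factors, weights=weights))  # weighted mean
```

The same pooling is applied independently to each factor in the multi-layer set, so one call per factor yields the per-video set.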
Step 5: the multi-layer quality factor fusion module takes as input the multi-layer quality factor set obtained in step 4 and fuses it into a single quality assessment result Q using a fusion model;
The output result Q is the quality assessment result of the damaged video S;
Thus, through step 1 to step 5, the method is completed, namely a panoramic video assessment method based on multi-layer quality factors.
3. The panoramic video assessment method based on multi-layer quality factors according to claim 2, characterized in that: in step 1, the panoramic video in the pair of input panoramic video source files consists of one lossless reference video S' and one damaged video S with the same content as the reference video; the damage contained in the damaged video S includes damage introduced artificially by processing such as blurring, noise addition and encoding, as well as damage arising in network transmission mainly from packet loss and bit errors;
Wherein, the lossless reference video is also simply called the reference video;
Step 1.1 judges whether the pair of panoramic video source files input to the system have the same resolution, frame rate and duration, and the same mapping format, the mapping formats including mainly equirectangular (longitude-latitude) mapping, cube mapping and square-pyramid mapping, and performs the corresponding operation according to the judgment result:
1.1A If the pair of panoramic video source files input to the system have the same resolution, frame rate and duration, and the same mapping format, skip to step 1.2;
1.1B If the pair of panoramic video source files input to the system do not have the same resolution, frame rate and duration, and the same mapping format, the panoramic video input module performs video processing on the damaged video, mainly pixel interpolation, frame duplication and mapping transformation, so that the damaged video has the same resolution, frame rate and duration, and the same mapping format, as the reference video;
Step 1.2 uses a decoding tool based on ffmpeg to perform decoding according to the encoding format of the pair of panoramic video source files input to the system, decoding each panoramic video into multiple frame images and thereby obtaining the panoramic frame image pairs; wherein the number of video frames of the panoramic video source files is N, and the obtained panoramic frame image pairs number N groups, comprising the N reference frame images obtained from the reference video and the N damaged frame images obtained from the damaged video, the width and height of each panoramic frame image being W and H respectively.
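An ffmpeg-based decode as in step 1.2 might be invoked as in the following sketch; the helper name and file patterns are illustrative only, and the command uses ffmpeg's standard numbered-image output:

```python
import subprocess

def decode_to_frames(video_path, out_pattern):
    """Build the ffmpeg command that decodes a panoramic video into
    numbered frame images.

    out_pattern is e.g. 'frames/ref_%04d.png'; ffmpeg substitutes the
    frame index for %04d, producing one image per video frame.
    """
    cmd = ["ffmpeg", "-i", video_path, out_pattern]
    return cmd  # execute with subprocess.run(cmd, check=True)
```

Running the same command once for the reference video and once for the damaged video yields the N reference/damaged frame pairs.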
4. The panoramic video assessment method based on multi-layer quality factors according to claim 2, characterized in that: in step 2, specifically: region-of-interest (ROI) extraction is performed on the reference frame images I' of the panoramic frame image pairs output by step 1; the multi-layer region-of-interest matrix set is the set of all matrices in the low-level region-of-interest matrix set {M_l^1, M_l^2, ..., M_l^(n_l)}, the middle-level region-of-interest matrix set {M_m^1, M_m^2, ..., M_m^(n_m)}, the high-level region-of-interest matrix set {M_h^1, M_h^2, ..., M_h^(n_h)}, the time-domain-level region-of-interest matrix set {M_t^1, M_t^2, ..., M_t^(n_t)} and the mapping-level region-of-interest matrix M_p, where M denotes a two-dimensional matrix of size H × W, i.e. one region-of-interest matrix of image I'; the value range of the elements in M is [0, 1], M(i, j) being the value at row i, column j of the matrix; the larger the value, the more easily the pixel I'(i, j) at the corresponding position in the reference frame image I' is noticed by the viewer, and the greater its influence on the video quality; the subscripts l, m, h, t, p of M indicate that the matrix is obtained by the region-of-interest extraction method of the low, middle, high, time-domain or mapping level respectively, and the superscripts 1, 2, ..., n of M indicate that the matrix is obtained by the n-th method of that level, where n_l, n_m, n_h, n_t take integer values greater than or equal to 1, indicating that the low, middle, high and time-domain levels may each use one or more methods to obtain one or more region-of-interest matrices, while the mapping level obtains its region-of-interest matrix from only one method;
The above explanation of the number of region-of-interest matrices is for a single reference frame image I'; for the N reference frame images of the N groups of panoramic frame image pairs output by step 1, the number of region-of-interest matrices output by step 2 is (n_l + n_m + n_h + n_t + 1) × N;
The multi-layer region-of-interest matrices are produced by steps 2.1 to 2.5 respectively, specifically:
Step 2.1 computes the low-level regions of interest of the reference frame images using pixel-level image processing methods, outputting the low-level region-of-interest matrix set {M_l^1, M_l^2, ..., M_l^(n_l)};
Wherein, the pixel-level image processing methods are based mainly on color contrast and edge detection;
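A minimal stand-in for the edge-detection branch of step 2.1, producing an H × W matrix with values in [0, 1] as the claim requires; plain NumPy finite differences are assumed here in place of a production edge detector:

```python
import numpy as np

def edge_roi(gray):
    """Low-level ROI matrix from gradient magnitude, scaled to [0, 1].

    gray: H x W luminance image as an array; stronger edges receive
    values closer to 1, matching the ROI-matrix value convention.
    """
    gy, gx = np.gradient(gray.astype(float))  # finite-difference gradients
    mag = np.hypot(gx, gy)                    # edge strength per pixel
    if mag.max() > 0:
        mag = mag / mag.max()                 # normalize into [0, 1]
    return mag
```

A color-contrast branch would produce a second matrix M_l^2 of the same shape, which is why n_l may exceed 1.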
Step 2.2 computes the middle-level regions of interest of the reference frame images using superpixel processing methods, outputting the middle-level region-of-interest matrix set {M_m^1, M_m^2, ..., M_m^(n_m)};
Wherein, the superpixel processing methods are based mainly on ranking the saliency of superpixel blocks;
Step 2.3 computes the high-level regions of interest of the reference frame images using computer vision methods, usually the regions viewers readily attend to, mainly people, animals and vehicles, outputting the high-level region-of-interest matrix set {M_h^1, M_h^2, ..., M_h^(n_h)};
Wherein, the computer vision methods are based mainly on object segmentation and semantic segmentation;
Step 2.4 computes the time-domain-level regions of interest from two adjacent reference frames using image processing methods, usually the moving objects viewers readily attend to, outputting the time-domain-level region-of-interest matrix set {M_t^1, M_t^2, ..., M_t^(n_t)};
Wherein, the image processing methods are based mainly on optical flow estimation and motion estimation;
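The motion-based temporal ROI of step 2.4 can be approximated as below; absolute frame differencing is assumed here as a crude stand-in for the optical-flow and motion-estimation methods named in the claim:

```python
import numpy as np

def temporal_roi(prev_gray, curr_gray):
    """Temporal ROI matrix from two consecutive reference frames.

    Motion strength is approximated by the absolute per-pixel frame
    difference, normalized into [0, 1]; pixels that change between the
    frames (moving objects) receive the largest values.
    """
    diff = np.abs(curr_gray.astype(float) - prev_gray.astype(float))
    if diff.max() > 0:
        diff = diff / diff.max()  # scale motion strength into [0, 1]
    return diff
```

A real implementation would instead compute dense optical flow and use the flow magnitude as the per-pixel motion strength.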
Step 2.5 selects, according to the mapping format of the pair of panoramic video source files input to the system, the corresponding weight matrix, and outputs that weight matrix as the mapping-level region-of-interest matrix M_p;
For the equirectangular (longitude-latitude) mapping format, the pole weights of the corresponding weight matrix are smaller than the equator weights; for the square-pyramid mapping format, the base weights of the corresponding weight matrix are larger than the side-face weights;
Wherein, the mapping-level region-of-interest matrix output by step 2.5 is related only to the video mapping format and not to the frame images themselves; once the mapping format of the input video is determined, the region-of-interest matrix of every frame is identical.
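For the equirectangular case of step 2.5, one common choice (as in WS-PSNR-style sphere weighting) is per-row weights proportional to the cosine of latitude, which satisfies the claim's "equator large, poles small" rule; the cosine form is an assumption, not mandated by the claim:

```python
import numpy as np

def equirect_weight_matrix(H, W):
    """Mapping-level ROI matrix M_p for an equirectangular panorama.

    Row i sits at latitude ((i + 0.5)/H - 0.5) * pi; weighting each row
    by the cosine of its latitude gives the equator weight near 1 and
    the poles weight near 0.
    """
    lat = ((np.arange(H) + 0.5) / H - 0.5) * np.pi
    row_w = np.cos(lat)                     # in (0, 1], largest at equator
    return np.tile(row_w[:, None], (1, W))  # same weight across each row
```

Because M_p depends only on H, W and the mapping format, it is computed once and reused for every frame, exactly as the claim states.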
5. The panoramic video assessment method based on multi-layer quality factors according to claim 2, characterized in that: in step 3, the multi-layer quality factor set is the set of all values in the low-level quality factor set {f_l^1, f_l^2, ..., f_l^(n_l)}, the middle-level quality factor set {f_m^1, f_m^2, ..., f_m^(n_m)}, the high-level quality factor set {f_h^1, f_h^2, ..., f_h^(n_h)}, the time-domain-level quality factor set {f_t^1, f_t^2, ..., f_t^(n_t)} and the mapping-level quality factor f_p, where f denotes a number greater than 0 whose subscripts and superscripts are consistent with those of M in step 2, indicating that the quality factor is obtained from the corresponding region-of-interest matrix; the processing procedure is specifically completed by the following steps:
Step 3.1 groups the panoramic frame image pairs output by step 1 and the low-, middle-, high-, time-domain- and mapping-level region-of-interest matrices output by step 2 into N groups according to frame order, each group comprising: one lossless panorama, one damaged panorama and the multi-layer region-of-interest matrix set;
Step 3.2 computes the quality difference matrix D between the lossless and damaged panoramas using a pixel-difference assessment method; D is a two-dimensional matrix of size H × W, and D(i, j) represents the color/luminance difference between the lossless and damaged panoramas at pixel position (i, j), which can be computed by the Euclidean distance method;
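The difference matrix D of step 3.2, with the Euclidean-distance option, can be sketched as follows (NumPy assumed):

```python
import numpy as np

def diff_matrix(ref, dmg):
    """H x W difference matrix D between reference and damaged frames.

    ref, dmg: H x W x C color images (or H x W luminance images);
    D(i, j) is the Euclidean distance between the two pixels at (i, j).
    """
    d = ref.astype(float) - dmg.astype(float)
    if d.ndim == 2:                        # luminance-only input
        return np.abs(d)
    return np.sqrt((d ** 2).sum(axis=-1))  # per-pixel Euclidean distance
```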
Step 3.3 multiplies each region-of-interest matrix M element-wise with the difference matrix D, obtaining the weighted difference matrix set
{D_l^1, D_l^2, ..., D_l^(n_l), D_m^1, D_m^2, ..., D_m^(n_m), D_h^1, D_h^2, ..., D_h^(n_h), D_t^1, D_t^2, ..., D_t^(n_t), D_p};
Step 3.4 uses conventional objective image quality assessment methods to map the weighted difference matrix set into the multi-layer quality factor set of the damaged image
{f_l^1, f_l^2, ..., f_l^(n_l), f_m^1, f_m^2, ..., f_m^(n_m), f_h^1, f_h^2, ..., f_h^(n_h), f_t^1, f_t^2, ..., f_t^(n_t), f_p};
Wherein, the conventional objective image quality assessment methods are based mainly on MSE, PSNR and SSIM.
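Steps 3.3 and 3.4 combined, with a PSNR-style mapping chosen from the MSE/PSNR/SSIM family named in the claim; normalizing the weighted MSE by the total ROI weight is an assumption about how the weighting enters the mapping:

```python
import numpy as np

def weighted_psnr_factor(D, M, peak=255.0):
    """Quality factor f from difference matrix D and ROI matrix M.

    Element-wise weighting of the squared differences (step 3.3),
    normalized by the total ROI weight, then a PSNR-style mapping
    (step 3.4).  Larger f means the damage is less visible inside
    the region of interest.
    """
    mse = float((M * D ** 2).sum() / max(M.sum(), 1e-12))  # weighted MSE
    if mse == 0:
        return float("inf")            # identical within the ROI
    return 10.0 * np.log10(peak ** 2 / mse)
```

One such factor is computed per ROI matrix, which is how the (n_l + n_m + n_h + n_t + 1) factors per frame arise.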
6. The panoramic video assessment method based on multi-layer quality factors according to claim 2, characterized in that: in step 5, the fusion model is based mainly on linear regression, nonlinear regression and neural network models;
The parameters of the fusion model can be obtained by empirical design, or trained by machine learning, wherein the machine-learning-based method can be completed mainly by the following steps: first design a BP neural network structure, then train the parameters of the BP network using training data, so that the fused result of these quality factors approaches the subjective scores;
Wherein, the training data used consists of the quality scores of certain panoramic videos obtained through subjective experiments, and the video quality factors obtained through steps 1 to 4.
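The trained fusion of claim 6 can be illustrated with a linear-regression stand-in (plain least squares replacing the BP network, as an assumption), fitted on subjective scores:

```python
import numpy as np

def fit_fusion(factor_rows, subjective_scores):
    """Fit linear fusion weights mapping quality-factor sets to scores.

    factor_rows: (num_videos, num_factors) matrix, one row of quality
    factors per training video; subjective_scores: (num_videos,)
    MOS-style targets.  Returns the weight vector (with bias) found by
    least squares -- a linear stand-in for the BP-network fusion.
    """
    X = np.hstack([np.asarray(factor_rows, float),
                   np.ones((len(factor_rows), 1))])        # append bias column
    w, *_ = np.linalg.lstsq(X, np.asarray(subjective_scores, float),
                            rcond=None)
    return w

def fuse(factors, w):
    """Apply trained weights to one video's factor set -> final score Q."""
    return float(np.dot(np.append(np.asarray(factors, float), 1.0), w))
```

A BP network would replace the single linear layer with one or more nonlinear hidden layers trained by backpropagation on the same (factors, score) pairs.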
CN201710683578.5A 2017-08-11 2017-08-11 A kind of panoramic video appraisal procedure and system based on multi-layer quality factor Active CN107483920B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710683578.5A CN107483920B (en) 2017-08-11 2017-08-11 A kind of panoramic video appraisal procedure and system based on multi-layer quality factor

Publications (2)

Publication Number Publication Date
CN107483920A true CN107483920A (en) 2017-12-15
CN107483920B CN107483920B (en) 2018-12-21

Family

ID=60599247

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710683578.5A Active CN107483920B (en) 2017-08-11 2017-08-11 A kind of panoramic video appraisal procedure and system based on multi-layer quality factor

Country Status (1)

Country Link
CN (1) CN107483920B (en)

Cited By (19)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108271020A (en) * 2018-04-24 2018-07-10 福州大学 A kind of panoramic video quality evaluating method of view-based access control model attention model
CN108460411A (en) * 2018-02-09 2018-08-28 北京市商汤科技开发有限公司 Example dividing method and device, electronic equipment, program and medium
CN108683909A (en) * 2018-07-12 2018-10-19 北京理工大学 VR audio and video overall customer experience method for evaluating quality
CN109377481A (en) * 2018-09-27 2019-02-22 上海联影医疗科技有限公司 Image quality evaluating method, device, computer equipment and storage medium
CN110139169A (en) * 2019-06-21 2019-08-16 上海摩象网络科技有限公司 Method for evaluating quality and its device, the video capture system of video flowing
CN110211090A (en) * 2019-04-24 2019-09-06 西安电子科技大学 A method of for assessment design composograph quality
CN110312170A (en) * 2019-07-12 2019-10-08 青岛一舍科技有限公司 A kind of video broadcasting method and device at adjustment visual angle
WO2020000522A1 (en) * 2018-06-27 2020-01-02 深圳看到科技有限公司 Method and device for picture quality assessment after dynamic panoramic video stream cropping
CN111093069A (en) * 2018-10-23 2020-05-01 大唐移动通信设备有限公司 Quality evaluation method and device for panoramic video stream
CN111127298A (en) * 2019-06-12 2020-05-08 上海大学 Panoramic image blind quality assessment method
CN111402860A (en) * 2020-03-16 2020-07-10 恒睿(重庆)人工智能技术研究院有限公司 Parameter management method, system, medium and device
CN111696081A (en) * 2020-05-18 2020-09-22 南京大学 Method for reasoning panoramic video quality according to visual field video quality
WO2020233536A1 (en) * 2019-05-17 2020-11-26 华为技术有限公司 Vr video quality evaluation method and device
US10950016B2 (en) 2018-06-11 2021-03-16 Shanghai United Imaging Healthcare Co., Ltd. Systems and methods for reconstructing cardiac images
CN112565208A (en) * 2020-11-24 2021-03-26 鹏城实验室 Multi-user panoramic video cooperative transmission method, system and storage medium
CN112634468A (en) * 2021-03-05 2021-04-09 南京魔鱼互动智能科技有限公司 Virtual scene and real scene video fusion algorithm based on SpPccs
WO2021164216A1 (en) * 2020-02-21 2021-08-26 华为技术有限公司 Video coding method and apparatus, and device and medium
CN114079777A (en) * 2020-08-20 2022-02-22 华为技术有限公司 Video processing method and device
US11270158B2 (en) 2018-02-09 2022-03-08 Beijing Sensetime Technology Development Co., Ltd. Instance segmentation methods and apparatuses, electronic devices, programs, and media

Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101146226A (en) * 2007-08-10 2008-03-19 中国传媒大学 A highly-clear video image quality evaluation method and device based on self-adapted ST area
CN104243973A (en) * 2014-08-28 2014-12-24 北京邮电大学 Video perceived quality non-reference objective evaluation method based on areas of interest

Non-Patent Citations (1)

Title
BO ZHANG: "Subjective and objective quality assessment of panoramic videos in virtual reality environments", 《2017 IEEE INTERNATIONAL CONFERENCE ON MULTIMEDIA & EXPO WORKSHOPS (ICMEW)》 *


Also Published As

Publication number Publication date
CN107483920B (en) 2018-12-21


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant