CN102790844B - Video noise estimation method based on human eye visual characteristics - Google Patents

Publication number: CN102790844B (application CN201210242301.6A)
Legal status: Active (the legal status is an assumption and is not a legal conclusion; Google has not performed a legal analysis)
Inventors: 尚凌辉, 林国锡, 王亚利, 高勇
Current assignee: ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd (the listed assignees may be inaccurate; Google has not performed a legal analysis)
Application filed by ZHEJIANG ICARE VISION TECHNOLOGY Co Ltd, which is also the original assignee
Publication of application CN102790844A; application granted; publication of grant CN102790844B
Other languages: Chinese (zh)

Abstract

The invention relates to a video noise estimation method based on human-eye visual characteristics. Existing methods produce a large number of false detections and missed detections, especially on noisy videos that have been encoded and decoded. In the present method, a video library is first prepared and manually annotated to obtain MOS scores. The over-dark and over-bright regions of a video are determined by analyzing its luminance distribution, inter-frame object motion is separated from random variation, and the inter-frame non-motion change is extracted as the reference quantity for noise evaluation. In the noise evaluation stage, a just-noticeable-difference (JND) model is introduced to measure the visible degree u of the noise, the inter-frame structural change is used to measure the visual effect v of the noise, and a relational model among MOS, u and v is fitted. The method effectively overcomes the problems of existing methods, performs well on non-Gaussian noise and even on noise that is not independently and identically distributed, and yields estimation results that correlate strongly with human visual perception.

Description

Video noise estimation method based on human-eye visual characteristics
Technical field
The invention belongs to the fields of video analysis and video surveillance, and relates to a video noise estimation method based on human-eye visual characteristics.
Background art
With the development and popularization of computer technology, network communication technology, video surveillance technology and consumer electronics, video has found increasingly wide application. However, in the course of acquisition, encoding and transmission, video may be affected by various factors — aging equipment, low illumination, unclean power supplies, coding quantization, electromagnetic interference and so on — and thus polluted by noise of different degrees and forms. This degrades video quality, harming not only the visual experience but also subsequent higher-level video processing. A reliable noise estimation method is therefore of great value for timely assessment of video noise levels and for real-time monitoring of the performance of video acquisition and transmission equipment.
Noise estimation has long been a popular research topic. In recent years it has also been applied to video quality diagnosis, where it is used to assess the noise level of a video, raise an alarm on videos that are severely polluted by noise, and notify maintenance personnel to repair the monitoring device. Current noise estimation methods follow similar procedures, roughly as shown in Fig. 1:
The design of such methods is concise, but in practical applications they produce a large number of false and missed detections, especially on noise videos that have passed through encoding and decoding. The main causes can be summarized as follows:
1. Unreasonable noise model assumption: whether in academic research or engineering application, the noise is generally assumed to be white Gaussian. In practice the noise model is rarely so ideal; often the noise is not even independently and identically distributed, particularly after encoding and decoding. This assumption therefore causes many false and missed detections in practical applications.
2. Unreasonable flat-region computation: such algorithms generally mark regions of small variance as flat regions and take the variance there as an approximation of the noise variance. This seems plausible, but closer inspection reveals two serious flaws: 1) a region of small variance is not necessarily flat — after noise is added, the local variance of a textured region may become larger, but it may also become smaller; 2) in videos containing over-dark or over-exposed regions, the noise intensity is clipped, so the variance in those regions is smaller than the true noise variance. Both cases bias the noise estimate downward.
3. Unreasonable noise intensity evaluation: in practical applications, especially when the receiver of the video is the human eye (as is usually the case), representing noise intensity by the noise variance often fails to match human visual experience, because noise of the same variance has different visual effects in different videos.
A robust video noise estimation method that agrees with human-eye visual characteristics is therefore of great value in both the video analysis and surveillance fields.
Summary of the invention
To address the shortcomings of current video noise detection, the present invention proposes a noise estimation method based on human-eye visual characteristics. The method is mainly as follows: prepare a video library and manually annotate it to obtain MOS scores; determine the over-dark and over-bright regions of a video by analyzing its luminance distribution; separate inter-frame object motion from random variation and extract the inter-frame non-motion change as the reference quantity for noise evaluation; in the noise evaluation stage, introduce a just-noticeable-difference (JND) model to measure the visible degree u of the noise and use the inter-frame structural change to measure the visual effect v of the noise; and fit a relational model among MOS, u and v. The method effectively overcomes the problems of existing methods, performs well on non-Gaussian noise and even on noise that is not independently and identically distributed, and yields estimation results that correlate strongly with human visual perception.
The technical scheme adopted by the invention to solve the problem comprises:
Step 1) Manual annotation of noise intensity: prepare a video library containing videos of different scenes and different noise levels. Several different people subjectively score the noise level of each video on a scale of 0 to 100; the mean score MOS of each video is its human-eye subjective noise level.
Step 2) Denoising: to locate moving regions more accurately and reduce the interference of noise with the localization, first apply a denoising pass to each frame of the video.
Step 3) Moving-region localization: inter-frame pixel changes fall into three cases: noise interference, light change, or object motion. Since the video has been denoised in Step 2, the inter-frame pixel changes here only need to distinguish object motion from light change. With a fixed camera, when the light changes, the inter-frame pixel changes of a local region follow the rule of formula (1). Assuming local illumination consistency, normalize the pixels of the two adjacent frames as in formulas (2) and (3), take the frame difference between the normalized image f′_{n+1} and f_n, compute the variance of the frame-difference image block by block, and mark blocks whose variance exceeds a threshold T_δ as moving regions;
f_n(x_i, y_i) / f_{n+1}(x_i, y_i) = f_n(x_j, y_j) / f_{n+1}(x_j, y_j)    (1)
δ = Σ_{(x,y)∈A} f_n(x, y) / Σ_{(x,y)∈A} f_{n+1}(x, y)    (2)
f′_{n+1}(x, y) = f_{n+1}(x, y) × δ    (3)
where f_n(x_i, y_i) denotes the pixel value at coordinate (x_i, y_i) in frame n of the video, f_{n+1}(x_i, y_i) the pixel value at (x_i, y_i) in frame n+1, f_n(x_j, y_j) the pixel value at (x_j, y_j) in frame n, and f_{n+1}(x_j, y_j) the pixel value at (x_j, y_j) in frame n+1; Σ_{(x,y)∈A} f_n(x, y) is the sum of pixel values over the local region A of frame n, and Σ_{(x,y)∈A} f_{n+1}(x, y) the corresponding sum for frame n+1; δ is the illumination change ratio; f′_{n+1}(x, y) is the frame n+1 image estimated under local illumination consistency.
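As an illustration, the blockwise localization of Step 3 might be sketched as follows in Python; the block size of 16 and the threshold T_δ = 25 are assumed values, since the patent leaves them unspecified.

```python
import numpy as np

def locate_motion_blocks(f_n, f_n1, block=16, t_delta=25.0):
    """Blockwise motion localization under the local-illumination-consistency
    assumption of formulas (1)-(3). Block size and threshold are illustrative
    choices; the patent does not specify them."""
    f_n = f_n.astype(np.float64)
    f_n1 = f_n1.astype(np.float64)
    h, w = f_n.shape
    motion = np.zeros((h // block, w // block), dtype=bool)
    for by in range(h // block):
        for bx in range(w // block):
            a_n = f_n[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            a_n1 = f_n1[by * block:(by + 1) * block, bx * block:(bx + 1) * block]
            # Formula (2): illumination change ratio over the local region A
            delta = a_n.sum() / max(a_n1.sum(), 1e-9)
            # Formula (3): normalize frame n+1 to the illumination of frame n
            a_n1_norm = a_n1 * delta
            # Variance of the frame difference; large variance => object motion
            if (a_n1_norm - a_n).var() > t_delta:
                motion[by, bx] = True
    return motion
```

A pure illumination change leaves the normalized difference at zero, so only object motion trips the threshold.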
Step 4) Localization of over-bright and over-dark regions: the dynamic range of a camera is limited. When the scene simultaneously contains low-illumination and high-illumination regions that exceed the camera's dynamic adjustment range, the over-bright and over-dark regions of the image are clipped, and the noise there is clipped as well, so the noise intensity in over-bright and over-dark regions is lower than in other regions.
To make the noise estimate more robust, the interference of over-bright and over-dark regions should be filtered out during noise estimation. The over-bright regions are defined as regions with pixel values between 225 and 255, and the over-dark regions as regions with pixel values between 0 and 30.
Step 5) Noise mask: considering the impact of object motion and brightness on noise estimation, noise intensity is estimated only over the non-moving and non-over-bright, non-over-dark regions; these regions together are recorded as the noise mask region.
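A sketch of the mask of Steps 3–5, combining the blockwise motion flags with the brightness thresholds; the 16-pixel block size is an assumption carried over from the motion-localization grid.

```python
import numpy as np

def noise_mask(frame, motion_blocks, block=16, dark_hi=30, bright_lo=225):
    """Noise mask per Steps 3-5: keep only non-moving pixels whose values lie
    outside the over-dark (0-30) and over-bright (225-255) clipping ranges."""
    h, w = frame.shape
    # Expand the blockwise motion flags back to pixel resolution
    motion_px = np.kron(motion_blocks.astype(np.uint8),
                        np.ones((block, block), dtype=np.uint8)).astype(bool)[:h, :w]
    brightness_ok = (frame > dark_hi) & (frame < bright_lo)
    return brightness_ok & ~motion_px
```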
Step 6) Just-noticeable-difference model (JND): the basic idea of this model is that, under a given background luminance, the human eye cannot perceive pixel-value changes within a certain range. Experiments show that there is a critical point of perceivable change, called the just-noticeable point; under different backgrounds it is given by formula (4), where f_A denotes the background luminance of the region;
JND = 4.0 + 0.12301 × f_A    (4)
Step 7) Inter-frame non-motion change: the frame difference within the noise mask region is defined as the inter-frame non-motion change.
Step 8) Visible degree of the noise: compare the inter-frame non-motion change with the JND image to obtain the visible degree of the noise, recorded as u. The non-motion change here still includes light changes, which are filtered out in the next step.
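One hedged reading of Steps 6–8 in Python: the JND image of formula (4) is built from the background luminance (here approximated by frame n itself), and the visible degree u is summarized as the mean ratio of the masked non-motion change to the JND threshold. The patent only says the two images are "compared", so the mean-ratio summary is an assumption.

```python
import numpy as np

def noise_visibility(f_n, f_n1_norm, mask):
    """Visible degree u of the noise (Steps 6-8). The mean suprathreshold
    ratio is an assumed summary statistic; the background luminance f_A is
    approximated by frame n."""
    diff = np.abs(f_n1_norm.astype(np.float64) - f_n.astype(np.float64))
    jnd = 4.0 + 0.12301 * f_n.astype(np.float64)  # formula (4)
    visible = diff[mask] / jnd[mask]
    return float(visible.mean()) if visible.size else 0.0
```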
Step 9) Inter-frame structural change: apply the operation of Step 3 to the adjacent frames of the original, un-denoised video, and record the resulting noise variance as the visual effect of the noise, denoted v. A larger variance indicates a stronger visual effect of the noise. Light changes are filtered out well here, because the inter-frame variance they cause is very small.
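Step 9 could be sketched as follows; applying a single global illumination normalization and restricting the variance to the noise mask are assumptions about how the operation on the un-denoised frames is carried out.

```python
import numpy as np

def structural_change(f_n, f_n1, mask):
    """Visual-effect measure v (Step 9): variance of the inter-frame change
    on the un-denoised frames within the noise mask. A pure illumination
    change contributes (near-)zero variance and is thereby filtered out."""
    f_n = f_n.astype(np.float64)
    f_n1 = f_n1.astype(np.float64)
    # Illumination normalization as in formulas (2)-(3), here applied globally
    delta = f_n[mask].sum() / max(f_n1[mask].sum(), 1e-9)
    diff = f_n1 * delta - f_n
    return float(diff[mask].var())
```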
Step 10) Fitting the noise intensity features to the mean noise score MOS: Steps 8 and 9 yield the noise intensity feature values relevant to human visual experience, namely u and v. The relation between (u, v) and MOS is nonlinear and is fitted with formula (5),
MOS = α + β × u^χ × v^γ    (5)
where α, β, χ and γ are the parameters of the nonlinear model.
Step 11) Noise intensity calculation: for a video to be evaluated, compute the feature values u and v by Steps 2 to 9 and substitute them into formula (5) to obtain a noise intensity value that accords with human-eye visual characteristics.
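Steps 10–11 amount to a four-parameter nonlinear least-squares fit of formula (5) followed by prediction. A sketch using SciPy's `curve_fit`; the optimizer and the initial guesses are not specified by the patent and are assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def mos_model(uv, alpha, beta, chi, gamma):
    """Formula (5): MOS = alpha + beta * u**chi * v**gamma."""
    u, v = uv
    return alpha + beta * u**chi * v**gamma

def fit_mos(u, v, mos):
    """Fit the nonlinear model of Step 10 to annotated (u, v, MOS) triples.
    Initial guesses are assumed; any nonlinear least-squares routine works."""
    p0 = (mos.mean(), -1.0, 1.0, 1.0)
    params, _ = curve_fit(mos_model, (u, v), mos, p0=p0, maxfev=20000)
    return params
```

Once fitted, Step 11 is just `mos_model((u_new, v_new), *params)` on the features of the video under evaluation.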
Beneficial effects of the invention: the invention introduces luminance analysis, object motion analysis and human visual characteristics into video noise estimation. It can effectively estimate the noise level in a video, and the resulting noise degree value closely matches human subjective assessment. It is therefore highly useful for video quality evaluation and for fault diagnosis of monitoring devices.
Brief description of the drawings
Fig. 1 is the flow chart of current mainstream noise estimation algorithms;
Fig. 2 shows the computation of the region mask for noise estimation;
Fig. 3 shows the fitting of the noise feature values to human vision;
Fig. 4 shows the noise intensity calculation.
Embodiments
The invention is further described below with reference to the accompanying drawings.
In the noise extraction stage the invention fully accounts for the influence of over-exposed and over-dark regions on noise estimation, as well as the interference of object motion, which gives the estimate good robustness across different scenes. In the noise evaluation stage it introduces human-eye visual characteristics to evaluate the visibility and visual effect of the noise, so that the estimate correlates highly with human visual perception.
The invention mainly comprises the following parts:
1. Manual annotation of video noise intensity: to build the visual model for noise estimation, the human-perceived noise intensity of the sample videos must be obtained, so the sample videos are manually annotated with noise intensity; the concrete steps are as described in Step 1.
2. Determining the noise mask: in practical noise estimation, not every region contributes positively to the estimate, so the image region that participates in the computation — the noise mask — must be determined. The algorithm flow is shown in Fig. 2; the concrete steps are as described in Steps 2, 3, 4 and 5.
3. Feature value computation: extract the noise intensity feature values relevant to human vision. The algorithm flow is shown in Fig. 3; the concrete steps are as described in Steps 6, 7, 8 and 9.
4. Fitting the feature values to the human visual effect: to improve the correlation between the noise intensity estimate and human vision, the noise feature values are fitted to the human visual effect; the concrete steps are as described in Step 10.
5. Noise intensity estimation: extract the noise feature values of the video to be evaluated and compute the noise intensity with the fitted visual model. The algorithm flow is shown in Fig. 4; the concrete steps are as described in Step 11.
Concrete implementation steps:
1) Manual annotation of noise intensity: prepare a video library containing videos of different scenes and different noise levels. Several different people subjectively score the noise level of each video on a scale of 0 to 100; the mean score MOS of each video is its human-eye subjective noise level. In the implementation, the invention used a library of 2000 videos, each scored by 20 people.
2) Denoising: to locate moving regions more accurately and reduce the interference of noise with the localization, first apply a denoising pass to each frame of the video. The denoising uses a 3 × 3 mean filter.
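The 3 × 3 mean filter of step 2) can be sketched directly with NumPy; edge-replicating border handling is an assumption, since the patent does not specify it.

```python
import numpy as np

def mean_filter_3x3(img):
    """3x3 mean filter used as the pre-denoising of step 2). Border pixels
    are handled by edge replication (an assumed choice)."""
    padded = np.pad(img.astype(np.float64), 1, mode='edge')
    out = np.zeros(img.shape, dtype=np.float64)
    # Sum the 9 shifted copies of the padded image, then average
    for dy in range(3):
        for dx in range(3):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / 9.0
```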
3) Moving-region localization: inter-frame pixel changes fall into three cases: noise interference, light change, or object motion. Since the video has been denoised in step 2), the inter-frame pixel changes here only need to distinguish object motion from light change. With a fixed camera, when the light changes, the inter-frame pixel changes of a local region follow the rule of formula (1). Assuming local illumination consistency, normalize the pixels of the two adjacent frames as in formulas (2) and (3), take the frame difference between the normalized image f′_{n+1} and f_n, compute the variance of the frame-difference image block by block, and mark blocks whose variance exceeds a threshold T_δ as moving regions.
f_n(x_i, y_i) / f_{n+1}(x_i, y_i) = f_n(x_j, y_j) / f_{n+1}(x_j, y_j)    (1)
δ = Σ_{(x,y)∈A} f_n(x, y) / Σ_{(x,y)∈A} f_{n+1}(x, y)    (2)
f′_{n+1}(x, y) = f_{n+1}(x, y) × δ    (3)
where f_n(x_i, y_i) denotes the pixel value at coordinate (x_i, y_i) in frame n of the video, f_{n+1}(x_i, y_i) the pixel value at (x_i, y_i) in frame n+1, f_n(x_j, y_j) the pixel value at (x_j, y_j) in frame n, and f_{n+1}(x_j, y_j) the pixel value at (x_j, y_j) in frame n+1; Σ_{(x,y)∈A} f_n(x, y) is the sum of pixel values over the local region A of frame n, and Σ_{(x,y)∈A} f_{n+1}(x, y) the corresponding sum for frame n+1; δ is the illumination change ratio; f′_{n+1}(x, y) is the frame n+1 image estimated under local illumination consistency.
4) Localization of over-bright and over-dark regions: the dynamic range of a camera is limited. When the scene simultaneously contains low-illumination and high-illumination regions that exceed the camera's dynamic adjustment range, the over-bright and over-dark regions of the image are clipped, and the noise there is clipped as well, so the noise intensity in over-bright and over-dark regions is lower than in other regions.
To make the noise estimate more robust, the interference of over-bright and over-dark regions should be filtered out during noise estimation. The over-bright regions are defined as regions with pixel values between 225 and 255, and the over-dark regions as regions with pixel values between 0 and 30.
5) Noise mask: considering the impact of object motion and brightness on noise estimation, noise intensity is estimated only over the non-moving and non-over-bright, non-over-dark regions; these regions together are recorded as the noise mask.
6) Just-noticeable-difference model (JND): the basic idea of this model is that, under a given background luminance, the human eye cannot perceive pixel-value changes within a certain range. Experiments show that there is a critical point of perceivable change, called the just-noticeable point; under different backgrounds it is given by formula (4), where f_A denotes the background luminance of the region.
JND = 4.0 + 0.12301 × f_A    (4)
7) Inter-frame non-motion change: the frame difference within the noise mask region is defined as the inter-frame non-motion change.
8) Visible degree of the noise: compare the inter-frame non-motion change with the JND image to obtain the visible degree of the noise, recorded as u. The non-motion change here still includes light changes, which are filtered out in the next step.
9) Inter-frame structural change: apply the operation of step 3) to the adjacent frames of the original, un-denoised video, and record the resulting noise variance as the visual effect of the noise, denoted v. A larger variance indicates a stronger visual effect of the noise. Light changes are filtered out well here, because the inter-frame variance they cause is very small.
10) Fitting the noise intensity features to the mean noise score MOS: steps 8) and 9) yield the noise intensity feature values relevant to human visual experience, namely u and v. The relation between (u, v) and MOS is nonlinear and is fitted with formula (5),
MOS = α + β × u^χ × v^γ    (5)
where α, β, χ and γ are the parameters of the nonlinear model.
11) Noise intensity calculation: for a video to be evaluated, compute the feature values u and v by steps 2) to 9) and substitute them into formula (5) to obtain a noise intensity value that accords with human-eye visual characteristics.
The above is only a preferred embodiment of the present invention and is not intended to limit its scope of protection. It should be understood that the invention is not limited to the implementations described herein, which are presented to help those skilled in the art practice the invention.

Claims (2)

1. A video noise estimation method based on human-eye visual characteristics, characterized in that the method comprises the following steps:
Step 1) Manual annotation of noise intensity: prepare a video library containing videos of different scenes and different noise levels; several different people subjectively score the noise level of each video on a scale of 0 to 100, and the mean score MOS of each video is its human-eye subjective noise level;
Step 2) Denoising: to locate moving regions more accurately and reduce the interference of noise with the localization, first apply a denoising pass to each frame of the video;
Step 3) Moving-region localization: inter-frame pixel changes fall into three cases: noise interference, light change, or object motion; since the video has been denoised in Step 2, the inter-frame pixel changes here only need to distinguish object motion from light change; with a fixed camera, when the light changes, the inter-frame pixel changes of a local region follow the rule of formula (1); assuming local illumination consistency, normalize the pixels of the two adjacent frames as in formulas (2) and (3), take the frame difference between the normalized image f′_{n+1} and f_n, compute the variance of the frame-difference image block by block, and mark blocks whose variance exceeds a threshold T_δ as moving regions;
f_n(x_i, y_i) / f_{n+1}(x_i, y_i) = f_n(x_j, y_j) / f_{n+1}(x_j, y_j)    (1)
δ = Σ_{(x,y)∈A} f_n(x, y) / Σ_{(x,y)∈A} f_{n+1}(x, y)    (2)
f′_{n+1}(x, y) = f_{n+1}(x, y) × δ    (3)
where f_n(x_i, y_i) denotes the pixel value at coordinate (x_i, y_i) in frame n of the video, f_{n+1}(x_i, y_i) the pixel value at (x_i, y_i) in frame n+1, f_n(x_j, y_j) the pixel value at (x_j, y_j) in frame n, and f_{n+1}(x_j, y_j) the pixel value at (x_j, y_j) in frame n+1; Σ_{(x,y)∈A} f_n(x, y) is the sum of pixel values over the local region A of frame n, and Σ_{(x,y)∈A} f_{n+1}(x, y) the corresponding sum for frame n+1; δ is the illumination change ratio; f′_{n+1}(x, y) is the frame n+1 image estimated under local illumination consistency;
Step 4) Localization of over-bright and over-dark regions: the dynamic range of a camera is limited; when the scene simultaneously contains low-illumination and high-illumination regions that exceed the camera's dynamic adjustment range, the over-bright and over-dark regions of the image are clipped, and the noise there is clipped as well, so the noise intensity in over-bright and over-dark regions is lower than in other regions;
to make the noise estimate more robust, the interference of over-bright and over-dark regions should be filtered out during noise estimation; the over-bright regions are defined as regions with pixel values between 225 and 255, and the over-dark regions as regions with pixel values between 0 and 30;
Step 5) Noise mask: considering the impact of object motion and brightness on noise estimation, noise intensity is estimated only over the non-moving and non-over-bright, non-over-dark regions; these regions together are recorded as the noise mask region;
Step 6) Just-noticeable-difference model (JND): the basic idea of this model is that, under a given background luminance, the human eye cannot perceive pixel-value changes within a certain range; experiments show that there is a critical point of perceivable change, called the just-noticeable point; under different backgrounds it is given by formula (4), where f_A denotes the background luminance of the region;
JND = 4.0 + 0.12301 × f_A    (4)
Step 7) Inter-frame non-motion change: the frame difference within the noise mask region is defined as the inter-frame non-motion change;
Step 8) Visible degree of the noise: compare the inter-frame non-motion change with the JND image to obtain the visible degree of the noise, recorded as u; the non-motion change here still includes light changes, which are filtered out in the next step;
Step 9) Inter-frame structural change: apply the operation of Step 3 to the adjacent frames of the original, un-denoised video, and record the resulting noise variance as the visual effect of the noise, denoted v; a larger variance indicates a stronger visual effect of the noise; light changes are filtered out well here, because the inter-frame variance they cause is very small;
Step 10) Fitting the noise intensity features to the mean noise score MOS: Steps 8 and 9 yield the noise intensity feature values relevant to human visual experience, namely u and v; the relation between (u, v) and MOS is nonlinear and is fitted with formula (5),
MOS = α + β × u^χ × v^γ    (5)
where α, β, χ and γ are the parameters of the nonlinear model;
Step 11) Noise intensity calculation: for a video to be evaluated, compute the feature values u and v by Steps 2 to 9 and substitute them into formula (5) to obtain a noise intensity value that accords with human-eye visual characteristics.
2. The video noise estimation method based on human-eye visual characteristics according to claim 1, characterized in that the denoising in Step 2 uses a 3 × 3 mean filter.
CN201210242301.6A 2012-07-13 2012-07-13 Video noise estimation method based on human eye visual characteristics Active CN102790844B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210242301.6A CN102790844B (en) 2012-07-13 2012-07-13 Video noise estimation method based on human eye visual characteristics


Publications (2)

Publication Number Publication Date
CN102790844A CN102790844A (en) 2012-11-21
CN102790844B (en) 2014-08-13

Family

ID=47156140

Country Status (1): CN (1) CN102790844B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9761006B2 (en) * 2013-06-28 2017-09-12 Koninklijke Philips N.V. Methods of utilizing image noise information
CN103955921B (en) * 2014-04-17 2017-04-12 杭州电子科技大学 Image noise estimation method based on human eye visual features and partitioning analysis method
CN108805851B (en) * 2017-04-26 2021-03-02 杭州海康威视数字技术股份有限公司 Method and device for evaluating image time domain noise
CN114155161B (en) * 2021-11-01 2023-05-09 富瀚微电子(成都)有限公司 Image denoising method, device, electronic equipment and storage medium

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102075786A (en) * 2011-01-19 2011-05-25 宁波大学 Method for objectively evaluating image quality
CN102142145A (en) * 2011-03-22 2011-08-03 宁波大学 Image quality objective evaluation method based on human eye visual characteristics
CN102231844A (en) * 2011-07-21 2011-11-02 西安电子科技大学 Video image fusion performance evaluation method based on structure similarity and human vision




Legal Events

C06 / PB01: Publication
C10 / SE01: Entry into force of request for substantive examination
C53 / CB02 / COR: Correction of applicant information. Applicant corrected from HANGZHOU ICARE VISION TECHNOLOGY CO., LTD. to ZHEJIANG ICARE VISION TECHNOLOGY CO., LTD.; address after: No. 998 West Street, Wuchang, Yuhang District 311121, Hangzhou City, Zhejiang Province, building 7 East; address before: No. 398 Tian Shan Road, Xihu District, Hangzhou, Zhejiang 310013, Kun building, fourth floor, South Block
C14 / GR01: Patent grant
PE01: Entry into force of the registration of the contract for pledge of patent right. Effective date of registration: 2019-08-20; granted publication date: 2014-08-13; pledgee: Hangzhou Yuhang Financial Holding Co., Ltd.; pledgor: ZHEJIANG ICARE VISION TECHNOLOGY Co., Ltd.; registration number: Y2019330000016
PC01: Cancellation of the registration of the contract for pledge of patent right. Date of cancellation: 2020-09-17; registration number: Y2019330000016
PE01: Entry into force of the registration of the contract for pledge of patent right. Effective date of registration: 2020-09-21; pledgee: Hangzhou Yuhang Financial Holding Co., Ltd.; pledgor: ZHEJIANG ICARE VISION TECHNOLOGY Co., Ltd.; registration number: Y2020330000737