CN105741322A - Region segmentation method of field of view on the basis of video feature layer fusion

Region segmentation method of field of view on the basis of video feature layer fusion

Info

Publication number
CN105741322A
Authority
CN
China
Prior art keywords
pixel
video
color
value
frame
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201610072608.4A
Other languages
Chinese (zh)
Other versions
CN105741322B (en)
Inventor
张睿 (Zhang Rui)
童玉娟 (Tong Yujuan)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Quzhou University
Original Assignee
Quzhou University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Quzhou University
Priority to CN201610072608.4A
Publication of CN105741322A
Application granted
Publication of CN105741322B
Expired - Fee Related
Anticipated expiration


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a field-of-view region segmentation method based on video feature-layer fusion. The method comprises the following steps: computing the color feature of each pixel in a video; computing the dynamics feature of each pixel in the video; computing the texture feature of each pixel in the video; and performing feature-layer fusion of the dynamics, color, and texture features of each pixel in the video, then segmenting the field of view in the video into regions according to the fused features. The method jointly exploits the dynamics feature of video pixels along the time dimension and their color and texture features along the spatial dimensions, thereby improving the effectiveness and correctness of field-of-view region segmentation.

Description

Field-of-view region segmentation method based on video feature-layer fusion
Technical field
The present invention relates to the field of video analysis and processing, and in particular to a field-of-view region segmentation method based on video feature-layer fusion.
Background technology
As video technology matures and its cost declines, video analysis and processing techniques have been widely applied in scientific research, industry, and everyday life. Segmenting the field of view presented in a video into regions helps extract valuable information from the video and is therefore an important video analysis and processing technique.
At present, region segmentation methods for the field of view in video mainly borrow from image region segmentation techniques. Common image region segmentation techniques include methods based on color features, methods based on texture features, and methods based on shape features. Grafting image region segmentation techniques directly onto video, however, ignores the rich dynamic features contained in video and the temporal variation of video content, which inevitably degrades the effectiveness and correctness of field-of-view segmentation.
Summary of the invention
The object of the present invention is to overcome the technical problem that existing field-of-view segmentation methods ignore the rich dynamic features contained in video and the temporal variation of video content, resulting in insufficient effectiveness and correctness of field-of-view segmentation. The invention provides a field-of-view region segmentation method based on video feature-layer fusion, which jointly exploits the dynamics feature of video pixels along the time dimension and their color and texture features along the spatial dimensions, improving the effectiveness and correctness of field-of-view segmentation.
To solve the above problems, the present invention is realized through the following technical solution:
A field-of-view region segmentation method based on video feature-layer fusion according to the present invention comprises the following steps:
S1: Compute the color feature of each pixel in the video;
S2: Compute the dynamics feature of each pixel in the video;
S3: Compute the texture feature of each pixel in the video;
S4: Perform feature-layer fusion of the dynamics, color, and texture features of each pixel in the video, and segment the field of view in the video into regions according to the fused features.
In this technical solution, the field-of-view region segmentation method based on video feature-layer fusion jointly exploits the dynamics feature of video pixels along the time dimension and their color and texture features along the spatial dimensions, thereby making use of the joint spatio-temporal distribution of visual information in video. It overcomes the inability of image region segmentation methods, when applied to video, to exploit the temporal dynamics of visual information, and is applicable to field-of-view segmentation of fixed-view color video of any resolution.
Preferably, step S1 comprises the following steps:
S11: For each pixel of the video, generate a color feature vector based on the RGB color space:
f1(i,j)|t = (R(i,j)|t, G(i,j)|t, B(i,j)|t)
where R(i,j)|t, G(i,j)|t, and B(i,j)|t denote the pixel values on the red, green, and blue channels, respectively, of the pixel at coordinates (i,j) in frame t of the video;
S12: Convert the video from the RGB color space to the HSV color space;
S13: For each pixel of the video, generate a color feature vector based on the HSV color space:
f2(i,j)|t = (H(i,j)|t, S(i,j)|t, V(i,j)|t)
where H(i,j)|t, S(i,j)|t, and V(i,j)|t denote the pixel values on the hue, saturation, and value (brightness) channels, respectively, of the pixel at coordinates (i,j) in frame t of the video;
S14: Concatenate the RGB-based color feature vector with the HSV-based color feature vector to generate a color feature vector based on the two color spaces:
f3(i,j)|t = (R(i,j)|t, G(i,j)|t, B(i,j)|t, H(i,j)|t, S(i,j)|t, V(i,j)|t).
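To make steps S11–S14 concrete, the following is a minimal sketch in Python using OpenCV and NumPy. The function name build_color_features and its frame argument are illustrative, not part of the patent; the patent specifies only the two color spaces and their concatenation.

```python
import cv2
import numpy as np

def build_color_features(frame_bgr):
    """Sketch of steps S11-S14: per-pixel color feature vectors f3(i,j)|t.

    frame_bgr: one video frame as an H x W x 3 uint8 array (OpenCV's BGR order).
    Returns an H x W x 6 float array whose last axis is (R, G, B, H, S, V).
    """
    # S11: RGB feature vector (OpenCV stores channels in BGR order, so reverse them).
    rgb = frame_bgr[:, :, ::-1].astype(np.float32)
    # S12: convert the frame to the HSV color space.
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV).astype(np.float32)
    # S13-S14: concatenate the two per-pixel vectors into f3(i,j)|t.
    return np.concatenate([rgb, hsv], axis=2)
```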
Preferably, step S2 comprises the following steps:
S21: Convert the video to a grayscale video;
S22: Build a background model for each pixel of the grayscale video;
S23: Count the number of significant gray-value changes occurring at each pixel of the grayscale video, a significant gray-value change being defined as a change in gray value at a pixel whose magnitude exceeds the normal variation range of that pixel's gray value as set by the background model;
S24: Compute the dynamics of each pixel in the grayscale video according to:
D(i,j)|t = Ψ(i,j)|t / t
where Ψ(i,j)|t denotes the number of significant gray-value changes occurring at the pixel at coordinates (i,j) of the grayscale video from the start frame up to frame t, and D(i,j)|t denotes the frequency of significant gray-value changes at that pixel over the same period, i.e., the dynamics of the pixel at (i,j).
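The patent does not prescribe a particular background model for step S22, so the sketch below assumes a simple per-pixel running Gaussian model and counts a change as significant when it deviates from the background mean by more than k standard deviations; the class name, the learning rate alpha, and the threshold k are all illustrative.

```python
import numpy as np

class PixelDynamics:
    """Sketch of steps S21-S24: per-pixel dynamics D(i,j)|t."""

    def __init__(self, shape, alpha=0.05, k=2.5):
        self.mean = np.zeros(shape, np.float32)           # background gray-level estimate
        self.var = np.full(shape, 15.0 ** 2, np.float32)  # its variance (initial guess)
        self.counts = np.zeros(shape, np.float32)         # Psi(i,j)|t
        self.t = 0
        self.alpha, self.k = alpha, k

    def update(self, gray):
        """gray: H x W uint8 grayscale frame. Returns the dynamics map D(i,j)|t."""
        g = gray.astype(np.float32)
        if self.t == 0:
            self.mean = g.copy()  # initialize the background from the first frame
        else:
            # S23: a change is "significant" when it leaves the normal range
            # set by the background model (here: k standard deviations).
            self.counts += np.abs(g - self.mean) > self.k * np.sqrt(self.var)
            # Slowly adapt the background model to the new frame.
            self.mean += self.alpha * (g - self.mean)
            self.var += self.alpha * ((g - self.mean) ** 2 - self.var)
        self.t += 1
        # S24: D(i,j)|t = Psi(i,j)|t / t.
        return self.counts / self.t
```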
Preferably, step S3 comprises the following steps:
S31: Convert the video to a grayscale video, use the original LBP operator to compute the LBP texture value of the pixel at coordinates (i,j) in frame t of the grayscale video, and take it as the first texture feature value W1(i,j)|t of that pixel;
S32: Use the circular LBP operator to compute the LBP texture value of the pixel at coordinates (i,j) in frame t of the grayscale video, and take it as the second texture feature value W2(i,j)|t of that pixel;
S33: Combine the first and second texture feature values of the pixel at coordinates (i,j) in frame t of the grayscale video into the texture feature vector of that pixel, that is: f4(i,j)|t = (W1(i,j)|t, W2(i,j)|t).
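A minimal sketch of the two texture values follows. The original 3×3 operator is implemented directly; the circular operator delegates to scikit-image's local_binary_pattern, an assumed dependency. The sampling parameters P=8 and R=2 are illustrative choices, since the patent defines the two operators only through Figs. 2 and 3.

```python
import numpy as np
from skimage.feature import local_binary_pattern

def original_lbp(gray):
    """S31: classic 3x3 LBP - threshold the 8 square neighbors at the center value."""
    g = gray.astype(np.int32)
    h, w = g.shape
    center = g[1:-1, 1:-1]
    out = np.zeros_like(center)
    # Clockwise neighbor offsets starting at the top-left, each contributing one bit.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1), (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbor = g[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        out |= (neighbor >= center).astype(np.int32) << bit
    return np.pad(out, 1)  # pad the border back to the original frame size

def texture_features(gray, P=8, R=2):
    """S31-S33: per-pixel texture vector f4(i,j)|t = (W1, W2)."""
    w1 = original_lbp(gray).astype(np.float32)
    # S32: circular LBP with P interpolated samples on a circle of radius R.
    w2 = local_binary_pattern(gray, P, R).astype(np.float32)
    return np.stack([w1, w2], axis=2)
```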
Preferably, step S4 comprises the following steps:
S41: Perform feature-layer fusion of the dynamics, color, and texture features of each pixel in the video to obtain a fused feature vector;
S42: Apply a clustering method to perform automatic cluster analysis of the fused feature vectors of all pixels in frame t of the video;
S43: Assign the pixels corresponding to all fused feature vectors grouped into one cluster to the same region, completing the region segmentation of the field of view in the video.
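Steps S41–S43 can be sketched as follows. The patent calls for automatic cluster analysis without naming an algorithm, so the use of k-means with a fixed number of regions is an illustrative assumption, as are the function name and the per-feature normalization.

```python
import numpy as np
from sklearn.cluster import KMeans

def segment_frame(dynamics, color6, texture2, n_regions=4):
    """S41-S43: fuse the per-pixel features and cluster them into regions.

    dynamics: H x W map D(i,j)|t; color6: H x W x 6 array (R,G,B,H,S,V);
    texture2: H x W x 2 array (W1,W2). Returns an H x W integer label map.
    """
    h, w = dynamics.shape
    # S41: 9-dimensional fused feature vector F(i,j)|t per pixel.
    fused = np.concatenate([dynamics[:, :, None], color6, texture2], axis=2)
    flat = fused.reshape(-1, 9)
    # Standardize each feature so no single one dominates the distance metric
    # (a practical detail the patent does not prescribe).
    flat = (flat - flat.mean(axis=0)) / (flat.std(axis=0) + 1e-8)
    # S42: automatic cluster analysis of all fused vectors in frame t.
    labels = KMeans(n_clusters=n_regions, n_init=10).fit_predict(flat)
    # S43: pixels whose vectors fall in the same cluster form one region.
    return labels.reshape(h, w)
```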
The substantial effect of the present invention is that it jointly exploits multiple visual features of video along the time and spatial dimensions, including the pixel dynamics feature along the time dimension and the color and texture features along the spatial dimensions. By adding the pixel dynamics feature, a key piece of information absent from still images, it overcomes the insufficient effectiveness and correctness that result from applying image region segmentation methods to video.
Brief description of the drawings
Fig. 1 is the workflow diagram of the present invention;
Fig. 2 is a schematic diagram of the original LBP operator used in step S31;
Fig. 3 is a schematic diagram of the circular LBP operator used in step S32.
Detailed description of the invention
The technical solution of the present invention is described in further detail below through an embodiment, with reference to the accompanying drawings.
Embodiment: A field-of-view region segmentation method based on video feature-layer fusion according to this embodiment, as shown in Fig. 1, comprises the following steps:
S1: Compute the color feature of each pixel in the video;
S2: Compute the dynamics feature of each pixel in the video;
S3: Compute the texture feature of each pixel in the video;
S4: Perform feature-layer fusion of the dynamics, color, and texture features of each pixel in the video, and segment the field of view in the video into regions according to the fused features.
Step S1 comprises the following steps:
S11: For each pixel of the video, generate a color feature vector based on the RGB color space:
f1(i,j)|t = (R(i,j)|t, G(i,j)|t, B(i,j)|t)
where R(i,j)|t, G(i,j)|t, and B(i,j)|t denote the pixel values on the red, green, and blue channels, respectively, of the pixel at coordinates (i,j) in frame t of the video;
S12: Convert the video from the RGB color space to the HSV color space;
S13: For each pixel of the video, generate a color feature vector based on the HSV color space:
f2(i,j)|t = (H(i,j)|t, S(i,j)|t, V(i,j)|t)
where H(i,j)|t, S(i,j)|t, and V(i,j)|t denote the pixel values on the hue, saturation, and value (brightness) channels, respectively, of the pixel at coordinates (i,j) in frame t of the video;
S14: Concatenate the RGB-based color feature vector with the HSV-based color feature vector to generate a color feature vector based on the two color spaces:
f3(i,j)|t = (R(i,j)|t, G(i,j)|t, B(i,j)|t, H(i,j)|t, S(i,j)|t, V(i,j)|t).
Step S2 comprises the following steps:
S21: Perform grayscale conversion on the video to obtain a grayscale video;
S22: Build a background model for each pixel of the grayscale video;
S23: Count the number of significant gray-value changes occurring at each pixel of the grayscale video. A significant gray-value change is defined as a change in gray value at a pixel whose magnitude exceeds the normal variation range of that pixel's gray value as set by the background model; that is, each time the magnitude of a gray-value change at a pixel exceeds the normal variation range set by the background model for that pixel, the significant gray-value change count of that pixel is incremented by 1;
S24: Compute the dynamics of each pixel in the grayscale video according to:
D(i,j)|t = Ψ(i,j)|t / t
where Ψ(i,j)|t denotes the number of significant gray-value changes occurring at the pixel at coordinates (i,j) of the grayscale video from the start frame up to frame t, and D(i,j)|t denotes the frequency of significant gray-value changes at that pixel over the same period, i.e., the dynamics of the pixel at (i,j). The dynamics of a pixel is the frequency with which significant gray-value changes occur at it: low dynamics indicates little scene change at that pixel in the video, while high dynamics indicates substantial scene change at that pixel.
Step S3 comprises the following steps:
S31: Convert the video to a grayscale video, use the original LBP operator, shown in Fig. 2, to compute the LBP texture value of the pixel at coordinates (i,j) in frame t of the grayscale video, and take it as the first texture feature value W1(i,j)|t of that pixel;
S32: Use the circular LBP operator, shown in Fig. 3, to compute the LBP texture value of the pixel at coordinates (i,j) in frame t of the grayscale video, and take it as the second texture feature value W2(i,j)|t of that pixel;
S33: Combine the first and second texture feature values of the pixel at coordinates (i,j) in frame t of the grayscale video into the texture feature vector of that pixel, that is: f4(i,j)|t = (W1(i,j)|t, W2(i,j)|t).
Step S4 comprises the following steps:
S41: Perform feature-layer fusion of the dynamics, color, and texture features of each pixel in the video to obtain the fused feature vector:
F(i,j)|t = (D(i,j)|t, R(i,j)|t, G(i,j)|t, B(i,j)|t, H(i,j)|t, S(i,j)|t, V(i,j)|t, W1(i,j)|t, W2(i,j)|t);
S42: Apply a clustering method to perform automatic cluster analysis of the fused feature vectors F(i,j)|t of all pixels in frame t of the video;
S43: Assign the pixels corresponding to all fused feature vectors grouped into one cluster to the same region, completing the region segmentation of the field of view in the video.
The dynamics of a pixel is the frequency with which significant gray-value changes occur at it: low dynamics indicates little scene change at that pixel, while high dynamics indicates substantial scene change. By fusing features at the feature layer, the method jointly exploits the dynamics feature of video pixels along the time dimension and their color and texture features along the spatial dimensions, making use of the joint spatio-temporal distribution of visual information in video. It overcomes the inability of image region segmentation methods, when applied to video, to exploit the temporal dynamics of visual information, and is applicable to field-of-view segmentation of fixed-view color video of any resolution.
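Putting the pieces together, a minimal end-to-end driver might look as follows, assuming the sketches above (build_color_features, PixelDynamics, texture_features, segment_frame) and a fixed-view color video readable by OpenCV; all names are illustrative.

```python
import cv2

def segment_video_fov(path, n_regions=4):
    """Run the S1-S4 pipeline over a video and segment its final frame."""
    cap = cv2.VideoCapture(path)
    dyn_model, last = None, None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)  # S21: grayscale video
        if dyn_model is None:
            dyn_model = PixelDynamics(gray.shape)
        d = dyn_model.update(gray)  # S2: dynamics accumulated from the start frame
        last = (frame, gray, d)
    cap.release()
    if last is None:
        raise ValueError("no frames could be read from " + path)
    frame, gray, d = last
    # S1, S3, S4 evaluated at the final frame t.
    return segment_frame(d, build_color_features(frame), texture_features(gray), n_regions)

# Usage (illustrative): label_map = segment_video_fov("scene.avi")
```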

Claims (5)

1. A field-of-view region segmentation method based on video feature-layer fusion, characterized in that it comprises the following steps:
S1: Compute the color feature of each pixel in the video;
S2: Compute the dynamics feature of each pixel in the video;
S3: Compute the texture feature of each pixel in the video;
S4: Perform feature-layer fusion of the dynamics, color, and texture features of each pixel in the video, and segment the field of view in the video into regions according to the fused features.
2. The field-of-view region segmentation method based on video feature-layer fusion according to claim 1, characterized in that said step S1 comprises the following steps:
S11: For each pixel of the video, generate a color feature vector based on the RGB color space:
f1(i,j)|t = (R(i,j)|t, G(i,j)|t, B(i,j)|t)
where R(i,j)|t, G(i,j)|t, and B(i,j)|t denote the pixel values on the red, green, and blue channels, respectively, of the pixel at coordinates (i,j) in frame t of the video;
S12: Convert the video from the RGB color space to the HSV color space;
S13: For each pixel of the video, generate a color feature vector based on the HSV color space:
f2(i,j)|t = (H(i,j)|t, S(i,j)|t, V(i,j)|t)
where H(i,j)|t, S(i,j)|t, and V(i,j)|t denote the pixel values on the hue, saturation, and value (brightness) channels, respectively, of the pixel at coordinates (i,j) in frame t of the video;
S14: Concatenate the RGB-based color feature vector with the HSV-based color feature vector to generate a color feature vector based on the two color spaces:
f3(i,j)|t = (R(i,j)|t, G(i,j)|t, B(i,j)|t, H(i,j)|t, S(i,j)|t, V(i,j)|t).
3. The field-of-view region segmentation method based on video feature-layer fusion according to claim 1, characterized in that said step S2 comprises the following steps:
S21: Convert the video to a grayscale video;
S22: Build a background model for each pixel of the grayscale video;
S23: Count the number of significant gray-value changes occurring at each pixel of the grayscale video, a significant gray-value change being defined as a change in gray value at a pixel whose magnitude exceeds the normal variation range of that pixel's gray value as set by the background model;
S24: Compute the dynamics of each pixel in the grayscale video according to:
D(i,j)|t = Ψ(i,j)|t / t
where Ψ(i,j)|t denotes the number of significant gray-value changes occurring at the pixel at coordinates (i,j) of the grayscale video from the start frame up to frame t, and D(i,j)|t denotes the frequency of significant gray-value changes at that pixel over the same period, i.e., the dynamics of the pixel at (i,j).
4. The field-of-view region segmentation method based on video feature-layer fusion according to claim 1, 2, or 3, characterized in that said step S3 comprises the following steps:
S31: Convert the video to a grayscale video, use the original LBP operator to compute the LBP texture value of the pixel at coordinates (i,j) in frame t of the grayscale video, and take it as the first texture feature value W1(i,j)|t of that pixel;
S32: Use the circular LBP operator to compute the LBP texture value of the pixel at coordinates (i,j) in frame t of the grayscale video, and take it as the second texture feature value W2(i,j)|t of that pixel;
S33: Combine the first and second texture feature values of the pixel at coordinates (i,j) in frame t of the grayscale video into the texture feature vector of that pixel, that is: f4(i,j)|t = (W1(i,j)|t, W2(i,j)|t).
5. The field-of-view region segmentation method based on video feature-layer fusion according to claim 1, 2, or 3, characterized in that said step S4 comprises the following steps:
S41: Perform feature-layer fusion of the dynamics, color, and texture features of each pixel in the video to obtain a fused feature vector;
S42: Apply a clustering method to perform automatic cluster analysis of the fused feature vectors of all pixels in frame t of the video;
S43: Assign the pixels corresponding to all fused feature vectors grouped into one cluster to the same region, completing the region segmentation of the field of view in the video.
CN201610072608.4A 2016-02-01 2016-02-01 A field-of-view region segmentation method based on video feature-layer fusion Expired - Fee Related CN105741322B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201610072608.4A CN105741322B (en) 2016-02-01 2016-02-01 A field-of-view region segmentation method based on video feature-layer fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201610072608.4A CN105741322B (en) 2016-02-01 2016-02-01 A field-of-view region segmentation method based on video feature-layer fusion

Publications (2)

Publication Number Publication Date
CN105741322A true CN105741322A (en) 2016-07-06
CN105741322B CN105741322B (en) 2018-08-03

Family

ID=56242193

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201610072608.4A Expired - Fee Related CN105741322B (en) 2016-02-01 2016-02-01 A field-of-view region segmentation method based on video feature-layer fusion

Country Status (1)

Country Link
CN (1) CN105741322B (en)

Cited By (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509830A (en) * 2017-02-28 2018-09-07 华为技术有限公司 A kind of video data handling procedure and equipment
CN110796073A (en) * 2019-10-28 2020-02-14 衢州学院 Method and device for detecting specific target area in non-texture scene video
CN110807783A (en) * 2019-10-28 2020-02-18 衢州学院 Efficient field-of-view region segmentation method and device for achromatic long video
CN110807398A (en) * 2019-10-28 2020-02-18 衢州学院 Method and device for dividing field area
CN110826445A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for detecting specific target area in colorless scene video
CN110826446A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for segmenting field of view region of texture-free scene video
CN110827293A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for segmenting achromatic scene area based on decision-making layer fusion
CN110866460A (en) * 2019-10-28 2020-03-06 衢州学院 Method and device for detecting specific target area in complex scene video
CN110910399A (en) * 2019-10-28 2020-03-24 衢州学院 Non-texture scene region segmentation method and device based on decision layer fusion
CN110910398A (en) * 2019-10-28 2020-03-24 衢州学院 Video complex scene region segmentation method and device based on decision layer fusion
CN111028262A (en) * 2019-12-06 2020-04-17 衢州学院 Multi-channel composite high-definition high-speed video background modeling method

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101316328A (en) * 2007-05-29 2008-12-03 中国科学院计算技术研究所 News anchor lens detection method based on space-time strip pattern analysis
CN102426583A (en) * 2011-10-10 2012-04-25 北京工业大学 Chinese medicine tongue manifestation retrieval method based on image content analysis
CN102915544A (en) * 2012-09-20 2013-02-06 武汉大学 Video image motion target extracting method based on pattern detection and color segmentation
CN105118049A (en) * 2015-07-22 2015-12-02 东南大学 Image segmentation method based on super pixel clustering

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101316328A (en) * 2007-05-29 2008-12-03 中国科学院计算技术研究所 News anchor lens detection method based on space-time strip pattern analysis
CN102426583A (en) * 2011-10-10 2012-04-25 北京工业大学 Chinese medicine tongue manifestation retrieval method based on image content analysis
CN102915544A (en) * 2012-09-20 2013-02-06 武汉大学 Video image motion target extracting method based on pattern detection and color segmentation
CN105118049A (en) * 2015-07-22 2015-12-02 东南大学 Image segmentation method based on super pixel clustering

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
康达辉 (Kang Dahui): "Research on Video Object Segmentation Algorithms Based on SOFM", China Master's Theses Full-text Database, Information Science and Technology *
王兰莎 (Wang Lansha): "Feature Extraction and Classification of Multi-object Complex Mining Images", China Master's Theses Full-text Database, Information Science and Technology *
郭恒光 等 (Guo Hengguang et al.): "Color Image Segmentation of Wear Particles Based on Color and Texture Features", Lubrication Engineering (润滑与密封) *

Cited By (17)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108509830A (en) * 2017-02-28 2018-09-07 华为技术有限公司 A kind of video data handling procedure and equipment
CN108509830B (en) * 2017-02-28 2020-12-01 华为技术有限公司 Video data processing method and device
CN110826445A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for detecting specific target area in colorless scene video
CN110910398A (en) * 2019-10-28 2020-03-24 衢州学院 Video complex scene region segmentation method and device based on decision layer fusion
CN110807783A (en) * 2019-10-28 2020-02-18 衢州学院 Efficient field-of-view region segmentation method and device for achromatic long video
CN110826446A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for segmenting field of view region of texture-free scene video
CN110827293A (en) * 2019-10-28 2020-02-21 衢州学院 Method and device for segmenting achromatic scene area based on decision-making layer fusion
CN110866460A (en) * 2019-10-28 2020-03-06 衢州学院 Method and device for detecting specific target area in complex scene video
CN110910399A (en) * 2019-10-28 2020-03-24 衢州学院 Non-texture scene region segmentation method and device based on decision layer fusion
CN110807398A (en) * 2019-10-28 2020-02-18 衢州学院 Method and device for dividing field area
CN110807783B (en) * 2019-10-28 2023-07-18 衢州学院 Efficient visual field region segmentation method and device for achromatic long video
CN110826446B (en) * 2019-10-28 2020-08-21 衢州学院 Method and device for segmenting field of view region of texture-free scene video
CN110796073A (en) * 2019-10-28 2020-02-14 衢州学院 Method and device for detecting specific target area in non-texture scene video
CN110826445B (en) * 2019-10-28 2021-04-23 衢州学院 Method and device for detecting specific target area in colorless scene video
CN110796073B (en) * 2019-10-28 2021-05-25 衢州学院 Method and device for detecting specific target area in non-texture scene video
CN110910398B (en) * 2019-10-28 2021-07-20 衢州学院 Video complex scene region segmentation method and device based on decision layer fusion
CN111028262A (en) * 2019-12-06 2020-04-17 衢州学院 Multi-channel composite high-definition high-speed video background modeling method

Also Published As

Publication number Publication date
CN105741322B (en) 2018-08-03

Similar Documents

Publication Publication Date Title
CN105741322A (en) Region segmentation method of field of view on the basis of video feature layer fusion
CN102611828B (en) Real-time enhanced processing system for foggy continuous video image
CN106550244A (en) The picture quality enhancement method and device of video image
EP3678056B1 (en) Skin color detection method and device and storage medium
CN103248906B (en) Method and system for acquiring depth map of binocular stereo video sequence
CN105069801A (en) Method for preprocessing video image based on image quality diagnosis
CN103871041A (en) Image super-resolution reconstruction method based on cognitive regularization parameters
CN106937120A (en) Object-based monitor video method for concentration
CN103914820A (en) Image haze removal method and system based on image layer enhancement
CN103886565A (en) Nighttime color image enhancement method based on purpose optimization and histogram equalization
CN102457724B (en) Image motion detecting system and method
Lee et al. Color image enhancement using histogram equalization method without changing hue and saturation
CN103065282A (en) Image fusion method based on sparse linear system
CN103106671B (en) Method for detecting interested region of image based on visual attention mechanism
CN102223545B (en) Rapid multi-view video color correction method
CN110717892A (en) Tone mapping image quality evaluation method
CN110135274B (en) Face recognition-based people flow statistics method
JP2013196681A (en) Method and device for extracting color feature
CN102163277A (en) Area-based complexion dividing method
CN109558506B (en) Image retrieval method based on color aggregation vector
CN104952052A (en) Method for enhancing EMCCD image
CN103971365A (en) Extraction method for image saliency map
CN111079689A (en) Fingerprint image enhancement method
CN110910398B (en) Video complex scene region segmentation method and device based on decision layer fusion
Parihar et al. UndarkGAN: Low-light Image Enhancement with Cycle-consistent Adversarial Networks

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20180803

Termination date: 20210201