CN110084752B - Image super-resolution reconstruction method based on edge direction and K-means clustering - Google Patents

Image super-resolution reconstruction method based on edge direction and K-means clustering

Info

Publication number
CN110084752B
CN110084752B
Authority
CN
China
Prior art keywords
resolution image
low
resolution
image
image block
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910371191.5A
Other languages
Chinese (zh)
Other versions
CN110084752A (en)
Inventor
李晓峰
李爽
周宁
许埕秸
傅志中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
University of Electronic Science and Technology of China
Original Assignee
University of Electronic Science and Technology of China
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by University of Electronic Science and Technology of China filed Critical University of Electronic Science and Technology of China
Priority to CN201910371191.5A priority Critical patent/CN110084752B/en
Publication of CN110084752A publication Critical patent/CN110084752A/en
Application granted granted Critical
Publication of CN110084752B publication Critical patent/CN110084752B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4053Scaling of whole images or parts thereof, e.g. expanding or contracting based on super-resolution, i.e. the output image resolution being higher than the sensor resolution
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10004Still image; Photographic image
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20021Dividing image into blocks, subimages or windows
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an image super-resolution reconstruction method based on edge direction and K-means clustering, and belongs to the technical field of machine vision and image processing. The processing steps of the invention include: S1: constructing a training set of high/low-resolution image block pairs; S2: extracting edge direction feature vectors of the low-resolution image blocks; S3: clustering the edge direction features with K-means clustering; S4: classifying the high- and low-resolution image blocks; S5: training a linear mapping matrix for each category using ridge regression; S6: enlarging the input low-resolution image. The invention makes full use of the edge amplitude and direction features of the image blocks, classifies the image blocks more accurately by means of K-means clustering, guarantees the quality of the reconstructed image, has low computational complexity in the reconstruction process, and is convenient and fast to implement.

Description

Image super-resolution reconstruction method based on edge direction and K-means clustering
Technical Field
The invention belongs to the field of computer vision and image processing, and particularly relates to an image super-resolution reconstruction method based on edge direction and K-means clustering.
Background
With the development of technology, more and more playback devices that support ultra-high-definition images have appeared on the market, but acquisition devices remain expensive, so the application market lacks a sufficient supply of ultra-high-resolution images. Image super-resolution is a technique that improves image resolution by software; it can solve this problem well at a comparatively low cost, and therefore has important research value and application prospects in fields such as multimedia, medical imaging, satellite remote sensing and the military.
At present, many researchers at home and abroad study image super-resolution reconstruction. According to the theoretical basis adopted, super-resolution methods can be divided into three types: interpolation-based, reconstruction-based and learning-based methods.
Interpolation-based image super-resolution is the most intuitive and basic approach; it generally infers the value of the pixel to be interpolated from the values of its neighboring pixels. Interpolation-based methods usually have a small computational cost, so they are commonly used to enlarge images in everyday software, but the reconstructed images are of low quality and the image edges are prone to jagged (sawtooth) artifacts.
Because interpolation offers only a limited improvement in image resolution and cannot meet the requirements of certain applications, reconstruction-based super-resolution techniques appeared. Reconstruction-based methods typically exploit the correlation between multiple low-resolution images and add appropriate prior knowledge to the super-resolution process; researchers have introduced theory from disciplines such as set theory and probability theory into super-resolution algorithms in order to obtain high-resolution images of relatively high quality.
In recent years, learning-based super-resolution has become a research hotspot: many learning-based super-resolution algorithms have been proposed, and they achieve better reconstruction quality than conventional interpolation and reconstruction techniques. Unlike the former two categories, machine-learning-based methods learn the correspondence between high- and low-resolution image blocks by training on a large set of high-resolution images; the prior knowledge obtained in this way is more accurate than hand-crafted assumptions, and the trained mapping better reflects the relation between high- and low-resolution image blocks. Learning-based methods, however, typically have a high computational overhead and are therefore difficult to implement quickly in hardware.
Interpolation methods focus on the speed of image reconstruction, whereas learning-based methods pursue the quality of the reconstructed image; existing super-resolution methods therefore find it difficult to strike a good balance between reconstruction quality and computational overhead.
Disclosure of Invention
The object of the invention is to address the above problems by providing an image super-resolution reconstruction method that achieves a better balance between reconstruction quality and computational overhead.
The image super-resolution reconstruction method based on edge direction and K-means clustering comprises the following steps:
step one, collecting a high-resolution image data set;
performing degradation processing on the high-resolution images in the high-resolution image dataset to obtain a corresponding low-resolution image dataset;
converting the high- and low-resolution images into YUV images, and segmenting the Y-channel high- and low-resolution images to obtain a high-resolution image block set {h_i^t} and a low-resolution image block set {l_i^t} with the same number of image blocks, where i = 1, …, n is the image block index, t identifies the source image, and n is the total number of image blocks obtained by segmentation, determined by the number and size of the acquired high-resolution images; co-located blocks form high/low-resolution image block pairs (h_i^t, l_i^t);
the segmentation mode of the high-resolution image is as follows: based on a preset image block size (e.g., 2×2, 3×3, etc.), the high-resolution image is divided into n adjacent (non-overlapping) image blocks of the same size;
the segmentation mode of the low-resolution image is as follows: based on a preset image block size (e.g., 3×3, 5×5, 7×7, 9×9, etc.), the low-resolution image is divided into n mutually overlapping image blocks of the same size;
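As an illustration of step one, the following is a minimal Python sketch (assuming NumPy) of how co-located high/low-resolution block pairs could be gathered from one pair of Y-channel images at a ×2 scale factor; the function and parameter names (extract_block_pairs, hr_size, lr_size) are illustrative and not taken from the patent text.

```python
import numpy as np

def extract_block_pairs(hr_y, lr_y, scale=2, hr_size=2, lr_size=7):
    """Collect co-located HR/LR block pairs from one pair of Y-channel images.

    HR blocks are adjacent (non-overlapping) hr_size x hr_size patches; each
    LR block is an overlapping lr_size x lr_size window centred on the
    corresponding position of the low-resolution image (border replicated so
    every HR block has a full-sized LR partner)."""
    pad = (lr_size - 1) // 2
    lr_pad = np.pad(lr_y, pad, mode='edge')              # replicate border pixels
    pairs = []
    for r in range(0, hr_y.shape[0] - hr_size + 1, hr_size):
        for c in range(0, hr_y.shape[1] - hr_size + 1, hr_size):
            hr_blk = hr_y[r:r + hr_size, c:c + hr_size]
            lr_r, lr_c = r // scale, c // scale          # co-located LR position
            lr_blk = lr_pad[lr_r:lr_r + lr_size, lr_c:lr_c + lr_size]
            pairs.append((hr_blk, lr_blk))
    return pairs
```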
Step two, extracting an edge direction feature vector for each low-resolution image block:
each low-resolution image block obtained in step one is further divided into low-resolution image sub-blocks {l_j}, j = 1, …, r, where r is the total number of image sub-blocks obtained by segmentation and is determined by the size of the low-resolution image block; a gradient operator is used to compute the horizontal and vertical gradient values of each low-resolution image sub-block, from which the edge amplitude m_j and edge direction a_j of the sub-block are computed; the edge amplitudes and directions of all sub-blocks of the same low-resolution image block are combined to form the edge direction feature vector f = [a_1 m_1 … a_r m_r]^T;
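A minimal Python/NumPy sketch of this feature extraction follows. For simplicity it tiles the whole block into non-overlapping sub-blocks, whereas the embodiment described later uses a sliding window over the middle 3×3 region of the block; the particular gradient operator (np.gradient) and the averaging of magnitudes within a sub-block are illustrative assumptions, not details fixed by the patent.

```python
def edge_direction_feature(lr_block, sub_size=2):
    """Edge-direction feature vector f = [a_1 m_1 ... a_r m_r]^T of one LR block.

    The block is tiled into sub_size x sub_size sub-blocks; horizontal and
    vertical gradients give one edge amplitude m_j and one edge direction a_j
    per sub-block."""
    gy, gx = np.gradient(lr_block.astype(np.float64))    # vertical, horizontal gradients
    feats = []
    for r in range(0, lr_block.shape[0] - sub_size + 1, sub_size):
        for c in range(0, lr_block.shape[1] - sub_size + 1, sub_size):
            gxs = gx[r:r + sub_size, c:c + sub_size]
            gys = gy[r:r + sub_size, c:c + sub_size]
            m_j = np.hypot(gxs, gys).mean()              # edge amplitude of the sub-block
            a_j = np.arctan2(gys.sum(), gxs.sum())       # dominant edge direction
            feats.extend([a_j, m_j])
    return np.asarray(feats)
```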
Step three, clustering the edge direction feature vectors of the low-resolution image blocks with the K-means clustering method, computing the center point c_k, k = 1, …, K, of each category, and storing the set of center points; K is the number of center points and is chosen according to the super-resolution reconstruction result, i.e., the number of clusters that yields the best reconstruction quality is selected, with the cluster center points maximally dispersed in the feature space;
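The clustering step could be sketched as follows, assuming scikit-learn is available; the helper name cluster_features and the default cluster count are illustrative choices rather than requirements of the patent.

```python
from sklearn.cluster import KMeans

def cluster_features(features, n_clusters=512, seed=0):
    """K-means over the edge-direction feature vectors of all LR training blocks.

    `features` is an (n, 2r) array of vectors f; the fitted cluster centres
    play the role of the centre points c_1..c_K."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed)
    return km.fit(features)   # km.cluster_centers_ holds c_k; km.predict() assigns a class
```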
Step four, for each high/low-resolution image block pair (h_i^t, l_i^t), computing the distances between the edge direction feature vector of the low-resolution image block in the pair and the K center points, and assigning the image block pair (h_i^t, l_i^t) to the category whose center point is nearest;
Step five, for each category, computing (for example, with the ridge regression method) a linear mapping matrix m_k, k = 1, …, K, capable of converting low-resolution image blocks into high-resolution image blocks, and storing the matrices;
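Steps four and five could be realized as in the hedged sketch below, assuming NumPy and the closed-form ridge solution m_k = H L^T (L L^T + λI)^(-1), where the columns of L and H are the vectorized low- and high-resolution blocks assigned to category k; the regularization weight lam and the function name train_mapping_matrices are illustrative.

```python
def train_mapping_matrices(pairs, labels, n_clusters, lam=1e-3):
    """One ridge-regression mapping matrix per category.

    For category k the vectorised LR blocks form the columns of L and the HR
    blocks the columns of H; the closed-form ridge solution is
    m_k = H L^T (L L^T + lam * I)^-1."""
    mats = {}
    for k in range(n_clusters):
        idx = [i for i, lab in enumerate(labels) if lab == k]
        if not idx:
            continue                                     # no training pair fell in this category
        L = np.stack([pairs[i][1].ravel() for i in idx], axis=1)   # (d_lr, n_k)
        H = np.stack([pairs[i][0].ravel() for i in idx], axis=1)   # (d_hr, n_k)
        d = L.shape[0]
        mats[k] = H @ L.T @ np.linalg.inv(L @ L.T + lam * np.eye(d))
    return mats
```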
Step six, inputting the low-resolution image to be reconstructed, padding its borders, and converting it into a YUV image;
dividing the Y-channel image into low-resolution image blocks following the segmentation mode of the low-resolution image in step one, extracting the edge direction feature vector of each low-resolution image block, and assigning each current low-resolution image block to the category whose center point is nearest, based on the distances between its edge direction feature vector and the K center points;
based on the linear mapping matrix m_k of the assigned category, performing super-resolution reconstruction of each Y-channel low-resolution image block to obtain the corresponding Y-channel high-resolution image block; combining the high-resolution image blocks into the Y-channel high-resolution image; performing super-resolution of the same factor on the UV-channel low-resolution images with the bicubic interpolation method; and converting the high-resolution YUV image into an RGB image to obtain the reconstruction result;
further, in step six, based on the linear mapping matrix m corresponding to each category k The super-resolution reconstruction of the Y-channel low-resolution image block is specifically: h is a i =m k l i The method comprises the steps of carrying out a first treatment on the surface of the Wherein l i Column vector formed by vectorizing ith low-resolution image block, m k A linear mapping matrix of the kth class, h i And (5) vectorizing the ith high-resolution image block to form a column vector.
Further, in step six, the border padding is specifically: (e-1)/2 rows/columns of pixels are added around the low-resolution image, where e is the side length of a low-resolution image block, and the padded values are either zero or the values of the outermost pixels of the low-resolution image.
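A sketch of the Y-channel reconstruction of step six is given below, under the same illustrative assumptions as the sketches above (NumPy, ×2 scale, 2×2/7×7 block sizes, replicate-border padding) and reusing the earlier helpers; it ignores the unlikely case of a category that received no training pairs.

```python
def reconstruct_y(lr_y, km, mats, scale=2, hr_size=2, lr_size=7):
    """Y-channel reconstruction of step six: pad, classify each LR block by its
    edge-direction feature, apply h_i = m_k l_i and place the HR block."""
    pad = (lr_size - 1) // 2                             # (e - 1) / 2 border pixels
    lr_pad = np.pad(lr_y, pad, mode='edge')
    rows, cols = lr_y.shape
    hr = np.zeros((rows * scale, cols * scale))
    for r in range(rows):
        for c in range(cols):
            blk = lr_pad[r:r + lr_size, c:c + lr_size]
            f = edge_direction_feature(blk).reshape(1, -1)
            k = int(km.predict(f)[0])                    # nearest centre point c_k
            h = mats[k] @ blk.ravel()                    # h_i = m_k * l_i
            hr[r * scale:(r + 1) * scale,
               c * scale:(c + 1) * scale] = h.reshape(hr_size, hr_size)
    return hr
```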
In the invention, in order to significantly reduce the running time of the reconstruction processing while maintaining the reconstruction effect, only the Y channel is reconstructed with the reconstruction mode of step S2 of the invention to obtain the corresponding Y-channel high-resolution image, while the other two channels (the UV channels) are super-resolved by the same factor with the existing bicubic interpolation method. Of course, to further improve the reconstruction effect, the UV channels can also be reconstructed in the same way as the Y-channel high-resolution image: the high- and low-resolution images of the U and V channels are respectively segmented into the same number of image blocks to obtain high/low-resolution image block pairs, the edge direction feature vectors of the low-resolution image blocks are extracted and clustered with K-means, and the high/low-resolution image block pairs are assigned to different categories according to their distances to the cluster centers (center points); a linear mapping matrix of the corresponding U or V channel is then constructed for each category, and at reconstruction time each image block of the U or V channel of the low-resolution image to be reconstructed is assigned to its category (the category whose center point is nearest) according to its edge direction feature vector and reconstructed with the corresponding linear mapping matrix to obtain the reconstructed U- and V-channel high-resolution images. This variant incurs some additional computational overhead, but the overhead is still lower than that of existing learning-based super-resolution algorithms.
In summary, due to the adoption of the technical scheme, the beneficial effects of the invention are as follows:
according to the invention, the edge amplitude and the direction of the sub-image blocks are calculated and combined feature vectors are formed, so that the edge direction information of the sub-image blocks can be fully utilized, and the feature extraction method has lower complexity; the K-means clustering is used for clustering edge direction feature vectors, the clustering number can be flexibly selected so as to obtain a better reconstruction effect, and the method is low in calculation cost and convenient to quickly realize on hardware.
Drawings
FIG. 1 is a training phase flow chart of an image super-resolution algorithm based on edge direction and K-means clustering;
FIG. 2 is a flow chart of the low-resolution image reconstruction of the present invention;
FIG. 3 is a low resolution image for an embodiment having an image width of 144 and a height of 144;
fig. 4 is a high resolution image for an embodiment having an image width of 288 and a height of 288.
Detailed Description
The present invention will be described in further detail with reference to the embodiments and the accompanying drawings, for the purpose of making the objects, technical solutions and advantages of the present invention more apparent.
The image super-resolution reconstruction method based on edge direction and K-means clustering comprises a training stage and a low-resolution image reconstruction stage. Referring to fig. 1, the specific process of the training stage (step S1) is as follows:
step S101: collecting a high-resolution image data set, generating a corresponding low-resolution data set by adopting a preselected image degradation model, converting a high-resolution image and a low-resolution image from an RGB channel to a YUV channel, dividing the high-resolution image and the low-resolution image of a Y channel, and setting a specific dividing mode based on the preset dividing block number n (the number and the size of the high-resolution images are determined) and the size of the image to be divided.
In the present embodiment, each high-resolution image in the high-resolution image dataset is divided into a set of adjacent 2×2 image blocks {h_i}, and each low-resolution image is divided, with a sliding window, into a set of mutually overlapping 7×7 image blocks {l_i}; the segmented high-resolution and low-resolution images thus contain the same number of image blocks.
step S102: for the middle area of the low resolution image block in step S101 (the area of 3×3 size in the middle in this embodiment), the sliding window is used to divide the image block into 4 low resolution image sub-blocks of 2×2 size
Figure BDA0002050011330000043
Calculating horizontal and vertical gradient values of the low-resolution sub-image block by adopting a gradient operator, and calculating the edge amplitude m of the low-resolution sub-image block by using the horizontal and vertical gradient values j And edge direction a j Combining the edge magnitudes and directions of all sub-blocks of the same low resolution image block to form an edge direction feature vectorf=[a 1 m 1 …a 4 m 4 ] T The method comprises the steps of carrying out a first treatment on the surface of the I.e. the edge direction feature vector f as the feature vector f for each low resolution image block.
Step S103: the number of clusters of the K-means clustering method is set; in this embodiment K = 512. The features of all image blocks obtained in step S102 (the low-resolution image block feature vectors f) are then clustered with the K-means method, the center point c_k, k = 1, …, 512, of each category is computed, with the cluster center points maximally dispersed in the feature space, and the set of center points is saved;
step S104: extracting edge direction feature vectors of low-resolution image blocks in each image block pair by adopting the method in the step S102, calculating the distance between the edge direction feature vectors and each center point in the step S103, and distributing the high-resolution image blocks and the low-resolution image blocks into categories with the nearest distance based on the distance between the edge direction feature vectors of the corresponding low-resolution image blocks and each center point;
step S105: for each category, computing a linear mapping matrix m capable of converting low resolution image blocks into high resolution image blocks using ridge regression based on the high and low resolution image block pairs included in each category k K=1, …,512 and saved.
Step S2: low resolution image reconstruction.
Referring to fig. 2, the low-resolution image to be reconstructed, of size 144×144 as shown in fig. 3, is first input; its borders are padded and the padded image is then converted into a YUV image.
In this embodiment, the border padding specifically adds 3 rows/columns of pixels around the original image (since (e-1)/2 = 3 for the 7×7 block size e = 7); the padded values are either zero or the values of the outermost pixels of the low-resolution image.
Then, the Y-channel image is divided into low-resolution image blocks l_i, i = 1, …, 144², following the method of step S101; each low-resolution image block is classified with the methods of steps S102 and S104, and super-resolution reconstruction is performed with the linear mapping matrix of its class to obtain the Y-channel high-resolution image blocks h_i, i = 1, …, 144². The high-resolution block of the corresponding channel is computed as h_i = m_k l_i, where l_i is the column vector formed by vectorizing the i-th low-resolution image block, m_k is the linear mapping matrix of the k-th class obtained in step S105, and h_i is the column vector formed by vectorizing the i-th high-resolution image block.
Finally, the high-resolution image blocks are combined into the Y-channel high-resolution image; the UV-channel low-resolution images are magnified by the same factor with the bicubic interpolation method; and the high-resolution YUV image is converted into an RGB image to obtain the reconstructed high-resolution image, whose size is 288×288 as shown in fig. 4 and which shows a good reconstruction effect.
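The whole reconstruction stage of the embodiment could be sketched as follows, again assuming OpenCV and reusing reconstruct_y from the earlier sketch; clipping to [0, 255] before the YUV-to-RGB conversion is an illustrative detail not stated in the patent.

```python
def reconstruct_rgb(lr_rgb, km, mats, scale=2):
    """Full reconstruction: Y channel through the learned mappings,
    U/V channels through bicubic interpolation, then YUV -> RGB."""
    yuv = cv2.cvtColor(lr_rgb, cv2.COLOR_RGB2YUV).astype(np.float64)
    hr_y = reconstruct_y(yuv[:, :, 0], km, mats, scale)
    h, w = hr_y.shape
    hr_u = cv2.resize(yuv[:, :, 1], (w, h), interpolation=cv2.INTER_CUBIC)
    hr_v = cv2.resize(yuv[:, :, 2], (w, h), interpolation=cv2.INTER_CUBIC)
    hr_yuv = np.clip(np.stack([hr_y, hr_u, hr_v], axis=-1), 0, 255).astype(np.uint8)
    return cv2.cvtColor(hr_yuv, cv2.COLOR_YUV2RGB)
```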
While the invention has been described in terms of specific embodiments, any feature disclosed in this specification may be replaced by alternative features serving the equivalent or similar purpose, unless expressly stated otherwise; all of the features disclosed, or all of the steps in a method or process, except for mutually exclusive features and/or steps, may be combined in any manner.

Claims (6)

1. The image super-resolution reconstruction method based on the edge direction and the K-means clustering is characterized by comprising the following steps of:
step one, collecting a high-resolution image data set;
performing degradation processing on the high-resolution images in the high-resolution image dataset to obtain a corresponding low-resolution image dataset;
converting the high- and low-resolution images into YUV images, and segmenting the Y-channel high- and low-resolution images to obtain a high-resolution image block set {h_i^t} and a low-resolution image block set {l_i^t} with the same number of image blocks, where i is the image block index, t is the image identifier, and n is the total number of image blocks obtained by segmentation; co-located blocks form high/low-resolution image block pairs (h_i^t, l_i^t);
The segmentation mode of the high-resolution image is as follows: dividing the high-resolution image into n image blocks which are identical in size and mutually adjacent based on a preset image block size;
the segmentation mode of the low resolution image is as follows: dividing the low-resolution image into n image blocks which are identical in size and overlap each other based on a preset image block size;
step two, extracting an edge direction feature vector for each low-resolution image block:
each low-resolution image block in the low-resolution image block set {l_i^t} is further divided into low-resolution image sub-blocks {l_j}, j = 1, …, r, where r represents the total number of image sub-blocks obtained by segmentation;
computing the horizontal and vertical gradient values of each low-resolution image sub-block with a gradient operator, and computing from the horizontal and vertical gradient values the edge amplitude m_j and edge direction a_j of the sub-block; the edge amplitudes and edge directions of all image sub-blocks of the same low-resolution image block are combined to form the edge direction feature vector f = [a_1 m_1 … a_r m_r]^T;
step three, clustering the edge direction feature vectors of the low-resolution image blocks with the K-means clustering method, computing the center point c_k, k = 1, …, K, of each category, and storing the set of center points, wherein K is the preset number of center points;
step four, for each high/low-resolution image block pair (h_i^t, l_i^t), calculating the distances between the edge direction feature vector of the low-resolution image block in the pair and the K center points respectively, and assigning the high/low-resolution image block pair (h_i^t, l_i^t) to the category whose center point is nearest;
step five, for each category, calculating a linear mapping matrix m_k, k = 1, …, K, capable of converting the low-resolution image blocks into high-resolution image blocks, and saving the matrices;
step six, inputting a low-resolution image to be reconstructed, padding its borders, and converting it into a YUV image;
dividing the Y-channel image into low-resolution image blocks according to the segmentation mode of the low-resolution image in step one, extracting the edge direction feature vectors of the low-resolution image blocks, and assigning each current low-resolution image block to the category whose center point is nearest, based on the distances between its edge direction feature vector and the K center points;
based on the linear mapping matrix m_k corresponding to each category, performing super-resolution reconstruction of the Y-channel low-resolution image blocks to obtain Y-channel high-resolution image blocks, combining the high-resolution image blocks to obtain the Y-channel high-resolution image, performing super-resolution of the same factor on the UV-channel low-resolution images with a bicubic interpolation method, and converting the high-resolution YUV image into an RGB image to obtain the reconstruction result.
2. The method of claim 1, wherein in step six, the super-resolution reconstruction of a Y-channel low-resolution image block based on the linear mapping matrix m_k corresponding to its category is specifically: h_i = m_k l_i, where l_i is the column vector formed by vectorizing the i-th low-resolution image block, m_k is the linear mapping matrix of the k-th category, and h_i is the column vector formed by vectorizing the i-th high-resolution image block.
3. The method according to claim 1 or 2, wherein in step six, the border padding is specifically: (e-1)/2 rows/columns of pixels are added around the low-resolution image, the padded values being either zero or the values of the outermost pixels of the low-resolution image, where e is the side length of a low-resolution image block.
4. The method according to claim 1, wherein the high-resolution reconstruction of the UV channels of the low-resolution image to be reconstructed in step six is replaced by: reconstructing the corresponding channels in the same manner as the Y-channel high-resolution image.
5. The method of claim 1, wherein the edge direction feature vector of the low resolution image block is extracted by dividing a middle region of the low resolution image block into r low resolution image sub-blocks having the same size.
6. The method of claim 5, wherein the middle region of the low resolution image block is divided into r low resolution image sub-blocks having the same size and overlapping each other.
CN201910371191.5A 2019-05-06 2019-05-06 Image super-resolution reconstruction method based on edge direction and K-means clustering Active CN110084752B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910371191.5A CN110084752B (en) 2019-05-06 2019-05-06 Image super-resolution reconstruction method based on edge direction and K-means clustering

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910371191.5A CN110084752B (en) 2019-05-06 2019-05-06 Image super-resolution reconstruction method based on edge direction and K-means clustering

Publications (2)

Publication Number Publication Date
CN110084752A CN110084752A (en) 2019-08-02
CN110084752B true CN110084752B (en) 2023-04-21

Family

ID=67418759

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910371191.5A Active CN110084752B (en) 2019-05-06 2019-05-06 Image super-resolution reconstruction method based on edge direction and K-means clustering

Country Status (1)

Country Link
CN (1) CN110084752B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112801879B (en) * 2021-02-09 2023-12-08 咪咕视讯科技有限公司 Image super-resolution reconstruction method and device, electronic equipment and storage medium

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077505A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure clustering
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms
CN104036519A (en) * 2014-07-03 2014-09-10 中国计量学院 Partitioning compressive sensing reconstruction method based on image block clustering and sparse dictionary learning
CN104699781A (en) * 2015-03-12 2015-06-10 西安电子科技大学 Specific absorption rate image retrieval method based on double-layer anchor chart hash
CN105321156A (en) * 2015-11-26 2016-02-10 三维通信股份有限公司 Multi-structure-based image restoration method
CN107341776A (en) * 2017-06-21 2017-11-10 北京工业大学 Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping
CN107392855A (en) * 2017-07-19 2017-11-24 苏州闻捷传感技术有限公司 Image Super-resolution Reconstruction method based on sparse autoencoder network Yu very fast study
CN108648147A (en) * 2018-05-08 2018-10-12 北京理工大学 A kind of super-resolution image acquisition method and system of human eye retina's mechanism
CN108805814A (en) * 2018-06-07 2018-11-13 西安电子科技大学 Image Super-resolution Reconstruction method based on multiband depth convolutional neural networks

Family Cites Families (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
TW200407799A (en) * 2002-11-05 2004-05-16 Ind Tech Res Inst Texture partition and transmission method for network progressive transmission and real-time rendering by using the wavelet coding algorithm
US7187811B2 (en) * 2003-03-18 2007-03-06 Advanced & Wise Technology Corp. Method for image resolution enhancement
US20060291750A1 (en) * 2004-12-16 2006-12-28 Peyman Milanfar Dynamic reconstruction of high resolution video from low-resolution color-filtered video (video-to-video super-resolution)
US8081256B2 (en) * 2007-03-20 2011-12-20 Samsung Electronics Co., Ltd. Method and system for edge directed deinterlacing in video image processing
CN101727568B (en) * 2008-10-10 2013-04-17 索尼(中国)有限公司 Foreground action estimation device and foreground action estimation method
JP5388779B2 (en) * 2009-09-28 2014-01-15 京セラ株式会社 Image processing apparatus, image processing method, and image processing program
US8861853B2 (en) * 2010-03-19 2014-10-14 Panasonic Intellectual Property Corporation Of America Feature-amount calculation apparatus, feature-amount calculation method, and program
US20120075440A1 (en) * 2010-09-28 2012-03-29 Qualcomm Incorporated Entropy based image separation
US8755636B2 (en) * 2011-09-14 2014-06-17 Mediatek Inc. Method and apparatus of high-resolution image reconstruction based on multi-frame low-resolution images
CN102800094A (en) * 2012-07-13 2012-11-28 南京邮电大学 Fast color image segmentation method
CN103049750B (en) * 2013-01-11 2016-06-15 广州广电运通金融电子股份有限公司 Character identifying method
CN103984946B (en) * 2014-05-23 2017-04-26 北京联合大学 High resolution remote sensing map road extraction method based on K-means
CN105761207B (en) * 2015-05-08 2018-11-16 西安电子科技大学 Image Super-resolution Reconstruction method based on the insertion of maximum linear block neighborhood
CN104992407B (en) * 2015-06-17 2018-03-16 清华大学深圳研究生院 A kind of image super-resolution method
KR101845476B1 (en) * 2015-06-30 2018-04-05 한국과학기술원 Image conversion apparatus and image conversion method thereof
CN106558022B (en) * 2016-11-30 2020-08-25 重庆大学 Single image super-resolution reconstruction method based on edge difference constraint
CN108335265B (en) * 2018-02-06 2021-05-07 上海通途半导体科技有限公司 Rapid image super-resolution reconstruction method and device based on sample learning
CN108764368B (en) * 2018-06-07 2021-11-30 西安邮电大学 Image super-resolution reconstruction method based on matrix mapping
CN109712153A (en) * 2018-12-25 2019-05-03 杭州世平信息科技有限公司 A kind of remote sensing images city superpixel segmentation method
CN112801879B (en) * 2021-02-09 2023-12-08 咪咕视讯科技有限公司 Image super-resolution reconstruction method and device, electronic equipment and storage medium

Patent Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103077505A (en) * 2013-01-25 2013-05-01 西安电子科技大学 Image super-resolution reconstruction method based on dictionary learning and structure clustering
CN103295196A (en) * 2013-05-21 2013-09-11 西安电子科技大学 Super-resolution image reconstruction method based on non-local dictionary learning and biregular terms
CN104036519A (en) * 2014-07-03 2014-09-10 中国计量学院 Partitioning compressive sensing reconstruction method based on image block clustering and sparse dictionary learning
CN104699781A (en) * 2015-03-12 2015-06-10 西安电子科技大学 Specific absorption rate image retrieval method based on double-layer anchor chart hash
CN105321156A (en) * 2015-11-26 2016-02-10 三维通信股份有限公司 Multi-structure-based image restoration method
CN107341776A (en) * 2017-06-21 2017-11-10 北京工业大学 Single frames super resolution ratio reconstruction method based on sparse coding and combinatorial mapping
CN107392855A (en) * 2017-07-19 2017-11-24 苏州闻捷传感技术有限公司 Image Super-resolution Reconstruction method based on sparse autoencoder network Yu very fast study
CN108648147A (en) * 2018-05-08 2018-10-12 北京理工大学 A kind of super-resolution image acquisition method and system of human eye retina's mechanism
CN108805814A (en) * 2018-06-07 2018-11-13 西安电子科技大学 Image Super-resolution Reconstruction method based on multiband depth convolutional neural networks

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
Zeng-Wei Ju et al. Image segmentation based on edge detection using K-means and an improved ant colony optimization. 2013 International Conference on Machine Learning and Cybernetics, Tianjin, China, 2013, pp. 297-303. *
Kang Kai. Research on Image Super-Resolution Reconstruction. China Doctoral Dissertations Full-text Database, Information Science and Technology, 2016, No. 9, pp. I138-18. *
Zhao Zhihui; Zhao Ruizhen; Cen Yigang; Zhang Fengzhen. Fast image super-resolution reconstruction based on sparse representation and linear regression. CAAI Transactions on Intelligent Systems, 2017, Vol. 12, No. 1, pp. 8-14. *

Also Published As

Publication number Publication date
CN110084752A (en) 2019-08-02

Similar Documents

Publication Publication Date Title
CN106611427B (en) Saliency detection method based on candidate region fusion
CN110866896B (en) Image saliency target detection method based on k-means and level set super-pixel segmentation
CN106096547B (en) A kind of low-resolution face image feature super resolution ratio reconstruction method towards identification
CN109614922A (en) A kind of dynamic static gesture identification method and system
Deng et al. Lau-net: Latitude adaptive upscaling network for omnidirectional image super-resolution
CN108629783B (en) Image segmentation method, system and medium based on image feature density peak search
CN107610093B (en) Full-reference image quality evaluation method based on similarity feature fusion
CN111179193B (en) Dermatoscope image enhancement and classification method based on DCNNs and GANs
CN104637066B (en) The quick framework extraction method of bianry image based on sequential refinement
CN103049340A (en) Image super-resolution reconstruction method of visual vocabularies and based on texture context constraint
CN108876716A (en) Super resolution ratio reconstruction method and device
CN110084752B (en) Image super-resolution reconstruction method based on edge direction and K-means clustering
CN109543525B (en) Table extraction method for general table image
CN109741358B (en) Superpixel segmentation method based on adaptive hypergraph learning
CN108830283B (en) Image feature point matching method
Rui et al. Research on fast natural aerial image mosaic
CN106056575B (en) A kind of image matching method based on like physical property proposed algorithm
CN111127407B (en) Fourier transform-based style migration forged image detection device and method
CN116310452B (en) Multi-view clustering method and system
CN111191659B (en) Multi-shape clothes hanger identification method for clothing production system
CN107818579B (en) Color texture feature extraction method based on quaternion Gabor filtering
CN113192003B (en) Spliced image quality evaluation method
CN106228553A (en) High-resolution remote sensing image shadow Detection apparatus and method
CN106651864B (en) A kind of dividing method towards high-resolution remote sensing image
CN112989919B (en) Method and system for extracting target object from image

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant