WO2020181641A1 - Image Magnification Method and Image Magnification Device (图像放大方法及图像放大装置) - Google Patents

Image Magnification Method and Image Magnification Device

Info

Publication number: WO2020181641A1 (application PCT/CN2019/085764)
Authority: WO — WIPO (PCT)
Prior art keywords: image, training, resolution, interpolation, pixel
Other languages: English (en), French (fr)
Inventors: 朱江 (Zhu Jiang), 赵斌 (Zhao Bin), 周明忠 (Zhou Mingzhong), 吴宇 (Wu Yu)
Original assignee: 深圳市华星光电技术有限公司 (Shenzhen China Star Optoelectronics Technology Co., Ltd.)
Application filed by 深圳市华星光电技术有限公司
Publication of WO2020181641A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 — Geometric image transformations in the plane of the image
    • G06T 3/40 — Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4007 — Scaling based on interpolation, e.g. bilinear interpolation
    • G06T 3/403 — Edge-driven scaling; edge-based scaling

Definitions

  • the present invention relates to the field of display technology, and in particular to an image enlargement method and image enlargement device.
  • current digital image input devices all sample tiny areas of an image to generate corresponding pixels, forming dot-matrix image data; that is, for fixed image input conditions and a fixed image, the amount of obtainable data is relatively fixed.
  • commonly used image magnification methods are generally interpolation-based.
  • Typical interpolation magnification methods include nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, and polynomial interpolation.
  • the nearest neighbor interpolation algorithm is the simplest, but it is also the most likely to produce discontinuous pixel values, which leads to blocking artifacts and thus image blur; the quality of the magnified image is generally not ideal.
  • the bilinear interpolation algorithm is more complex; it does not produce discontinuous pixel values and the magnified image quality is higher, but it can blur the edge contours and details of each subject in the image to some extent, while the bicubic and polynomial interpolation algorithms are more complex still.
  • in the prior art, the same algorithm is usually used to interpolate and enlarge both the flat areas and the edge areas of an image: when a simpler algorithm is selected, the final image often shows obvious jaggedness or distortion; when a more complex algorithm is used, distortion can be avoided, but the whole computation takes longer and demands more of the hardware, so the magnification effect and the magnification cost cannot both be satisfied.
  • the object of the present invention is to provide an image enlargement method, which can realize smooth transition of image edges, improve image enlargement effect, and reduce image enlargement cost.
  • the object of the present invention is also to provide an image magnifying device, which can realize smooth transition of image edges, improve image magnification effect, and reduce image magnification cost.
  • the present invention provides an image magnification method, which includes the following steps:
  • Step S1: obtain an original image with a first resolution;
  • Step S2: interpolate and enlarge the original image with a preset first interpolation algorithm to obtain a first transition image with a second resolution, the second resolution being greater than the first resolution;
  • Step S3: interpolate and enlarge the original image with a preset second interpolation algorithm, and smooth the enlarged image to obtain a second transition image with the second resolution;
  • Step S4: perform edge detection on the original image to obtain edge information of the original image;
  • Step S5: establish a weight output model, and input the edge information of the original image into the weight output model to generate fusion weights for the target image;
  • Step S6: fuse the first transition image and the second transition image according to the fusion weights and a preset fusion formula to obtain a target image with the second resolution.
  • the first interpolation algorithm is a nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, or a polynomial interpolation algorithm
  • the second interpolation algorithm is a nearest neighbor interpolation algorithm
  • the method of smoothing in step S3 is to use a preset smoothing operator to convolve the image after interpolation and magnification in step S3;
  • the smoothing operator is any one of matrix 1 to matrix 5.
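The five smoothing matrices themselves appear only as images in the original publication and are not reproduced in this text, so the sketch below assumes a uniform 3×3 averaging kernel as a stand-in; the convolution step itself follows the description above.

```python
def convolve2d(img, kernel):
    """Convolve a 2D grayscale image with a small kernel (zero padding at borders)."""
    h, w = len(img), len(img[0])
    kh, kw = len(kernel), len(kernel[0])
    oy, ox = kh // 2, kw // 2
    out = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            acc = 0.0
            for ky in range(kh):
                for kx in range(kw):
                    iy, ix = y + ky - oy, x + kx - ox
                    if 0 <= iy < h and 0 <= ix < w:
                        acc += img[iy][ix] * kernel[ky][kx]
            out[y][x] = acc
    return out

# Assumed smoothing operator: a uniform 3x3 average (the patent's actual
# matrices 1-5 are shown only as images and are not available here).
SMOOTH = [[1 / 9] * 3 for _ in range(3)]
```

Any of the patent's matrices 1 to 5 could be substituted for `SMOOTH` without changing the convolution routine.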
  • the original image includes a plurality of original pixels arranged in an array
  • the first transition image includes a plurality of first pixels arranged in an array
  • the second transition image includes a plurality of second pixels arranged in an array
  • the target image includes a plurality of target pixels arranged in an array
  • the edge information of the original image includes the edge information of each original pixel in the original image
  • in step S5, the edge information corresponding to each original pixel is input into the weight output model to generate the fusion weight of the target pixel corresponding to that original pixel's position;
  • the preset fusion formula is:
  • Vp = (1 − λ) × Vcb + λ × Vs;
  • where Vp is the gray value of the target pixel, Vcb is the gray value of the first pixel corresponding to the target pixel's position, Vs is the gray value of the second pixel corresponding to the target pixel's position, and λ is the fusion weight of the target pixel, 0 ≤ λ ≤ 1.
  • the step of establishing a weight output model in step S5 specifically includes: acquiring a plurality of pieces of training data, and generating the weight value output model through machine learning training according to the multiple pieces of training data;
  • the method for obtaining the multiple pieces of training data is:
  • provide a training image with the first resolution, the training image including a plurality of training pixels arranged in an array;
  • perform edge detection on the training image to obtain the edge information of each training pixel;
  • interpolate and enlarge the training image with the preset first interpolation algorithm to obtain a first transition training image with the second resolution;
  • interpolate and enlarge the training image with the preset second interpolation algorithm, and smooth the enlarged image to obtain a second transition training image with the second resolution;
  • select a plurality of different fusion weights, and fuse the first and second transition training images according to the fusion formula and each of the different fusion weights to produce a plurality of training target images with the second resolution, each training target image including a plurality of training target pixels arranged in an array;
  • provide a standard target image with the second resolution corresponding to the training image, the standard target image including a plurality of standard target pixels arranged in an array;
  • for each standard target pixel, determine the training target pixel at the same position whose gray value differs least from that of the standard target pixel;
  • take the fusion weight that produced the minimal-difference training target pixel as the standard fusion weight corresponding to the standard target pixel at that position;
  • form multiple pieces of training data corresponding to the respective standard target pixels, each piece of training data including the standard fusion weight corresponding to a standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
  • step S5 further includes: dividing the target image into multiple regions, calculating the average of the fusion weights of the target pixels in each region, and using that average as the fusion weight of every target pixel in the region.
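A minimal sketch of this averaging step, assuming non-overlapping square regions (the embodiment suggests 3×3 pixels; the tiling layout is otherwise an assumption):

```python
def average_weights_by_region(weights, region=3):
    """Replace each fusion weight with the mean of its (region x region) tile."""
    h, w = len(weights), len(weights[0])
    out = [[0.0] * w for _ in range(h)]
    for ry in range(0, h, region):
        for rx in range(0, w, region):
            # Collect the cells of this tile, clipped at the image border.
            cells = [(y, x)
                     for y in range(ry, min(ry + region, h))
                     for x in range(rx, min(rx + region, w))]
            mean = sum(weights[y][x] for y, x in cells) / len(cells)
            for y, x in cells:
                out[y][x] = mean
    return out
```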
  • the present invention also provides an image magnification device, including: an acquisition unit, a first magnification unit connected to the acquisition unit, a second magnification unit connected to the acquisition unit, an edge detection unit connected to the acquisition unit, a weight generation unit connected to the edge detection unit, and a fusion unit connected to the first magnification unit, the second magnification unit, and the weight generation unit;
  • the acquiring unit is used to acquire an original image with a first resolution
  • the first amplifying unit is configured to interpolate and amplify the original image by using a preset first interpolation algorithm to obtain a first transitional image with a second resolution, where the second resolution is greater than the first resolution;
  • the second magnifying unit is configured to perform interpolation and magnification on the original image by using a preset second interpolation algorithm, and perform smoothing processing on the interpolated and magnified image to obtain a second transition image with a second resolution;
  • the edge detection unit is configured to perform edge detection on the original image to generate edge information of the original image
  • the weight generation unit is used to establish a weight output model, and input edge information of the original image into the weight output model to generate a fusion weight of the target image;
  • the fusion unit is configured to fuse the first transition image and the second transition image according to the fusion weight and a preset fusion formula to obtain a target image with a second resolution.
  • the first interpolation algorithm is nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, or polynomial interpolation, and the second interpolation algorithm is nearest neighbor interpolation;
  • the second amplification unit performs smoothing processing by using a preset smoothing operator to perform convolution on the image that is interpolated and amplified by the second amplification unit;
  • the smoothing operator is any one of matrix 1 to matrix 5.
  • the original image includes a plurality of original pixels arranged in an array
  • the first transition image includes a plurality of first pixels arranged in an array
  • the second transition image includes a plurality of second pixels arranged in an array
  • the target image includes a plurality of target pixels arranged in an array
  • the edge information of the original image generated by the edge detection unit specifically includes the edge information of each original pixel in the original image
  • the weight generation unit inputs the edge information corresponding to each original pixel into the weight output model, and generates the fusion weight value of the target pixel corresponding to the position of the original pixel;
  • the fusion formula preset in the fusion unit is:
  • Vp = (1 − λ) × Vcb + λ × Vs;
  • where Vp is the gray value of the target pixel, Vcb is the gray value of the first pixel corresponding to the target pixel's position, Vs is the gray value of the second pixel corresponding to the target pixel's position, and λ is the fusion weight of the target pixel, 0 ≤ λ ≤ 1.
  • the weight generation unit obtains multiple pieces of training data, and generates the weight value output model through machine learning training according to the multiple pieces of training data;
  • obtaining the multiple pieces of training data specifically includes:
  • providing a training image with the first resolution, the training image including a plurality of training pixels arranged in an array;
  • performing edge detection on the training image to obtain the edge information of each training pixel;
  • interpolating and enlarging the training image with the preset first interpolation algorithm to obtain a first transition training image with the second resolution;
  • interpolating and enlarging the training image with the preset second interpolation algorithm, and smoothing the enlarged image to obtain a second transition training image with the second resolution;
  • selecting a plurality of different fusion weights, and fusing the first and second transition training images according to the fusion formula and each of the different fusion weights to produce a plurality of training target images with the second resolution, each training target image including a plurality of training target pixels arranged in an array;
  • providing a standard target image with the second resolution corresponding to the training image, the standard target image including a plurality of standard target pixels arranged in an array;
  • for each standard target pixel, determining the training target pixel at the same position whose gray value differs least from that of the standard target pixel;
  • taking the fusion weight that produced the minimal-difference training target pixel as the standard fusion weight corresponding to the standard target pixel at that position;
  • forming multiple pieces of training data corresponding to the respective standard target pixels, each piece of training data including the standard fusion weight corresponding to a standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
  • the weight generation unit is also used to divide the target image into multiple regions, calculate the average of the fusion weights of the target pixels in each region, and use that average as the fusion weight of every target pixel in the region.
  • the present invention provides an image enlargement method.
  • the image magnification method includes the following steps: obtaining an original image with a first resolution; interpolating and enlarging the original image with a preset first interpolation algorithm to obtain a first transition image with a second resolution, the second resolution being greater than the first resolution; interpolating and enlarging the original image with a preset second interpolation algorithm and smoothing the enlarged image to obtain a second transition image with the second resolution; performing edge detection on the original image to obtain its edge information; establishing a weight output model and inputting the edge information of the original image into the weight output model to generate fusion weights for the target image; and fusing the first and second transition images according to the fusion weights and a preset fusion formula to obtain a target image with the second resolution. The method can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost.
  • the present invention also provides an image magnification device, which likewise can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost.
  • Figure 1 is a flowchart of the image magnification method of the present invention
  • Figure 2 is a schematic diagram of the image magnifying device of the present invention.
  • the present invention provides an image magnification method, including the following steps:
  • Step S1 Obtain an original image with a first resolution.
  • the original image includes a plurality of original pixels arranged in an array.
  • Step S2 Perform interpolation and amplification on the original image by using a preset first interpolation algorithm to obtain a first transition image with a second resolution, where the second resolution is greater than the first resolution.
  • the first interpolation algorithm is a nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, or polynomial interpolation algorithm.
  • the first transition image includes a plurality of first pixels arranged in an array.
  • Step S3: interpolate and enlarge the original image with a preset second interpolation algorithm, and smooth the enlarged image to obtain a second transition image with the second resolution.
  • the second transition image includes a plurality of second pixels arranged in an array.
  • the second interpolation algorithm is a nearest neighbor interpolation algorithm.
  • the method of the smoothing process is to use a preset smoothing operator to perform convolution on the image after interpolation and amplification in step S3;
  • the smoothing operator is any one of matrix 1 to matrix 5.
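Step S3's interpolation stage uses nearest neighbor interpolation, per the text above; it can be sketched as follows for an integer scale factor (non-integer factors would need a slightly different index mapping):

```python
def nearest_neighbor_upscale(img, scale):
    """Enlarge a 2D image by an integer factor using nearest-neighbour sampling:
    each output pixel copies the source pixel whose index it maps back onto."""
    h, w = len(img), len(img[0])
    return [[img[y // scale][x // scale] for x in range(w * scale)]
            for y in range(h * scale)]
```

The result is the blocky enlargement that the subsequent smoothing convolution is meant to soften.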
  • Step S4 Perform edge detection on the original image to obtain edge information of the original image.
  • the edge detection of the original image is performed by the Sobel operator.
  • the edge information of the original image includes the edge information of each original pixel in the original image.
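Since the embodiment names the Sobel operator for step S4, here is a plain-Python sketch producing a per-pixel gradient magnitude as the "edge information"; the exact magnitude formula (|Gx| + |Gy| here) is an assumption, as the text does not fix one:

```python
# Standard Sobel derivative kernels.
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def sobel_edges(img):
    """Per-pixel edge strength as |Gx| + |Gy| with zero padding at the borders."""
    h, w = len(img), len(img[0])

    def conv_at(k, y, x):
        acc = 0
        for ky in range(3):
            for kx in range(3):
                iy, ix = y + ky - 1, x + kx - 1
                if 0 <= iy < h and 0 <= ix < w:
                    acc += img[iy][ix] * k[ky][kx]
        return acc

    return [[abs(conv_at(SOBEL_X, y, x)) + abs(conv_at(SOBEL_Y, y, x))
             for x in range(w)] for y in range(h)]
```

Flat regions yield zero response and step edges yield large values, which is exactly the per-pixel signal the weight output model consumes.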
  • Step S5 Establish a weight output model, and input the edge information of the original image into the weight output model to generate the fusion weight of the target image.
  • the target image includes a plurality of target pixels arranged in an array.
  • a weight output model is established through machine learning.
  • the step of establishing the weight output model in step S5 specifically includes: acquiring multiple pieces of training data, and generating the weight output model through machine-learning training on them; the training data reflect the association between the edge information of the original image and the fusion weights of the target image, which is what allows the weight output model to be produced by machine-learning training.
  • the method for obtaining the multiple pieces of training data is:
  • provide a training image with the first resolution, the training image including a plurality of training pixels arranged in an array;
  • perform edge detection on the training image to obtain the edge information of each training pixel;
  • interpolate and enlarge the training image with the preset first interpolation algorithm to obtain a first transition training image with the second resolution;
  • interpolate and enlarge the training image with the preset second interpolation algorithm, and smooth the enlarged image to obtain a second transition training image with the second resolution;
  • select a plurality of different fusion weights, and fuse the first and second transition training images according to the fusion formula and each of the different fusion weights to produce a plurality of training target images with the second resolution, each training target image including a plurality of training target pixels arranged in an array;
  • provide a standard target image with the second resolution corresponding to the training image, the standard target image including a plurality of standard target pixels arranged in an array;
  • for each standard target pixel, determine the training target pixel at the same position whose gray value differs least from that of the standard target pixel;
  • take the fusion weight that produced the minimal-difference training target pixel as the standard fusion weight corresponding to the standard target pixel at that position;
  • form multiple pieces of training data corresponding to the respective standard target pixels, each piece of training data including the standard fusion weight corresponding to a standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
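The per-pixel selection of a standard fusion weight described above can be sketched for a single pixel position as follows; the candidate weight grid is an assumption (the text only says "multiple different fusion weights" are selected):

```python
def standard_weight(vcb, vs, v_standard,
                    candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Return the candidate fusion weight whose fused grey value is closest to
    the standard target pixel's grey value at this position."""
    def fused(lam):
        # The patent's fusion formula: Vp = (1 - lam) * Vcb + lam * Vs.
        return (1 - lam) * vcb + lam * vs

    return min(candidates, key=lambda lam: abs(fused(lam) - v_standard))
```

Pairing this chosen weight with the corresponding training pixel's edge information yields one piece of training data.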
  • the edge information corresponding to each original pixel is input into the weight output model, and the fusion weight of the target pixel corresponding to the position of the original pixel is generated.
  • Step S6 fusing the first transition image and the second transition image according to the fusion weight and the preset fusion formula to obtain a target image with a second resolution.
  • the preset fusion formula is:
  • Vp = (1 − λ) × Vcb + λ × Vs;
  • where Vp is the gray value of the target pixel, Vcb is the gray value of the first pixel corresponding to the target pixel's position, Vs is the gray value of the second pixel corresponding to the target pixel's position, and λ is the fusion weight of the target pixel, 0 ≤ λ ≤ 1.
  • the original pixel, the first pixel, the second pixel, and the target pixel each include a red, a green, and a blue component. In steps S2 and S3, the gray values of the red, blue, and green components of the original pixels are processed separately to obtain the gray values of the red, blue, and green components of the first and second pixels; in step S6, the gray values of the red, blue, and green components of the first and second pixels are fused separately to obtain the gray values of the red, green, and blue components of the target pixel.
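Applying the fusion formula Vp = (1 − λ) × Vcb + λ × Vs separately to each colour component, as described above, can be sketched for one pixel:

```python
def fuse_pixel(cb_rgb, s_rgb, lam):
    """Blend one first-transition pixel and one second-transition pixel,
    component by component, with fusion weight lam in [0, 1]."""
    assert 0.0 <= lam <= 1.0
    return tuple((1 - lam) * cb + lam * s for cb, s in zip(cb_rgb, s_rgb))
```

A weight near 1 favours the edge-friendly second transition image; a weight near 0 favours the first transition image, matching the edge/flat behaviour described below.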
  • to make the image transition smoother, after the fusion weight of each target pixel is obtained in step S5, an averaging step is further included: the target image is divided into multiple regions, the average of the fusion weights of the target pixels in each region is calculated, and that average is used as the fusion weight of every target pixel in the region; preferably, each region includes 3×3 pixels.
  • it should be noted that the present invention enlarges the original image in two ways to produce the first transition image and the second transition image: the first transition image gives a better magnification result in flat areas, while the second transition image gives a better result in edge areas. A weight output model is then built with a machine-learning algorithm and outputs fusion weights related to the edge information of the original image, and the two transition images are fused. When a target pixel lies toward an edge area, the second transition image takes the larger share in the fusion; when it lies toward a flat area, the first transition image takes the larger share. This achieves smooth transitions at image edges, improves the image magnification effect, and reduces the image magnification cost; the scheme is simple in design, easy to implement in a corresponding chip, and low in cost, and the fusion weights output by the weight output model established through machine learning are highly accurate.
  • the present invention also provides an image magnification device, including: an acquisition unit 10, a first magnification unit 20 connected to the acquisition unit 10, a second magnification unit 30 connected to the acquisition unit 10, an edge detection unit 40 connected to the acquisition unit 10, a weight generation unit 50 connected to the edge detection unit 40, and a fusion unit 60 connected to the first magnification unit 20, the second magnification unit 30, and the weight generation unit 50;
  • the acquiring unit 10 is configured to acquire an original image with a first resolution
  • the first amplification unit 20 is configured to perform interpolation and amplification on the original image by using a preset first interpolation algorithm to obtain a first transition image with a second resolution, where the second resolution is greater than the first resolution;
  • the second amplifying unit 30 is configured to interpolate and amplify the original image by using a preset second interpolation algorithm, and perform smoothing processing on the interpolated and amplified image to obtain a second transition image with a second resolution;
  • the edge detection unit 40 is configured to perform edge detection on the original image to generate edge information of the original image
  • the weight generating unit 50 is used to establish a weight output model, and input edge information of the original image into the weight output model to generate a fusion weight of the target image;
  • the fusion unit 60 is configured to fuse the first transition image and the second transition image according to the fusion weight and a preset fusion formula to obtain a target image with a second resolution.
  • the first interpolation algorithm is nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, or polynomial interpolation
  • the second interpolation algorithm is nearest neighbor interpolation
  • the second amplifying unit 30 performs smoothing processing by using a preset smoothing operator to perform convolution on the image that has been interpolated and amplified by the second amplifying unit 30;
  • the smoothing operator is any one of matrix 1 to matrix 5.
  • the edge detection unit 40 performs edge detection on the original image through a Sobel operator.
  • the original image includes a plurality of original pixels arranged in an array
  • the first transition image includes a plurality of first pixels arranged in an array
  • the second transition image includes a plurality of second pixels arranged in an array
  • the target image includes a plurality of target pixels arranged in an array
  • the edge information of the original image generated by the edge detection unit 40 specifically includes the edge information of each original pixel in the original image
  • the weight generation unit 50 inputs the edge information corresponding to each original pixel into the weight output model, and generates the fusion weight value of the target pixel corresponding to the position of the original pixel;
  • the fusion formula preset in the fusion unit 60 is:
  • Vp = (1 − λ) × Vcb + λ × Vs;
  • where Vp is the gray value of the target pixel, Vcb is the gray value of the first pixel corresponding to the target pixel's position, Vs is the gray value of the second pixel corresponding to the target pixel's position, and λ is the fusion weight of the target pixel, 0 ≤ λ ≤ 1.
  • the weight generation unit 50 obtains multiple pieces of training data, and generates the weight value output model through machine learning training according to the multiple pieces of training data;
  • obtaining the multiple pieces of training data specifically includes:
  • providing a training image with the first resolution, the training image including a plurality of training pixels arranged in an array;
  • performing edge detection on the training image to obtain the edge information of each training pixel;
  • interpolating and enlarging the training image with the preset first interpolation algorithm to obtain a first transition training image with the second resolution;
  • interpolating and enlarging the training image with the preset second interpolation algorithm, and smoothing the enlarged image to obtain a second transition training image with the second resolution;
  • selecting a plurality of different fusion weights, and fusing the first and second transition training images according to the fusion formula and each of the different fusion weights to produce a plurality of training target images with the second resolution, each training target image including a plurality of training target pixels arranged in an array;
  • providing a standard target image with the second resolution corresponding to the training image, the standard target image including a plurality of standard target pixels arranged in an array;
  • for each standard target pixel, determining the training target pixel at the same position whose gray value differs least from that of the standard target pixel;
  • taking the fusion weight that produced the minimal-difference training target pixel as the standard fusion weight corresponding to the standard target pixel at that position;
  • forming multiple pieces of training data corresponding to the respective standard target pixels, each piece of training data including the standard fusion weight corresponding to a standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
  • the original pixel, the first pixel, the second pixel, and the target pixel each include a red, a green, and a blue component. The first magnification unit 20 and the second magnification unit 30 each process the gray values of the red, blue, and green components of the original pixels to obtain the gray values of the red, blue, and green components of the first and second pixels, and the fusion unit 60 fuses the gray values of the red, blue, and green components of the first and second pixels to obtain the gray values of the red, green, and blue components of the target pixel.
  • to make the image transition smoother, after the fusion weight of each target pixel is obtained, an averaging step is further included: the target image is divided into multiple regions, the average of the fusion weights of the target pixels in each region is calculated, and that average is used as the fusion weight of every target pixel in the region; preferably, each region includes 3×3 pixels.
  • it should be noted that the present invention enlarges the original image in two ways to produce the first transition image and the second transition image: the first transition image gives a better magnification result in flat areas, while the second transition image gives a better result in edge areas. A weight output model is then built with a machine-learning algorithm and outputs fusion weights related to the edge information of the original image, and the two transition images are fused. When a target pixel lies toward an edge area, the second transition image takes the larger share in the fusion; when it lies toward a flat area, the first transition image takes the larger share. This achieves smooth transitions at image edges, improves the image magnification effect, and reduces the image magnification cost; the scheme is simple in design, easy to implement in a corresponding chip, and low in cost, and the fusion weights output by the weight output model established through machine learning are highly accurate.
  • the present invention provides an image magnification method.
  • the image magnification method includes the following steps: obtaining an original image with a first resolution; interpolating and enlarging the original image with a preset first interpolation algorithm to obtain a first transition image with a second resolution, the second resolution being greater than the first resolution; interpolating and enlarging the original image with a preset second interpolation algorithm and smoothing the enlarged image to obtain a second transition image with the second resolution; performing edge detection on the original image to obtain its edge information; establishing a weight output model and inputting the edge information of the original image into the weight output model to generate fusion weights for the target image; and fusing the first and second transition images according to the fusion weights and a preset fusion formula to obtain a target image with the second resolution. The method can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost.
  • the present invention also provides an image magnification device, which likewise can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost.


Abstract

The present invention provides an image magnification method and an image magnification device. The image magnification method includes the following steps: obtaining an original image with a first resolution; interpolating and enlarging the original image with a preset first interpolation algorithm to obtain a first transition image with a second resolution, the second resolution being greater than the first resolution; interpolating and enlarging the original image with a preset second interpolation algorithm and smoothing the enlarged image to obtain a second transition image with the second resolution; performing edge detection on the original image to obtain its edge information; establishing a weight output model and inputting the edge information of the original image into the weight output model to generate fusion weights for the target image; and fusing the first and second transition images according to the fusion weights and a preset fusion formula to obtain a target image with the second resolution. The method can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost.

Description

Image Magnification Method and Image Magnification Device

Technical Field
The present invention relates to the field of display technology, and in particular to an image magnification method and an image magnification device.
Background Art
With the rapid development of computer technology and modern communication technology, and with human society now in the information age, people's demand for image information is ever more pressing. Current digital image input devices all sample tiny areas of an image to generate corresponding pixels, forming dot-matrix image data; that is, for fixed image input conditions and a fixed image, the amount of obtainable data is relatively fixed.
As technology develops and market demand changes, consumers demand ever higher display quality from display devices; accordingly, display resolutions keep rising, and the demand for high-resolution video and signals keeps growing. However, many video files and signal sources still have relatively low resolution, and they must be enlarged before being shown on high-resolution display devices. Since magnification performance directly determines the quality of the displayed video, video display systems urgently need high-quality image magnification methods to improve the user's visual experience.
At present, commonly used image magnification methods are generally interpolation-based. Typical interpolation methods include nearest neighbor interpolation, bilinear interpolation, bicubic interpolation, and polynomial interpolation. Among these, nearest neighbor interpolation is the simplest, but it is also the most likely to produce discontinuous pixel values, leading to blocking artifacts and image blur; the quality of the magnified image is generally not ideal. Bilinear interpolation is more complex; it does not produce discontinuous pixel values and yields a higher-quality magnified image, but it can blur the edge contours and details of the subjects in the image to some extent, while bicubic and polynomial interpolation algorithms are more complex still.
Further, in the prior art the same algorithm is usually used to interpolate and enlarge both the flat areas and the edge areas of an image. When a simpler algorithm is selected, the final image often shows obvious jaggedness or distortion; when a more complex algorithm is selected, distortion can be avoided, but the whole computation takes longer and demands more of the hardware, so the magnification effect and the magnification cost cannot both be satisfied.
Summary of the Invention
An object of the present invention is to provide an image magnification method that can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost.
Another object of the present invention is to provide an image magnification device that can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost.
To achieve the above objects, the present invention provides an image magnification method, including the following steps:
Step S1: obtain an original image with a first resolution;
Step S2: interpolate and enlarge the original image with a preset first interpolation algorithm to obtain a first transition image with a second resolution, the second resolution being greater than the first resolution;
Step S3: interpolate and enlarge the original image with a preset second interpolation algorithm, and smooth the enlarged image to obtain a second transition image with the second resolution;
Step S4: perform edge detection on the original image to obtain edge information of the original image;
Step S5: establish a weight output model, and input the edge information of the original image into the weight output model to generate fusion weights for the target image;
Step S6: fuse the first transition image and the second transition image according to the fusion weights and a preset fusion formula to obtain a target image with the second resolution.
The first interpolation algorithm is nearest neighbor, bilinear, bicubic, or polynomial interpolation, and the second interpolation algorithm is nearest neighbor interpolation;
the smoothing in step S3 is performed by convolving the image enlarged in step S3 with a preset smoothing operator;
where the smoothing operator is any one of matrices 1 to 5:
Figure PCTCN2019085764-appb-000001
The original image includes a plurality of original pixels arranged in an array, the first transition image includes a plurality of first pixels arranged in an array, the second transition image includes a plurality of second pixels arranged in an array, and the target image includes a plurality of target pixels arranged in an array;
in step S4, the edge information of the original image includes the edge information of each original pixel in the original image;
in step S5, the edge information corresponding to each original pixel is input into the weight output model to generate the fusion weight of the target pixel corresponding to that original pixel's position;
in step S6, the preset fusion formula is:
Vp = (1 − λ) × Vcb + λ × Vs;
where Vp is the gray value of the target pixel, Vcb is the gray value of the first pixel corresponding to the target pixel's position, Vs is the gray value of the second pixel corresponding to the target pixel's position, and λ is the fusion weight of the target pixel, 0 ≤ λ ≤ 1.
The step of establishing the weight output model in step S5 specifically includes: acquiring multiple pieces of training data, and generating the weight output model through machine-learning training on the multiple pieces of training data;
where the method for obtaining the multiple pieces of training data is:
provide a training image with the first resolution, the training image including a plurality of training pixels arranged in an array;
perform edge detection on the training image to obtain the edge information of each training pixel;
interpolate and enlarge the training image with the preset first interpolation algorithm to obtain a first transition training image with the second resolution;
interpolate and enlarge the training image with the preset second interpolation algorithm, and smooth the enlarged image to obtain a second transition training image with the second resolution;
select a plurality of different fusion weights, and fuse the first and second transition training images according to the fusion formula and each of the different fusion weights to produce a plurality of training target images with the second resolution, each training target image including a plurality of training target pixels arranged in an array;
provide a standard target image with the second resolution corresponding to the training image, the standard target image including a plurality of standard target pixels arranged in an array;
for each standard target pixel, determine the training target pixel at the same position whose gray value differs least from that of the standard target pixel;
take the fusion weight that produced the minimal-difference training target pixel as the standard fusion weight corresponding to the standard target pixel at that position;
form multiple pieces of training data corresponding to the respective standard target pixels, each piece of training data including the standard fusion weight corresponding to a standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
Step S5 further includes: dividing the target image into multiple regions, calculating the average of the fusion weights of the target pixels in each region, and using that average as the fusion weight of every target pixel in the region.
The present invention also provides an image magnification device, including: an acquisition unit, a first magnification unit connected to the acquisition unit, a second magnification unit connected to the acquisition unit, an edge detection unit connected to the acquisition unit, a weight generation unit connected to the edge detection unit, and a fusion unit connected to the first magnification unit, the second magnification unit, and the weight generation unit;
the acquisition unit is used to acquire an original image with a first resolution;
the first magnification unit is used to interpolate and enlarge the original image with a preset first interpolation algorithm to obtain a first transition image with a second resolution, the second resolution being greater than the first resolution;
the second magnification unit is used to interpolate and enlarge the original image with a preset second interpolation algorithm and smooth the enlarged image to obtain a second transition image with the second resolution;
the edge detection unit is used to perform edge detection on the original image to generate edge information of the original image;
the weight generation unit is used to establish a weight output model and input the edge information of the original image into the weight output model to generate fusion weights for the target image;
the fusion unit is used to fuse the first transition image and the second transition image according to the fusion weights and a preset fusion formula to obtain a target image with the second resolution.
The first interpolation algorithm is nearest neighbor, bilinear, bicubic, or polynomial interpolation, and the second interpolation algorithm is nearest neighbor interpolation;
the second magnification unit performs smoothing by convolving the image it has interpolated and enlarged with a preset smoothing operator;
where the smoothing operator is any one of matrices 1 to 5:
Figure PCTCN2019085764-appb-000002
The original image includes a plurality of original pixels arranged in an array, the first transition image includes a plurality of first pixels arranged in an array, the second transition image includes a plurality of second pixels arranged in an array, and the target image includes a plurality of target pixels arranged in an array;
the edge information of the original image generated by the edge detection unit specifically includes the edge information of each original pixel in the original image;
the weight generation unit inputs the edge information corresponding to each original pixel into the weight output model to generate the fusion weight of the target pixel corresponding to that original pixel's position;
the fusion formula preset in the fusion unit is:
Vp = (1 − λ) × Vcb + λ × Vs;
where Vp is the gray value of the target pixel, Vcb is the gray value of the first pixel corresponding to the target pixel's position, Vs is the gray value of the second pixel corresponding to the target pixel's position, and λ is the fusion weight of the target pixel, 0 ≤ λ ≤ 1.
The weight generation unit generates the weight output model by acquiring multiple pieces of training data and performing machine-learning training on them;
where obtaining the multiple pieces of training data specifically includes:
providing a training image with the first resolution, the training image including a plurality of training pixels arranged in an array;
performing edge detection on the training image to obtain the edge information of each training pixel;
interpolating and enlarging the training image with the preset first interpolation algorithm to obtain a first transition training image with the second resolution;
interpolating and enlarging the training image with the preset second interpolation algorithm, and smoothing the enlarged image to obtain a second transition training image with the second resolution;
selecting a plurality of different fusion weights, and fusing the first and second transition training images according to the fusion formula and each of the different fusion weights to produce a plurality of training target images with the second resolution, each training target image including a plurality of training target pixels arranged in an array;
providing a standard target image with the second resolution corresponding to the training image, the standard target image including a plurality of standard target pixels arranged in an array;
for each standard target pixel, determining the training target pixel at the same position whose gray value differs least from that of the standard target pixel;
taking the fusion weight that produced the minimal-difference training target pixel as the standard fusion weight corresponding to the standard target pixel at that position;
forming multiple pieces of training data corresponding to the respective standard target pixels, each piece of training data including the standard fusion weight corresponding to a standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
The weight generation unit is also used to divide the target image into multiple regions, calculate the average of the fusion weights of the target pixels in each region, and use that average as the fusion weight of every target pixel in the region.
Beneficial effects of the present invention: the present invention provides an image magnification method including the following steps: obtaining an original image with a first resolution; interpolating and enlarging the original image with a preset first interpolation algorithm to obtain a first transition image with a second resolution, the second resolution being greater than the first resolution; interpolating and enlarging the original image with a preset second interpolation algorithm and smoothing the enlarged image to obtain a second transition image with the second resolution; performing edge detection on the original image to obtain its edge information; establishing a weight output model and inputting the edge information of the original image into the weight output model to generate fusion weights for the target image; and fusing the first and second transition images according to the fusion weights and a preset fusion formula to obtain a target image with the second resolution. The method can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost. The present invention also provides an image magnification device, which likewise can achieve smooth transitions at image edges, improve the image magnification effect, and reduce the image magnification cost.
Brief Description of the Drawings
For a further understanding of the features and technical content of the present invention, reference is made to the following detailed description of the present invention and the accompanying drawings; the drawings are provided for reference and illustration only and are not intended to limit the present invention.
In the drawings,
Figure 1 is a flowchart of the image magnification method of the present invention;
Figure 2 is a schematic diagram of the image magnification device of the present invention.
Detailed Description of the Embodiments
To further explain the technical means adopted by the present invention and their effects, a detailed description is given below in conjunction with preferred embodiments of the present invention and the accompanying drawings.
Referring to FIG. 1, the present invention provides an image magnification method comprising the following steps:
Step S1: acquiring an original image having a first resolution.
Specifically, the original image comprises a plurality of original pixels arranged in an array.
Step S2: interpolating and magnifying the original image with a preset first interpolation algorithm to obtain a first transition image having a second resolution, the second resolution being greater than the first resolution.
Specifically, the first interpolation algorithm is a nearest-neighbor, bilinear, bicubic, or polynomial interpolation algorithm.
Further, the first transition image comprises a plurality of first pixels arranged in an array.
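By way of a non-limiting illustration, the bilinear variant of the first interpolation algorithm in step S2 can be sketched as below. This is a minimal NumPy sketch for a single-channel image; the function name, half-pixel coordinate convention, and edge clamping are assumptions of this sketch, not requirements of the method.

```python
import numpy as np

def bilinear_upscale(img, scale):
    """Magnify a 2D gray-scale array by an integer `scale` using bilinear interpolation."""
    h, w = img.shape
    H, W = h * scale, w * scale
    # Map each output pixel centre back into source coordinates (half-pixel convention).
    ys = (np.arange(H) + 0.5) / scale - 0.5
    xs = (np.arange(W) + 0.5) / scale - 0.5
    y0 = np.clip(np.floor(ys).astype(int), 0, h - 1)
    x0 = np.clip(np.floor(xs).astype(int), 0, w - 1)
    y1 = np.clip(y0 + 1, 0, h - 1)
    x1 = np.clip(x0 + 1, 0, w - 1)
    wy = np.clip(ys - y0, 0.0, 1.0)[:, None]   # vertical blend factors
    wx = np.clip(xs - x0, 0.0, 1.0)[None, :]   # horizontal blend factors
    top = img[y0][:, x0] * (1 - wx) + img[y0][:, x1] * wx
    bot = img[y1][:, x0] * (1 - wx) + img[y1][:, x1] * wx
    return top * (1 - wy) + bot * wy
```

A constant image stays constant under this upscale, and output values never leave the range of the four neighbouring source pixels, which is the behaviour expected of bilinear interpolation in flat regions.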
Step S3: interpolating and magnifying the original image with a preset second interpolation algorithm and smoothing the interpolated image to obtain a second transition image having the second resolution.
Specifically, the second transition image comprises a plurality of second pixels arranged in an array.
Further, the second interpolation algorithm is a nearest-neighbor interpolation algorithm.
Specifically, the smoothing is performed by convolving the image interpolated and magnified in step S3 with a preset smoothing operator;
wherein the smoothing operator is any one of Matrix 1 to Matrix 5:
Figure PCTCN2019085764-appb-000003
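Step S3 — nearest-neighbor magnification followed by convolution with a smoothing operator — can be sketched as below. The actual Matrix 1 to Matrix 5 operators appear only in the figure, so a 3×3 averaging operator is assumed here purely as a stand-in; the function names and edge-replication padding are likewise assumptions of this sketch.

```python
import numpy as np

def nearest_upscale(img, scale):
    """Nearest-neighbour magnification: replicate each source pixel scale x scale times."""
    return np.repeat(np.repeat(img, scale, axis=0), scale, axis=1)

def smooth(img, operator):
    """Convolve `img` with a smoothing operator, replicating edges so the size is kept."""
    k = operator.shape[0] // 2
    padded = np.pad(img, k, mode='edge')
    out = np.zeros(img.shape, dtype=float)
    h, w = img.shape
    for dy in range(operator.shape[0]):
        for dx in range(operator.shape[1]):
            out += operator[dy, dx] * padded[dy:dy + h, dx:dx + w]
    return out

# Placeholder smoothing operator (weights sum to 1, so flat regions are preserved).
box = np.full((3, 3), 1.0 / 9.0)
```

Because the assumed operator's weights sum to one, a flat image passes through the magnify-then-smooth chain unchanged, while blocky edges produced by pixel replication are softened.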
Step S4: performing edge detection on the original image to obtain edge information of the original image.
Specifically, in step S4, edge detection is performed on the original image with the Sobel operator.
Specifically, in step S4, the edge information of the original image comprises the edge information of each original pixel in the original image.
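The Sobel edge detection of step S4 can be sketched as below: the horizontal and vertical Sobel kernels are convolved with the image and the gradient magnitude serves as per-pixel edge information. The gradient-magnitude formulation and edge-replication padding are assumptions of this sketch; the patent only requires that the Sobel operator be used.

```python
import numpy as np

# Standard Sobel kernels for horizontal and vertical gradients.
SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
SOBEL_Y = SOBEL_X.T

def sobel_edges(img):
    """Return the per-pixel Sobel gradient magnitude of a 2D gray-scale array."""
    padded = np.pad(img, 1, mode='edge')
    h, w = img.shape
    gx = np.zeros((h, w), dtype=float)
    gy = np.zeros((h, w), dtype=float)
    for dy in range(3):
        for dx in range(3):
            win = padded[dy:dy + h, dx:dx + w]
            gx += SOBEL_X[dy, dx] * win
            gy += SOBEL_Y[dy, dx] * win
    return np.hypot(gx, gy)  # gradient magnitude as the edge information
```

Flat regions yield zero magnitude, while an intensity step yields large values along the step, which is exactly the signal the weight output model consumes.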
Step S5: establishing a weight output model, and feeding the edge information of the original image into the weight output model to generate fusion weights for the target image.
Specifically, the target image comprises a plurality of target pixels arranged in an array.
Specifically, in step S5 the weight output model is established through machine learning.
Further, establishing the weight output model in step S5 specifically comprises: acquiring a plurality of pieces of training data, and generating the weight output model through machine-learning training based on the plurality of pieces of training data; the training data reflect the association between the edge information of an original image and the fusion weights of a target image, so that the weight output model can be generated through machine-learning training.
Specifically, the plurality of pieces of training data are acquired as follows:
providing a training image having the first resolution, the training image comprising a plurality of training pixels arranged in an array;
performing edge detection on the training image to acquire the edge information of each training pixel;
interpolating and magnifying the training image with the preset first interpolation algorithm to obtain a first transition training image having the second resolution;
interpolating and magnifying the training image with the preset second interpolation algorithm and smoothing the interpolated image to obtain a second transition training image having the second resolution;
selecting a plurality of different fusion weights, and fusing the first transition training image and the second transition training image according to the fusion formula and the plurality of different fusion weights to generate a plurality of training target images having the second resolution, each training target image comprising a plurality of training target pixels arranged in an array;
providing a standard target image having the second resolution corresponding to the training image, the standard target image comprising a plurality of standard target pixels arranged in an array;
for each standard target pixel, determining the training target pixel whose gray-scale value differs least from the gray-scale value of that standard target pixel among the training target pixels at the same position;
taking the fusion weight that produced the training target pixel with the smallest difference as the standard fusion weight corresponding to the standard target pixel at that position;
forming a plurality of pieces of training data respectively corresponding to the standard target pixels, each piece of training data comprising the standard fusion weight corresponding to one standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
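The label-generation step above — selecting, for each standard target pixel, the candidate fusion weight whose fused value comes closest to the standard image — can be sketched as follows. This is a minimal sketch; the function name and the vectorised candidate search are assumptions, and any discrete set of candidate weights in [0, 1] may be used.

```python
import numpy as np

def standard_fusion_weights(t1, t2, standard, candidates):
    """For each pixel, pick the candidate weight lambda whose fused value
    (1 - lambda) * t1 + lambda * t2 is closest to the standard target image.
    t1, t2, standard are same-shape 2D arrays; candidates is a 1D sequence."""
    candidates = np.asarray(candidates, dtype=float)
    # errors[k] holds |fused-with-candidate-k - standard| for every pixel.
    errors = np.stack([np.abs((1 - lam) * t1 + lam * t2 - standard)
                       for lam in candidates])
    return candidates[np.argmin(errors, axis=0)]
```

Each resulting per-pixel weight is paired with that pixel's edge information to form one piece of training data for the machine-learning model.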
Specifically, in step S5, the edge information corresponding to each original pixel is fed into the weight output model to generate the fusion weight of the target pixel corresponding to the position of that original pixel.
Step S6: fusing the first transition image and the second transition image according to the fusion weights and a preset fusion formula to obtain a target image having the second resolution.
Specifically, in step S6, the preset fusion formula is:
Vp=(1-λ)×Vcb+λ×Vs;
where Vp is the gray-scale value of a target pixel, Vcb is the gray-scale value of the first pixel corresponding to the position of that target pixel, Vs is the gray-scale value of the second pixel corresponding to the position of that target pixel, and λ is the fusion weight of that target pixel, 0≤λ≤1.
Specifically, the original pixels, first pixels, second pixels, and target pixels each comprise a red component, a green component, and a blue component. In steps S2 and S3, the gray-scale values of the red, green, and blue components of the original pixels are processed separately to obtain the gray-scale values of the red, green, and blue components of the first and second pixels; in step S6, the gray-scale values of the red, green, and blue components of the first and second pixels are fused separately to obtain the gray-scale values of the red, green, and blue components of the target pixels.
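The fusion formula Vp=(1-λ)×Vcb+λ×Vs of step S6, applied per pixel and per colour component, can be sketched as below. The broadcasting of one λ map across the three colour channels follows the description that the components are fused separately; the function name is an assumption of this sketch.

```python
import numpy as np

def fuse_images(first, second, weights):
    """Fuse two transition images with a per-pixel weight map:
    Vp = (1 - lambda) * Vcb + lambda * Vs.
    For an H x W x 3 RGB pair, the same H x W weight map is broadcast
    over the red, green, and blue components."""
    lam = weights[..., None] if first.ndim == 3 else weights
    return (1 - lam) * first + lam * second
```

With λ = 0 the target equals the first transition image (better in flat regions); with λ = 1 it equals the second (better at edges); intermediate λ blends the two.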
Further, to make the image transition even smoother, step S5 further comprises, after the fusion weight of each target pixel has been obtained, an averaging step: the target image is divided into a plurality of regions, the mean of the fusion weights of the target pixels in each region is calculated, and that mean is taken as the fusion weight of each target pixel in the region; preferably, each region comprises 3×3 pixels.
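The averaging step above can be sketched as a block-wise mean over the weight map; a 3×3 block matches the preferred region size, and the function name and the handling of partial blocks at the borders are assumptions of this sketch.

```python
import numpy as np

def average_weights_by_region(weights, block=3):
    """Replace every fusion weight inside each block x block region by the region mean."""
    out = weights.astype(float).copy()
    h, w = weights.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            region = weights[y:y + block, x:x + block]
            out[y:y + block, x:x + block] = region.mean()
    return out
```

Sharing one weight per small region suppresses pixel-to-pixel jitter in λ and yields the more gradual transition the description aims for.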
It should be noted that the present invention magnifies the original image by two different methods to produce the first transition image and the second transition image: the first transition image gives a better magnification result in flat regions, while the second transition image gives a better magnification result in edge regions. A weight output model is then established through a machine-learning algorithm, which outputs a fusion weight related to the edge information of the original image, and the first and second transition images are fused accordingly. When a target pixel leans toward an edge region, the second transition image takes a larger share in the fusion; when a target pixel leans toward a flat region, the first transition image takes a larger share. This achieves a smooth transition at image edges, improves the magnification result, and reduces the cost of image magnification. The scheme is simple in design, easy to implement as a corresponding chip at low cost, and the fusion weights output by the machine-learning weight output model are highly accurate.
Referring to FIG. 2, the present invention further provides an image magnification device, comprising: an acquisition unit 10, a first magnification unit 20 connected to the acquisition unit 10, a second magnification unit 30 connected to the acquisition unit 10, an edge detection unit 40 connected to the acquisition unit 10, a weight generation unit 50 connected to the edge detection unit 40, and a fusion unit 60 connected to the first magnification unit 20, the second magnification unit 30, and the weight generation unit 50;
the acquisition unit 10 is configured to acquire an original image having a first resolution;
the first magnification unit 20 is configured to interpolate and magnify the original image with a preset first interpolation algorithm to obtain a first transition image having a second resolution, the second resolution being greater than the first resolution;
the second magnification unit 30 is configured to interpolate and magnify the original image with a preset second interpolation algorithm and to smooth the interpolated image, obtaining a second transition image having the second resolution;
the edge detection unit 40 is configured to perform edge detection on the original image to generate edge information of the original image;
the weight generation unit 50 is configured to establish a weight output model and to feed the edge information of the original image into the weight output model, generating fusion weights for the target image;
the fusion unit 60 is configured to fuse the first transition image and the second transition image according to the fusion weights and a preset fusion formula, obtaining a target image having the second resolution.
Specifically, the first interpolation algorithm is nearest-neighbor, bilinear, bicubic, or polynomial interpolation, and the second interpolation algorithm is nearest-neighbor interpolation;
the second magnification unit 30 performs smoothing by convolving the image interpolated and magnified by the second magnification unit 30 with a preset smoothing operator;
wherein the smoothing operator is any one of Matrix 1 to Matrix 5:
Figure PCTCN2019085764-appb-000004
Specifically, the edge detection unit 40 performs edge detection on the original image with the Sobel operator.
Specifically, the original image comprises a plurality of original pixels arranged in an array, the first transition image comprises a plurality of first pixels arranged in an array, the second transition image comprises a plurality of second pixels arranged in an array, and the target image comprises a plurality of target pixels arranged in an array;
the edge information of the original image generated by the edge detection unit 40 specifically comprises the edge information of each original pixel in the original image;
the weight generation unit 50 feeds the edge information corresponding to each original pixel into the weight output model, generating the fusion weight of the target pixel corresponding to the position of that original pixel;
the fusion formula preset in the fusion unit 60 is:
Vp=(1-λ)×Vcb+λ×Vs;
where Vp is the gray-scale value of a target pixel, Vcb is the gray-scale value of the first pixel corresponding to the position of that target pixel, Vs is the gray-scale value of the second pixel corresponding to the position of that target pixel, and λ is the fusion weight of that target pixel, 0≤λ≤1.
Specifically, the weight generation unit 50 acquires a plurality of pieces of training data and generates the weight output model through machine-learning training based on the plurality of pieces of training data;
wherein acquiring the plurality of pieces of training data specifically comprises:
providing a training image having the first resolution, the training image comprising a plurality of training pixels arranged in an array;
performing edge detection on the training image to acquire the edge information of each training pixel;
interpolating and magnifying the training image with the preset first interpolation algorithm to obtain a first transition training image having the second resolution;
interpolating and magnifying the training image with the preset second interpolation algorithm and smoothing the interpolated image to obtain a second transition training image having the second resolution;
selecting a plurality of different fusion weights, and fusing the first transition training image and the second transition training image according to the fusion formula and the plurality of different fusion weights to generate a plurality of training target images having the second resolution, each training target image comprising a plurality of training target pixels arranged in an array;
providing a standard target image having the second resolution corresponding to the training image, the standard target image comprising a plurality of standard target pixels arranged in an array;
for each standard target pixel, determining the training target pixel whose gray-scale value differs least from the gray-scale value of that standard target pixel among the training target pixels at the same position;
taking the fusion weight that produced the training target pixel with the smallest difference as the standard fusion weight corresponding to the standard target pixel at that position;
forming a plurality of pieces of training data respectively corresponding to the standard target pixels, each piece of training data comprising the standard fusion weight corresponding to one standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
Specifically, the original pixels, first pixels, second pixels, and target pixels each comprise a red component, a green component, and a blue component. The first magnification unit 20 and the second magnification unit 30 process the gray-scale values of the red, green, and blue components of the original pixels separately to obtain the gray-scale values of the red, green, and blue components of the first and second pixels, and the fusion unit 60 fuses the gray-scale values of the red, green, and blue components of the first and second pixels separately to obtain the gray-scale values of the red, green, and blue components of the target pixels.
Further, to make the image transition even smoother, the weight generation unit 50 further performs, after the fusion weight of each target pixel has been obtained, an averaging step: the target image is divided into a plurality of regions, the mean of the fusion weights of the target pixels in each region is calculated, and that mean is taken as the fusion weight of each target pixel in the region; preferably, each region comprises 3×3 pixels.
It should be noted that the present invention magnifies the original image by two different methods to produce the first transition image and the second transition image: the first transition image gives a better magnification result in flat regions, while the second transition image gives a better magnification result in edge regions. A weight output model is then established through a machine-learning algorithm, which outputs a fusion weight related to the edge information of the original image, and the first and second transition images are fused accordingly. When a target pixel leans toward an edge region, the second transition image takes a larger share in the fusion; when a target pixel leans toward a flat region, the first transition image takes a larger share. This achieves a smooth transition at image edges, improves the magnification result, and reduces the cost of image magnification. The scheme is simple in design, easy to implement as a corresponding chip at low cost, and the fusion weights output by the machine-learning weight output model are highly accurate.
In summary, the present invention provides an image magnification method comprising the following steps: acquiring an original image having a first resolution; interpolating and magnifying the original image with a preset first interpolation algorithm to obtain a first transition image having a second resolution, the second resolution being greater than the first resolution; interpolating and magnifying the original image with a preset second interpolation algorithm and smoothing the interpolated image to obtain a second transition image having the second resolution; performing edge detection on the original image to obtain edge information of the original image; establishing a weight output model and feeding the edge information of the original image into the weight output model to generate fusion weights for the target image; and fusing the first transition image and the second transition image according to the fusion weights and a preset fusion formula to obtain a target image having the second resolution. The method achieves a smooth transition at image edges, improves the magnification result, and reduces the cost of image magnification. The present invention further provides an image magnification device that likewise achieves a smooth transition at image edges, improves the magnification result, and reduces the cost of image magnification.
In view of the above, those of ordinary skill in the art may make various other corresponding changes and modifications according to the technical solution and technical concept of the present invention, and all such changes and modifications shall fall within the protection scope of the claims of the present invention.

Claims (10)

  1. An image magnification method, comprising the following steps:
    Step S1: acquiring an original image having a first resolution;
    Step S2: interpolating and magnifying the original image with a preset first interpolation algorithm to obtain a first transition image having a second resolution, the second resolution being greater than the first resolution;
    Step S3: interpolating and magnifying the original image with a preset second interpolation algorithm and smoothing the interpolated image to obtain a second transition image having the second resolution;
    Step S4: performing edge detection on the original image to obtain edge information of the original image;
    Step S5: establishing a weight output model, and feeding the edge information of the original image into the weight output model to generate fusion weights for a target image;
    Step S6: fusing the first transition image and the second transition image according to the fusion weights and a preset fusion formula to obtain a target image having the second resolution.
  2. The image magnification method according to claim 1, wherein the first interpolation algorithm is a nearest-neighbor, bilinear, bicubic, or polynomial interpolation algorithm, and the second interpolation algorithm is a nearest-neighbor interpolation algorithm;
    in step S3, the smoothing is performed by convolving the image interpolated and magnified in step S3 with a preset smoothing operator;
    wherein the smoothing operator is any one of Matrix 1 to Matrix 5:
    Figure PCTCN2019085764-appb-100001
  3. The image magnification method according to claim 1, wherein the original image comprises a plurality of original pixels arranged in an array, the first transition image comprises a plurality of first pixels arranged in an array, the second transition image comprises a plurality of second pixels arranged in an array, and the target image comprises a plurality of target pixels arranged in an array;
    in step S4, the edge information of the original image comprises the edge information of each original pixel in the original image;
    in step S5, the edge information corresponding to each original pixel is fed into the weight output model to generate the fusion weight of the target pixel corresponding to the position of that original pixel;
    in step S6, the preset fusion formula is:
    Vp=(1-λ)×Vcb+λ×Vs;
    where Vp is the gray-scale value of a target pixel, Vcb is the gray-scale value of the first pixel corresponding to the position of that target pixel, Vs is the gray-scale value of the second pixel corresponding to the position of that target pixel, and λ is the fusion weight of that target pixel, 0≤λ≤1.
  4. The image magnification method according to claim 3, wherein establishing the weight output model in step S5 specifically comprises: acquiring a plurality of pieces of training data, and generating the weight output model through machine-learning training based on the plurality of pieces of training data;
    wherein the plurality of pieces of training data are acquired as follows:
    providing a training image having the first resolution, the training image comprising a plurality of training pixels arranged in an array;
    performing edge detection on the training image to acquire the edge information of each training pixel;
    interpolating and magnifying the training image with the preset first interpolation algorithm to obtain a first transition training image having the second resolution;
    interpolating and magnifying the training image with the preset second interpolation algorithm and smoothing the interpolated image to obtain a second transition training image having the second resolution;
    selecting a plurality of different fusion weights, and fusing the first transition training image and the second transition training image according to the fusion formula and the plurality of different fusion weights to generate a plurality of training target images having the second resolution, each training target image comprising a plurality of training target pixels arranged in an array;
    providing a standard target image having the second resolution corresponding to the training image, the standard target image comprising a plurality of standard target pixels arranged in an array;
    for each standard target pixel, determining the training target pixel whose gray-scale value differs least from the gray-scale value of that standard target pixel among the training target pixels at the same position;
    taking the fusion weight that produced the training target pixel with the smallest difference as the standard fusion weight corresponding to the standard target pixel at that position;
    forming a plurality of pieces of training data respectively corresponding to the standard target pixels, each piece of training data comprising the standard fusion weight corresponding to one standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
  5. The image magnification method according to claim 3, wherein step S5 further comprises: dividing the target image into a plurality of regions, calculating the mean of the fusion weights of the target pixels in each region, and taking that mean as the fusion weight of each target pixel in the region.
  6. An image magnification device, comprising: an acquisition unit, a first magnification unit connected to the acquisition unit, a second magnification unit connected to the acquisition unit, an edge detection unit connected to the acquisition unit, a weight generation unit connected to the edge detection unit, and a fusion unit connected to the first magnification unit, the second magnification unit, and the weight generation unit;
    the acquisition unit is configured to acquire an original image having a first resolution;
    the first magnification unit is configured to interpolate and magnify the original image with a preset first interpolation algorithm to obtain a first transition image having a second resolution, the second resolution being greater than the first resolution;
    the second magnification unit is configured to interpolate and magnify the original image with a preset second interpolation algorithm and to smooth the interpolated image, obtaining a second transition image having the second resolution;
    the edge detection unit is configured to perform edge detection on the original image to generate edge information of the original image;
    the weight generation unit is configured to establish a weight output model and to feed the edge information of the original image into the weight output model, generating fusion weights for the target image;
    the fusion unit is configured to fuse the first transition image and the second transition image according to the fusion weights and a preset fusion formula, obtaining a target image having the second resolution.
  7. The image magnification device according to claim 6, wherein the first interpolation algorithm is nearest-neighbor, bilinear, bicubic, or polynomial interpolation, and the second interpolation algorithm is nearest-neighbor interpolation;
    the second magnification unit performs smoothing by convolving the image interpolated and magnified by the second magnification unit with a preset smoothing operator;
    wherein the smoothing operator is any one of Matrix 1 to Matrix 5:
    Figure PCTCN2019085764-appb-100002
  8. The image magnification device according to claim 6, wherein the original image comprises a plurality of original pixels arranged in an array, the first transition image comprises a plurality of first pixels arranged in an array, the second transition image comprises a plurality of second pixels arranged in an array, and the target image comprises a plurality of target pixels arranged in an array;
    the edge information of the original image generated by the edge detection unit specifically comprises the edge information of each original pixel in the original image;
    the weight generation unit feeds the edge information corresponding to each original pixel into the weight output model, generating the fusion weight of the target pixel corresponding to the position of that original pixel;
    the fusion formula preset in the fusion unit is:
    Vp=(1-λ)×Vcb+λ×Vs;
    where Vp is the gray-scale value of a target pixel, Vcb is the gray-scale value of the first pixel corresponding to the position of that target pixel, Vs is the gray-scale value of the second pixel corresponding to the position of that target pixel, and λ is the fusion weight of that target pixel, 0≤λ≤1.
  9. The image magnification device according to claim 8, wherein the weight generation unit acquires a plurality of pieces of training data and generates the weight output model through machine-learning training based on the plurality of pieces of training data;
    wherein acquiring the plurality of pieces of training data specifically comprises:
    providing a training image having the first resolution, the training image comprising a plurality of training pixels arranged in an array;
    performing edge detection on the training image to acquire the edge information of each training pixel;
    interpolating and magnifying the training image with the preset first interpolation algorithm to obtain a first transition training image having the second resolution;
    interpolating and magnifying the training image with the preset second interpolation algorithm and smoothing the interpolated image to obtain a second transition training image having the second resolution;
    selecting a plurality of different fusion weights, and fusing the first transition training image and the second transition training image according to the fusion formula and the plurality of different fusion weights to generate a plurality of training target images having the second resolution, each training target image comprising a plurality of training target pixels arranged in an array;
    providing a standard target image having the second resolution corresponding to the training image, the standard target image comprising a plurality of standard target pixels arranged in an array;
    for each standard target pixel, determining the training target pixel whose gray-scale value differs least from the gray-scale value of that standard target pixel among the training target pixels at the same position;
    taking the fusion weight that produced the training target pixel with the smallest difference as the standard fusion weight corresponding to the standard target pixel at that position;
    forming a plurality of pieces of training data respectively corresponding to the standard target pixels, each piece of training data comprising the standard fusion weight corresponding to one standard target pixel and the edge information of the training pixel corresponding to that standard target pixel.
  10. The image magnification device according to claim 8, wherein the weight generation unit is further configured to divide the target image into a plurality of regions, calculate the mean of the fusion weights of the target pixels in each region, and take that mean as the fusion weight of each target pixel in the region.
PCT/CN2019/085764 2019-03-12 2019-05-07 Image magnification method and image magnification device WO2020181641A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910185936.9 2019-03-12
CN201910185936.9A CN109978766B (zh) 2019-03-12 2019-03-12 Image magnification method and image magnification device

Publications (1)

Publication Number Publication Date
WO2020181641A1 true WO2020181641A1 (zh) 2020-09-17

Family

ID=67078601

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2019/085764 WO2020181641A1 (zh) 2019-03-12 2019-05-07 图像放大方法及图像放大装置

Country Status (2)

Country Link
CN (1) CN109978766B (zh)
WO (1) WO2020181641A1 (zh)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112381714A (zh) * 2020-10-30 2021-02-19 南阳柯丽尔科技有限公司 Image processing method and apparatus, storage medium, and device

Citations (3)

Publication number Priority date Publication date Assignee Title
US20100166334A1 (en) * 2008-12-29 2010-07-01 Arcsoft Hangzhou Co., Ltd. Method for magnifying images and videos
CN106204454A (zh) * 2016-01-26 2016-12-07 Northwestern Polytechnical University High-precision fast image interpolation method based on texture-edge adaptive data fusion
CN106709875A (zh) * 2016-12-30 2017-05-24 Beijing University of Technology Method for restoring compressed low-resolution images based on a joint deep network

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
KR100548206B1 (ko) * 2003-10-08 2006-02-02 Samsung Electronics Co., Ltd. Digital image processing apparatus and image processing method thereof
US8260087B2 (en) * 2007-01-22 2012-09-04 Sharp Laboratories Of America, Inc. Image upsampling technique
EP2237218A4 (en) * 2007-12-25 2016-03-23 Nec Corp Image Processing Device, Image Processing Method, Image Decomposition Device, Image Compilation Device, Image Transmission System, and Storage Media
CN102800069A (zh) * 2012-05-22 2012-11-28 Hunan University Image super-resolution method fusing soft-decision adaptive interpolation and bicubic interpolation
CN102842111B (zh) * 2012-07-09 2015-03-18 Xu Dan Compensation method and device for a magnified image
CN104299185A (zh) * 2014-09-26 2015-01-21 BOE Technology Group Co., Ltd. Image magnification method, image magnification device and display device


Also Published As

Publication number Publication date
CN109978766A (zh) 2019-07-05
CN109978766B (zh) 2020-10-16


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19918882

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19918882

Country of ref document: EP

Kind code of ref document: A1