WO2019205603A1 - Image blur detection method and apparatus, computer device, and readable storage medium - Google Patents

Image blur detection method and apparatus, computer device, and readable storage medium

Info

Publication number
WO2019205603A1
WO2019205603A1 (application PCT/CN2018/116538 / CN2018116538W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
detected
variance
original image
edge
Prior art date
Application number
PCT/CN2018/116538
Other languages
English (en)
French (fr)
Inventor
沈操
Original Assignee
北京大米科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 北京大米科技有限公司
Publication of WO2019205603A1

Classifications

    • G — PHYSICS
    • G06 — COMPUTING; CALCULATING OR COUNTING
    • G06T — IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 — Image analysis; G06T 7/0002 — Inspection of images, e.g. flaw detection
    • G06T 5/00 — Image enhancement or restoration; G06T 5/20 — using local operators
    • G06T 7/10 — Segmentation; edge detection; G06T 7/13 — Edge detection
    • G06T 2207/00 — Indexing scheme for image analysis or image enhancement
    • G06T 2207/10016 — Image acquisition modality: Video; Image sequence
    • G06T 2207/20024 — Special algorithmic details: Filtering details
    • G06T 2207/20192 — Image enhancement details: Edge enhancement; Edge preservation
    • G06T 2207/30168 — Subject of image: Image quality inspection
    • G06T 2207/30201 — Subject of image: Human being; Person; Face

Definitions

  • the present invention relates to the field of image detection technologies, and in particular, to an image blur detection method and apparatus, a computer device, and a readable storage medium.
  • some frames of the video are usually taken as the cover of the video, and the cover should be representative of the video.
  • the first aspect of the present invention provides an image blur detection method, including the following steps: acquiring an input detection image; acquiring edge values in the detection image to calculate a variance of the edge values; and detecting the image sharpness of the detection image according to the variance of the edge values.
  • acquiring the edge values in the detection image to calculate the variance of the edge values comprises: filtering the detection image to obtain an edge value for each pixel; calculating the mean of the edge values of all pixels; and calculating the variance of the edge values in the detection image according to the mean.
  • detecting the image sharpness of the detection image according to the variance of the edge values comprises: taking the variance as a sharpness metric, setting a sharpness threshold, and comparing the variance with the threshold, the detection image being detected as sharp when the variance is greater than the threshold and as blurred when it is less than the threshold.
  • a face recognition step is further included before the step of acquiring the input detection image.
  • the face recognition step comprises: acquiring an original image; and performing face recognition on the original image to output the detection image.
  • performing face recognition on the original image to output the detection image comprises: when no face is detected in the original image, outputting the original image as the detection image; and when a face is detected, outputting the image region of the original image that contains the face as the detection image.
  • outputting the image region of the original image that contains the face as the detection image includes: when exactly one face is detected, selecting the image region containing that face as the detection image for output; and when multiple faces are detected, selecting the image portion containing the face with the largest area as the detection image for output.
  • a second aspect of the present invention provides an image blur detection apparatus, including:
  • An acquisition module configured to acquire an input detection image
  • a calculation module configured to acquire edge values in the detection image to calculate a variance of the edge values;
  • a detection module configured to detect the image sharpness of the detection image according to the variance of the edge values.
  • a face recognition module is further included for acquiring an original image and performing face recognition on the original image to output a detection image.
  • a third aspect of the invention provides a computer device comprising a memory, a processor, and a computer program stored on the memory and operable on the processor, the processor implementing the method described above when the program is executed.
  • a fourth aspect of the invention provides a computer readable storage medium having stored therein instructions that, when executed on a computer, cause the computer to perform the method described above.
  • the technical solution of the present invention provides, in the process of selecting a video cover image, a function for detecting whether the detection image is sharp.
  • by acquiring the edge values of the detection image and calculating their variance as a metric of image sharpness, the sharpness of the image can be detected effectively, the amount of computation is reduced, and detection efficiency is improved.
  • in the process of performing face recognition on the original image, the image portion containing the largest face area is selected as the detection image for output, which highlights the face region of the original image and prevents the face-containing detection image from being disturbed by other background content.
  • FIG. 1 is a block diagram showing the steps of an image blur detection method provided by an embodiment of the present invention.
  • Figure 2 shows a histogram of the edges of a sharp image obtained after an experiment
  • Figure 3 is an enlarged view of Figure 2;
  • Figure 4 shows a histogram of the edges of the blurred image obtained after the experiment
  • Figure 5 is an enlarged view of Figure 4.
  • FIG. 6 is a structural block diagram of an image blur degree detecting apparatus according to an embodiment of the present invention.
  • FIG. 7 is a schematic structural diagram of a computer device according to an embodiment of the present invention.
  • an embodiment of the present invention provides an image blur detection method, including the following steps: acquiring an input detection image; acquiring edge values in the detection image to calculate a variance of the edge values; and detecting the image sharpness of the detection image according to the variance of the edge values.
  • in the image blur detection method provided by the present invention, the edge values of the detection image are acquired and their variance is calculated as the metric, so that the image sharpness of the detection image is detected according to the variance of the edge values.
  • in this way the sharpness of the detection image can be detected quickly and the user can conveniently select a high-sharpness detection image as the cover of the video, which reduces the amount of computation and improves detection efficiency.
  • the edge value of the image is calculated in order to extract the high frequency information of the image, that is, the detail information.
  • calculating the variance of the image's edge values yields the distribution of the edge values: if the edge values are rich — spanning from very small to very large, with a wide distribution — the variance of the edge values is large; if the edge values are not rich — almost all very small (a blurred or uniform image has almost no edge values) rather than spanning from small to large, with a very narrow distribution range — the variance of the edge values is small.
  • the advantage of calculating the variance is that it avoids the influence of the image size or the size of the image region.
  • what matters is the variance of the edges, that is, the richness of the edges and the distribution range of the edge values, which is independent of the size of the image or of the region; this ensures that images of different sizes, or regions of different sizes, can be compared reasonably.
  • furthermore, the edge values and the edge-value variance we calculate are independent of the content of the image, which ensures that detection images with different content can be compared reasonably.
  • Fig. 2 is a histogram of the edge values of the sharp image obtained in an experiment, and Fig. 3 is an enlarged view of Fig. 2;
  • Fig. 4 is a histogram of the edge values of the blurred image obtained in the experiment, and Fig. 5 is an enlarged view of Fig. 4;
  • in the histograms, the y-axis is the number of pixels in the image and the x-axis is the magnitude of the edge value, and the variance of the image's edge values corresponds to the spread width of the histogram;
  • it can be concluded from Figs. 2 and 3 that the edge values of the sharp image spread over a wide range — some edge values are 0 while others are large or small (edge values are both positive and negative) — most of the edges in the image are rich, the mean of the sharp image's edge values is -0.0121, and the variance is 445.9466;
  • the edge values of the blurred image are concentrated in a very narrow region, a very large number of the edge values are 0 (most of the edge values in the image are 0), the mean of the blurred image's edge values is -0.0025, and the variance is 32.8890.
  • a sharp image has rich edges — strong edges, relatively weak edges, and even flat regions without edges — and this richness manifests as a large variance with a relatively wide data distribution; a blurred image has almost no edges, consisting of flat regions with edge values that are almost all 0, which manifests as a small edge variance with the data concentrated in a narrow range.
  • the step of acquiring the edge value in the detected image to calculate the variance of the edge value includes:
  • S1: filter the detection image to obtain the edge value of each pixel in the detection image.
  • the filter adopted in the present invention is a high-pass filter, and its function is to extract high-frequency components in the detected image, that is, the edge values we need, and the formula is as follows:
  • P[i][j] is the pixel value of the pixel of the i-th row and the j-th column in the detected image
  • Ap is the edge value of the P-th pixel in the detected image
  • alternatively, the edge value in the x direction and the edge value in the y direction of the detection image may be calculated separately and then added to obtain the edge value of each pixel in the detection image, which makes it possible to extract edges in a specific direction; the filter formulas are as follows:
  • dx is the filter formula in the x direction and dy is the filter formula in the y direction.
  • dx and dy are also high-pass filters: dx computes only the x-direction edge values of the detection image, and dy computes only the y-direction edge values.
  • the mean of the edge values of all pixels in the detection image is calculated by the following formula:
  • N is the number of pixels;
  • μ is the mean of the edge values of all pixels.
  • the variance of the edge value in the detected image is calculated by the following formula:
  • V is the variance
  • detecting the image sharpness of the detection image according to the variance of the edge values includes: taking the variance of the edge values as the sharpness metric; setting a sharpness threshold; comparing the variance with the threshold; detecting the detection image as sharp when the variance is greater than the threshold; and detecting it as blurred when the variance is less than the threshold.
  • the variance can thus be used as the metric for judging sharpness: the user can preset a sharpness threshold according to his or her own requirements and compare the calculated variance with that threshold; when the variance of the edge values is greater than the threshold the detection image is detected as sharp, and when it is less than the threshold the detection image is detected as blurred.
  • the image blur detection method further includes a step of face recognition, and the step of the face recognition includes:
  • Face recognition is performed on the original image to output a detection image.
  • face recognition mainly uses common face detection technology, generally a deep-learning-based method or a method based on HOG (histogram of oriented gradients) features or Haar features, to find the face region, which is represented by a rectangular box.
  • performing face recognition on the original image to output the detection image includes: when no face is detected in the original image, outputting the original image as the detection image; and when a face is detected, outputting the image region of the original image that contains the face as the detection image.
  • the original image is monitored by the face recognition technology: when no face is detected, the original image itself is output as the detection image, and when a face is detected, the image region containing the face is output as the detection image; that is, in face recognition the face in the image is marked with a rectangular box, and when the original image contains a face, the region inside the box marked in the original image is output as the detection image, which prevents the face-containing detection image from being disturbed by other background content when it is used as a video cover.
  • outputting the image region of the original image that contains the face as the detection image includes: when exactly one face is detected, selecting the image region containing that face as the detection image for output; and when multiple faces are detected, selecting the image region containing the face with the largest area as the detection image for output.
  • this highlights the face region in the original image, so that when the image is used as a video cover the face can stand out as the focus.
  • an image blur detection apparatus including:
  • An acquisition module configured to acquire an input detection image
  • a calculation module configured to acquire an edge value in the detected image to calculate a variance of the edge value
  • the detecting module is configured to detect the image clarity of the detected image according to the variance of the edge values.
  • a face recognition module is further included, configured to acquire an original image and perform face recognition on the original image to output the detection image.
  • Still another embodiment of the present invention provides a computer device including a memory, a processor, and a computer program stored on the memory and operable on the processor, the image blur detection method being implemented when the processor executes the program.
  • a computer system suitable for implementing the server provided by this embodiment includes a central processing unit (CPU), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage portion into a random access memory (RAM).
  • the RAM also stores various programs and data required for the operation of the computer system.
  • the CPU, the ROM, and the RAM are connected to one another via a bus, and an input/output (I/O) interface is also connected to the bus.
  • the following components are connected to the I/O interface: an input portion including a keyboard, a mouse, and the like; an output portion including a liquid crystal display (LCD), a speaker, and the like; a storage portion including a hard disk and the like; and a communication portion including a network interface card such as a LAN card or a modem.
  • the communication portion performs communication processing via a network such as the Internet.
  • the drive is also connected to the I/O interface as needed.
  • a removable medium such as a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory or the like is mounted on the drive as needed so that a computer program read therefrom is installed into the storage portion as needed.
  • the process described above in the flowcharts can be implemented as a computer software program.
  • the present embodiment includes a computer program product comprising a computer program tangibly embodied on a computer readable medium, the computer program comprising program code for executing the method illustrated in the flowchart.
  • the computer program can be downloaded and installed from the network via a communication portion, and/or installed from a removable medium.
  • each block in the flowchart or schematic diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function.
  • the functions noted in the blocks may also occur in a different order than that illustrated in the drawings. For example, two successively represented blocks may in fact be executed substantially in parallel, and they may sometimes be executed in the reverse order, depending upon the functionality involved.
  • each block of the schematic and/or flow diagrams, as well as combinations of blocks in the schematic and/or flowcharts, can be implemented in a dedicated hardware-based system that performs the specified functions or operations. Or it can be implemented by a combination of dedicated hardware and computer instructions.
  • the units described in this embodiment may be implemented by software or by hardware.
  • the described unit may also be disposed in the processor, for example, as a processor including an acquisition module, a calculation module, a detection module, and the like.
  • the names of these units do not in any way constitute a limitation on the unit itself.
  • the calculation module can also be described as a "sharpness value module".
  • the present application further provides a computer readable storage medium, which may be the computer readable storage medium included in the apparatus described in the foregoing embodiment, or may exist separately as a computer readable storage medium that is not assembled into a terminal.
  • the computer readable storage medium stores one or more programs that are used by one or more processors to perform the image blur detection method described in the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Quality & Reliability (AREA)
  • Image Analysis (AREA)
  • Studio Devices (AREA)

Abstract

An image blur detection method and apparatus, a computer device, and a readable storage medium. The method includes the following steps: acquiring an input detection image; acquiring edge values in the detection image to calculate a variance of the edge values; and detecting the image sharpness of the detection image according to the variance of the edge values. The above method achieves sharpness detection of an image.

Description

Image blur detection method and apparatus, computer device, and readable storage medium
Technical Field
The present invention relates to the field of image detection technology, and in particular to an image blur detection method and apparatus, a computer device, and a readable storage medium.
Background
In classroom recording videos of students, certain frames of the video are usually captured to serve as the cover of the video, and the cover should be representative of the video. In the existing technology, the usual approach is first to capture a number of frames from the student classroom video to obtain images, and then to detect whether an image contains a face: if there is no face, the image is skipped; if there is a face, the image is chosen as the video cover. However, the image is not checked for sharpness, which may result in a selected cover image of unclear quality.
Summary of the Invention
In view of the above technical problem, a first aspect of the present invention provides an image blur detection method, including the following steps:
acquiring an input detection image;
acquiring edge values in the detection image to calculate a variance of the edge values;
detecting the image sharpness of the detection image according to the variance of the edge values.
Preferably, acquiring the edge values in the detection image to calculate the variance of the edge values includes:
filtering the detection image to obtain an edge value for each pixel in the detection image;
calculating the mean of the edge values of all pixels in the detection image;
calculating the variance of the edge values in the detection image according to the mean.
Preferably, detecting the image sharpness of the detection image according to the variance of the edge values includes:
taking the variance of the edge values as a sharpness metric;
setting a sharpness threshold;
comparing the variance of the edge values with the threshold; wherein
when the variance of the edge values is greater than the threshold, the detection image is detected as sharp;
when the variance of the edge values is less than the threshold, the detection image is detected as blurred.
Preferably, a face recognition step is further included before acquiring the input detection image.
Preferably, the face recognition step includes:
acquiring an original image;
performing face recognition on the original image to output the detection image.
Preferably, performing face recognition on the original image to output the detection image includes:
when no face is detected in the original image, outputting the original image as the detection image;
when a face is detected in the original image, outputting the image region of the original image that contains the face as the detection image.
Preferably, when a face is detected in the original image, outputting the image region of the original image that contains the face as the detection image includes:
when exactly one face is detected in the original image, selecting the image region containing that face as the detection image for output;
when multiple faces are detected in the original image, selecting the image portion containing the face with the largest area as the detection image for output.
A second aspect of the present invention provides an image blur detection apparatus, including:
an acquisition module, configured to acquire an input detection image;
a calculation module, configured to acquire edge values in the detection image to calculate a variance of the edge values;
a detection module, configured to detect the image sharpness of the detection image according to the variance of the edge values.
Preferably, a face recognition module is further included, configured to acquire an original image and perform face recognition on the original image to output the detection image.
A third aspect of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the above method when executing the program.
A fourth aspect of the present invention provides a computer readable storage medium storing instructions that, when the computer readable storage medium is run on a computer, cause the computer to perform the above method.
The beneficial effects of the present invention are as follows:
The technical solution of the present invention provides, in the process of selecting a video cover image, a function for detecting whether the detection image is sharp. By acquiring the edge values of the detection image and calculating the variance of the edge values as a sharpness metric, the image sharpness can be detected effectively while reducing the amount of computation and improving detection efficiency. In the process of performing face recognition on the original image, the image portion containing the largest face area is selected as the detection image for output, which highlights the face region of the original image and prevents the face-containing detection image from being disturbed by other background content.
Brief Description of the Drawings
The specific embodiments of the present invention are described in further detail below with reference to the drawings.
Fig. 1 is a block diagram of the steps of an image blur detection method provided by an embodiment of the present invention;
Fig. 2 is a histogram of the edge values of a sharp image obtained in an experiment;
Fig. 3 is an enlarged view of Fig. 2;
Fig. 4 is a histogram of the edge values of a blurred image obtained in the experiment;
Fig. 5 is an enlarged view of Fig. 4;
Fig. 6 is a structural block diagram of an image blur detection apparatus provided by an embodiment of the present invention;
Fig. 7 is a schematic structural diagram of a computer device provided by an embodiment of the present invention.
Detailed Description
To explain the present invention more clearly, the present invention is further described below with reference to the preferred embodiments and the drawings. Similar components in the drawings are denoted by the same reference numerals. Those skilled in the art should understand that the content specifically described below is illustrative rather than restrictive and should not limit the protection scope of the present invention.
As shown in Fig. 1, an embodiment of the present invention provides an image blur detection method, including the following steps:
acquiring an input detection image;
acquiring edge values in the detection image to calculate a variance of the edge values;
detecting the image sharpness of the detection image according to the variance of the edge values.
In the image blur detection method provided by the present invention, the edge values of the detection image are acquired and their variance is calculated as the metric, so that the image sharpness of the detection image is detected according to the variance of the edge values. In this way the sharpness of the detection image can be detected quickly, making it convenient for the user to select a high-sharpness detection image as the video cover, while reducing the amount of computation and improving detection efficiency.
The edge values of the image are calculated in order to extract the high-frequency information of the image, i.e., the detail information. This kind of extraction avoids the influence of differences in overall brightness between images: a change in the overall brightness of an image does not affect its edge values. For example, if two adjacent pixel values in a low-brightness image are 10 and 20, the edge value is 20 - 10 = 10; if two adjacent pixel values in a high-brightness image are 210 and 220, its edge value is 220 - 210 = 10. This ensures that images at different brightness levels can be compared reasonably.
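As an illustration of this brightness invariance, the following minimal NumPy sketch uses a simple adjacent-pixel difference as the edge operator (an assumption for illustration only; the patent's actual high-pass kernel is given as a formula image). Adding a constant brightness offset leaves the edge values unchanged.

```python
import numpy as np

def edge_values_1d(row):
    """Edge value of each pixel as the difference to its right neighbour."""
    return np.diff(row.astype(np.int32))

low = np.array([10, 20], dtype=np.uint8)   # low-brightness pixel pair from the text
high = low + 200                           # same content, overall brightness raised by 200

print(edge_values_1d(low))   # [10]
print(edge_values_1d(high))  # [10]  -> identical edge value despite the brightness shift
```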
The variance of the image's edge values is calculated in order to obtain the statistical distribution of the edge values. If the edge values are very rich — ranging from very small to very large, with a very wide distribution — the variance of the edge values is large. If the edge values are not rich — the edge values are all very small (a blurred image or a uniform image has almost no edge values) rather than spanning from small to large, and their distribution range is very narrow — the variance of the edge values is small.
The benefit of computing the variance is that it avoids the influence of the image size or the size of the image region. What we care about is the variance of the edges, that is, the richness of the edges and the distribution range of the edge values, which is independent of the size of the image or of the region. This ensures that images of different sizes, or regions of different sizes, can be compared reasonably.
Furthermore, the edge values and the edge-value variance we compute are independent of the content of the image, which ensures that detection images with different content can be compared reasonably.
Therefore, for images with different overall brightness, different sizes, and different content, the variance of the edge values can be used as an indicator of image sharpness (blur).
As shown in Figs. 2, 3, 4 and 5, Fig. 2 is a histogram of the edge values of a sharp image obtained in an experiment, and Fig. 3 is an enlarged view of Fig. 2; Fig. 4 is a histogram of the edge values of a blurred image obtained in the experiment, and Fig. 5 is an enlarged view of Fig. 4. In the histograms the y-axis is the number of pixels in the image and the x-axis is the magnitude of the edge value, and the variance of the image's edge values corresponds to the spread width of the histogram. From Figs. 2 and 3 it can be seen that the edge values of the sharp image spread over a wide range: some edge values are 0 while others are large or small (edge values can be positive or negative), most of the edges in the image are rich, the mean of the sharp image's edge values is -0.0121, and the variance is 445.9466.
From Figs. 4 and 5 it can be seen that the edge values of the blurred image are concentrated in a very narrow region, a very large number of the edge values are 0 — most of the edge values in the image are 0 — the mean of the blurred image's edge values is -0.0025, and the variance is 32.8890.
Combining Fig. 2 with Fig. 4, or Fig. 3 with Fig. 5, it can be seen that a sharp image has rich edges — strong edges, relatively weak edges, and even flat regions with no edges — and this richness manifests as a relatively large variance and a relatively wide data distribution. A blurred image has almost no edges; it consists of flat regions and its edge values are almost all 0, which manifests as a small edge variance with the data concentrated in a very narrow range.
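The contrast between the two histograms can be reproduced with a short experiment. The sketch below is illustrative only: it stands in a generic 3x3 Laplacian high-pass kernel for the patent's parameterised filter (whose coefficients are shown only as a formula image) and uses a Gaussian blur to create the blurred counterpart, so the specific numbers will differ from the -0.0121/445.9466 and -0.0025/32.8890 reported above.

```python
import numpy as np
from scipy.ndimage import convolve, gaussian_filter

rng = np.random.default_rng(0)
sharp = rng.integers(0, 256, size=(256, 256)).astype(np.float64)  # detail-rich image
blurred = gaussian_filter(sharp, sigma=5)                          # heavily smoothed copy

# Generic high-pass (Laplacian) kernel used as a stand-in for the patent's filter.
hp = np.array([[ 0, -1,  0],
               [-1,  4, -1],
               [ 0, -1,  0]], dtype=np.float64)

for name, img in [("sharp", sharp), ("blurred", blurred)]:
    edges = convolve(img, hp, mode="reflect")   # edge value A_p of every pixel
    print(f"{name:8s} mean={edges.mean():8.4f} variance={edges.var():12.4f}")
    # The sharp image's edge histogram spreads widely; the blurred one's collapses near 0.
    hist, _ = np.histogram(edges, bins=9)
    print("  histogram:", hist)
```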
In some optional implementations of this embodiment, the step of acquiring the edge values in the detection image to calculate the variance of the edge values includes:
S1. Filter the detection image to obtain the edge value of each pixel in the detection image.
The filter adopted in the present invention is a high-pass filter, whose function is to extract the high-frequency components in the detection image, i.e., the edge values we need. Its formula is as follows:
[Filter formula, original formula image PCTCN2018116538-appb-000001]
where a ∈ [0, 1]. Suppose the edge value of the P-th pixel in the detection image is to be obtained; substituting the coordinates of the P-th pixel into the above filter formula gives:
[Per-pixel formula, original formula image PCTCN2018116538-appb-000002]
where P[i][j] is the pixel value of the pixel in the i-th row and j-th column of the detection image, and A_p is the edge value of the P-th pixel in the detection image. In this way the edge value of every pixel in the detection image can be obtained in turn.
In a preferred implementation of this embodiment, the edge value in the x direction and the edge value in the y direction of the detection image may also be calculated separately, and the edge value of a pixel in the detection image is obtained by adding the x-direction edge value and the y-direction edge value; this makes it possible to extract edges in a specific direction when needed. The filter formulas are as follows:
[Directional filter formulas, original formula image PCTCN2018116538-appb-000003]
where dx is the filter formula for the x direction and dy is the filter formula for the y direction.
It should be noted that in this embodiment dx and dy are also high-pass filters: dx computes only the x-direction edge values of the detection image, and dy computes only the y-direction edge values.
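A minimal sketch of this directional variant is given below, under the assumption that dx and dy are simple first-difference (neighbouring-pixel difference) kernels in the horizontal and vertical directions; the patent shows the actual dx/dy formulas only as an image, so the kernels here are illustrative.

```python
import numpy as np
from scipy.ndimage import convolve

# Assumed first-difference kernels standing in for the patent's dx / dy formulas.
dx = np.array([[-1, 1]], dtype=np.float64)    # horizontal (x-direction) edges
dy = np.array([[-1], [1]], dtype=np.float64)  # vertical (y-direction) edges

def directional_edge_values(image):
    """Edge value of each pixel as the sum of its x- and y-direction edge values."""
    img = image.astype(np.float64)
    ex = convolve(img, dx, mode="reflect")
    ey = convolve(img, dy, mode="reflect")
    return ex + ey

img = np.arange(25, dtype=np.float64).reshape(5, 5)
print(directional_edge_values(img))
```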
S2. Calculate the mean of the edge values of all pixels in the detection image.
After the edge value of each pixel in the detection image has been obtained in step S1, the mean of the edge values of all pixels in the detection image is calculated by the following formula:
μ = (1/N) Σ_p A_p    (original formula image: PCTCN2018116538-appb-000004)
where N is the number of pixels and μ is the mean of the edge values of all pixels.
S3. Calculate the variance of the edge values in the detection image according to the mean.
After the mean of the edge values of all pixels in the detection image has been obtained in step S2, the variance of the edge values in the detection image is calculated by the following formula:
V = (1/N) Σ_p (A_p − μ)²    (original formula image: PCTCN2018116538-appb-000005)
where V is the variance.
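Steps S2 and S3 map directly onto the two formulas above. The sketch below computes μ and V exactly as defined (mean of all per-pixel edge values, then the mean squared deviation); the `edge_values` argument is assumed to be the per-pixel edge map produced by the S1 filtering step.

```python
import numpy as np

def edge_mean_and_variance(edge_values):
    """S2/S3: mean mu and variance V of the per-pixel edge values A_p."""
    a = np.asarray(edge_values, dtype=np.float64).ravel()
    n = a.size                      # N: number of pixels
    mu = a.sum() / n                # mu = (1/N) * sum_p A_p
    v = np.sum((a - mu) ** 2) / n   # V  = (1/N) * sum_p (A_p - mu)^2
    return mu, v

# Equivalent shortcuts: a.mean() and a.var() give the same mu and V.
```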
In some optional implementations of this embodiment, detecting the image sharpness of the detection image according to the variance of the edge values includes:
taking the variance of the edge values as the sharpness metric;
setting a sharpness threshold;
comparing the variance of the edge values with the threshold; wherein
when the variance of the edge values is greater than the threshold, the detection image is detected as sharp;
when the variance of the edge values is less than the threshold, the detection image is detected as blurred.
After the variance of the edge values of the detection image has been obtained through the above steps, the variance can be used as the metric for judging sharpness. The user can preset a sharpness threshold according to his or her own requirements and compare the calculated variance with the sharpness threshold: when the variance of the edge values is greater than the threshold, the detection image is detected as sharp, and when the variance of the edge values is less than the threshold, the detection image is detected as blurred.
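The decision step then reduces to a single comparison. A minimal sketch, assuming the threshold is a user-chosen constant (the patent does not fix a value):

```python
def is_sharp(edge_variance: float, threshold: float) -> bool:
    """Detection image is judged sharp when its edge-value variance exceeds the threshold."""
    return edge_variance > threshold

# Example: with the experimental values quoted above, a threshold of 100 would classify
# the sharp image (variance 445.9466) as sharp and the blurred one (variance 32.8890) as blurred.
```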
In some optional implementations of this embodiment, the image blur detection method further includes a face recognition step, and the face recognition step includes:
acquiring an original image;
performing face recognition on the original image to output the detection image.
In the specific implementation of this embodiment, face recognition mainly applies common face detection technology, generally a deep-learning-based method or a method based on HOG (histogram of oriented gradients) features or Haar features, to find the face region, which is represented by a rectangular box.
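As one concrete option among the detector families named above, the sketch below uses OpenCV's bundled Haar-cascade frontal-face detector; the patent does not prescribe a specific library, so the cv2 calls and cascade file here are illustrative assumptions. Each detected face is returned as a rectangle (x, y, w, h).

```python
import cv2

def detect_faces(original_bgr):
    """Return a list of (x, y, w, h) rectangles, one per detected face."""
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(original_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return [tuple(map(int, f)) for f in faces]
```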
In the specific implementation of this embodiment, performing face recognition on the original image to output the detection image includes:
when no face is detected in the original image, outputting the original image as the detection image;
when a face is detected in the original image, outputting the image region of the original image that contains the face as the detection image.
The original image is monitored by the face recognition technology. When no face is detected in the original image, the original image is output as the detection image; when a face is detected in the original image, the image region of the original image containing the face is output as the detection image. That is, in face recognition the face in the image is marked with a rectangular box, and when the original image contains a face, the rectangular box marked in the original image is selected and output as the detection image. This prevents the face-containing detection image from being disturbed by other background content in the detection image when it is used as a video cover.
In a preferred implementation of this embodiment, when a face is detected in the original image, outputting the image region of the original image that contains the face as the detection image includes:
when exactly one face is detected in the original image, selecting the image region containing that face as the detection image for output;
when multiple faces are detected in the original image, selecting the image region containing the face with the largest area as the detection image for output.
Through face recognition, one rectangular box is obtained for each face, and the area of a face = the width of the rectangular box × the height of the rectangular box. Therefore, when the original image contains only one face, the image region containing that face is output as the detection image; when the original image contains multiple faces, the areas of the rectangular boxes of the faces are compared and the image region containing the face with the largest area is selected as the detection image for output. This highlights the face region in the original image, so that when the image is used as the video cover the face can be highlighted as the focus.
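The selection rule can be sketched as follows, reusing the hypothetical detect_faces helper above: if no face is found the original image is returned unchanged, otherwise the region of the largest face (area = w × h) is cropped and becomes the detection image.

```python
def select_detection_image(original_bgr):
    """Output the detection image according to the face-recognition result."""
    faces = detect_faces(original_bgr)
    if len(faces) == 0:
        return original_bgr                           # no face: use the original image
    x, y, w, h = max(faces, key=lambda r: r[2] * r[3])  # largest face area w * h
    return original_bgr[y:y + h, x:x + w]             # crop the face region
```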
As shown in Fig. 6, another embodiment of the present invention provides an image blur detection apparatus, including:
an acquisition module, configured to acquire an input detection image;
a calculation module, configured to acquire edge values in the detection image to calculate a variance of the edge values;
a detection module, configured to detect the image sharpness of the detection image according to the variance of the edge values.
In an optional implementation of this embodiment, a face recognition module is further included, configured to acquire an original image and perform face recognition on the original image to output the detection image.
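Putting the modules together, a minimal end-to-end sketch of the apparatus could look like the class below; the module names follow the description, while the filter kernel, the grayscale conversion, the threshold value, and the select_detection_image helper are assumptions carried over from the earlier sketches.

```python
import numpy as np
from scipy.ndimage import convolve

class ImageBlurDetector:
    """Acquisition -> calculation -> detection, in the spirit of the apparatus of Fig. 6."""

    def __init__(self, threshold=100.0):
        self.threshold = threshold
        # Stand-in high-pass kernel (the patent's kernel is given only as a formula image).
        self.kernel = np.array([[0, -1, 0], [-1, 4, -1], [0, -1, 0]], dtype=np.float64)

    def acquire(self, original_bgr):
        # Face recognition module + acquisition module: crop to the largest face, if any.
        return select_detection_image(original_bgr)

    def calculate(self, detection_image):
        # Calculation module: per-pixel edge values and their variance.
        gray = detection_image.mean(axis=2) if detection_image.ndim == 3 else detection_image
        edges = convolve(gray.astype(np.float64), self.kernel, mode="reflect")
        return edges.var()

    def detect(self, original_bgr):
        # Detection module: sharp if the edge-value variance exceeds the threshold.
        return self.calculate(self.acquire(original_bgr)) > self.threshold
```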
Yet another embodiment of the present invention provides a computer device, including a memory, a processor, and a computer program stored on the memory and executable on the processor, where the processor implements the above image blur detection method when executing the program. As shown in Fig. 7, a computer system suitable for implementing the server provided by this embodiment includes a central processing unit (CPU), which can perform various appropriate actions and processes according to a program stored in a read-only memory (ROM) or a program loaded from a storage portion into a random access memory (RAM). The RAM also stores various programs and data required for the operation of the computer system. The CPU, the ROM, and the RAM are connected to one another via a bus, and an input/output (I/O) interface is also connected to the bus.
The following components are connected to the I/O interface: an input portion including a keyboard, a mouse, and the like; an output portion including a liquid crystal display (LCD), a speaker, and the like; a storage portion including a hard disk and the like; and a communication portion including a network interface card such as a LAN card or a modem. The communication portion performs communication processing via a network such as the Internet. A drive is also connected to the I/O interface as needed. A removable medium, such as a magnetic disk, an optical disk, a magneto-optical disk, or a semiconductor memory, is mounted on the drive as needed so that a computer program read from it can be installed into the storage portion as needed.
In particular, according to this embodiment, the process described above with reference to the flowchart can be implemented as a computer software program. For example, this embodiment includes a computer program product that includes a computer program tangibly embodied on a computer readable medium, the computer program containing program code for performing the method shown in the flowchart. In such an embodiment, the computer program can be downloaded and installed from a network via the communication portion and/or installed from the removable medium.
The flowcharts and schematic diagrams in the drawings illustrate possible architectures, functions, and operations of the system, method, and computer program product of this embodiment. In this regard, each block in a flowchart or schematic diagram may represent a module, a program segment, or a portion of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two blocks shown in succession may in fact be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the schematic diagrams and/or flowcharts, and combinations of blocks in the schematic diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or operations, or by a combination of dedicated hardware and computer instructions.
The units described in this embodiment may be implemented in software or in hardware. The described units may also be provided in a processor; for example, a processor may be described as including an acquisition module, a calculation module, a detection module, and the like. The names of these units do not in any way constitute a limitation on the units themselves; for example, the calculation module may also be described as a "sharpness value module".
As another aspect, the present application further provides a computer readable storage medium, which may be the computer readable storage medium included in the apparatus of the above embodiment, or may exist separately as a computer readable storage medium that is not assembled into a terminal. The computer readable storage medium stores one or more programs, and the one or more programs are used by one or more processors to perform the image blur detection method described in the present invention.
Obviously, the above embodiments of the present invention are merely examples given to explain the present invention clearly and are not intended to limit the implementations of the present invention. For those of ordinary skill in the art, other changes or variations in different forms can be made on the basis of the above description; it is impossible to exhaustively list all implementations here, and any obvious change or variation derived from the technical solution of the present invention remains within the protection scope of the present invention.

Claims (11)

  1. An image blur detection method, comprising the following steps:
    acquiring an input detection image;
    acquiring edge values in the detection image to calculate a variance of the edge values;
    detecting the image sharpness of the detection image according to the variance of the edge values.
  2. The method according to claim 1, wherein acquiring the edge values in the detection image to calculate the variance of the edge values comprises:
    filtering the detection image to obtain an edge value for each pixel in the detection image;
    calculating the mean of the edge values of all pixels in the detection image;
    calculating the variance of the edge values in the detection image according to the mean.
  3. The method according to claim 1, wherein detecting the image sharpness of the detection image according to the variance of the edge values comprises:
    taking the variance of the edge values as a sharpness metric;
    setting a sharpness threshold;
    comparing the variance of the edge values with the threshold; wherein
    when the variance of the edge values is greater than the threshold, the detection image is detected as sharp;
    when the variance of the edge values is less than the threshold, the detection image is detected as blurred.
  4. The method according to claim 1, wherein a face recognition step is further included before acquiring the input detection image.
  5. The method according to claim 4, wherein the face recognition step comprises:
    acquiring an original image;
    performing face recognition on the original image to output the detection image.
  6. The method according to claim 5, wherein performing face recognition on the original image to output the detection image comprises:
    when no face is detected in the original image, outputting the original image as the detection image;
    when a face is detected in the original image, outputting the image region of the original image that contains the face as the detection image.
  7. The method according to claim 6, wherein, when a face is detected in the original image, outputting the image region of the original image that contains the face as the detection image comprises:
    when exactly one face is detected in the original image, selecting the image region containing that face as the detection image for output;
    when multiple faces are detected in the original image, selecting the image region containing the face with the largest area as the detection image for output.
  8. An image blur detection apparatus, comprising:
    an acquisition module, configured to acquire an input detection image;
    a calculation module, configured to acquire edge values in the detection image and calculate a variance of the edge values;
    a detection module, configured to detect the image sharpness of the detection image according to the variance of the edge values.
  9. The image blur detection apparatus according to claim 8, further comprising a face recognition module, configured to acquire an original image and perform face recognition on the original image to output the detection image.
  10. A computer device, comprising a memory, a processor, and a computer program stored on the memory and executable on the processor, wherein the processor implements the method according to any one of claims 1-7 when executing the program.
  11. A computer readable storage medium, wherein the computer readable storage medium stores instructions that, when the computer readable storage medium is run on a computer, cause the computer to perform the method according to any one of claims 1-7.
PCT/CN2018/116538 2018-04-26 2018-11-20 Image blur detection method and apparatus, computer device, and readable storage medium WO2019205603A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201810387192.4 2018-04-26
CN201810387192.4A CN108629766A (zh) 2018-04-26 2018-04-26 图像模糊度检测方法、装置、计算机设备及可读存储介质

Publications (1)

Publication Number Publication Date
WO2019205603A1 true WO2019205603A1 (zh) 2019-10-31

Family

ID=63694664

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2018/116538 WO2019205603A1 (zh) 2018-04-26 2018-11-20 图像模糊度检测方法、装置、计算机设备及可读存储介质

Country Status (2)

Country Link
CN (1) CN108629766A (zh)
WO (1) WO2019205603A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108629766A (zh) * 2018-04-26 2018-10-09 北京大米科技有限公司 图像模糊度检测方法、装置、计算机设备及可读存储介质
CN110379118A (zh) * 2019-07-26 2019-10-25 中车青岛四方车辆研究所有限公司 列车车下防火智能监控系统及方法
CN111507283B (zh) * 2020-04-21 2021-11-30 浙江蓝鸽科技有限公司 基于课堂场景的学生行为识别方法及系统

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101364257B (zh) * 2007-08-09 2011-09-21 上海银晨智能识别科技有限公司 能识别图像来源的人脸识别方法
CN103093419B (zh) * 2011-10-28 2016-03-02 浙江大华技术股份有限公司 一种检测图像清晰度的方法及装置
CN103455994A (zh) * 2012-05-28 2013-12-18 佳能株式会社 图像模糊度的确定方法和设备
CN103345728B (zh) * 2013-06-27 2016-01-27 宁波大学 一种显微图像的清晰度获取方法
CN104268888B (zh) * 2014-10-09 2017-11-03 厦门美图之家科技有限公司 一种图像模糊检测方法
CN104867128B (zh) * 2015-04-10 2017-10-31 浙江宇视科技有限公司 图像模糊检测方法和装置
CN106101697B (zh) * 2016-06-21 2017-12-05 深圳市辰卓科技有限公司 图像清晰度检测方法、装置及测试设备
CN106296665B (zh) * 2016-07-29 2019-05-14 北京小米移动软件有限公司 卡片图像模糊检测方法和装置
CN106485702B (zh) * 2016-09-30 2019-11-05 杭州电子科技大学 基于自然图像特征统计的图像模糊检测方法

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080046406A1 (en) * 2006-08-15 2008-02-21 Microsoft Corporation Audio and video thumbnails
CN104598921A (zh) * 2014-12-31 2015-05-06 乐视网信息技术(北京)股份有限公司 视频预览图的选取方法及选取装置
CN105787869A (zh) * 2016-02-18 2016-07-20 王萌 一种人脸识别截图方法
CN108629766A (zh) * 2018-04-26 2018-10-09 北京大米科技有限公司 图像模糊度检测方法、装置、计算机设备及可读存储介质

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
LIU, PU ET AL.: "Auto-focusing Method Based on Octahedral Gradient Variance for Tracking System", VIDEO ENGINEERING, vol. 37, no. 9, 30 September 2013 (2013-09-30), pages 183 - 191, XP055649479, ISSN: 1002-8692 *

Also Published As

Publication number Publication date
CN108629766A (zh) 2018-10-09

Similar Documents

Publication Publication Date Title
CN110163215B (zh) 图像处理方法、装置、计算机可读介质及电子设备
WO2019174130A1 (zh) 票据识别方法、服务器及计算机可读存储介质
Li et al. Finding the secret of image saliency in the frequency domain
US9014467B2 (en) Image processing method and image processing device
US20200349716A1 (en) Interactive image matting method, computer readable memory medium, and computer device
CN107507173A (zh) 一种全切片图像的无参考清晰度评估方法及系统
CN108510499B (zh) 一种基于模糊集和Otsu的图像阈值分割方法及装置
WO2019205603A1 (zh) 图像模糊度检测方法、装置、计算机设备及可读存储介质
CN111161222B (zh) 一种基于视觉显著性的印刷辊筒缺陷检测方法
WO2020253508A1 (zh) 异常细胞检测方法、装置及计算机可读存储介质
WO2022237397A1 (zh) 图像真伪检测方法、装置、计算机设备和存储介质
WO2020124873A1 (zh) 图像处理方法
CN111369523B (zh) 显微图像中细胞堆叠的检测方法、系统、设备及介质
CN110111347B (zh) 图像标志提取方法、装置及存储介质
CN111368717A (zh) 视线确定方法、装置、电子设备和计算机可读存储介质
US20180040115A1 (en) Methods and apparatuses for estimating an ambiguity of an image
CN109389569A (zh) 基于改进DehazeNet的监控视频实时去雾方法
Ma et al. Efficient saliency analysis based on wavelet transform and entropy theory
CN114926374B (zh) 一种基于ai的图像处理方法、装置、设备及可读存储介质
CN110473176B (zh) 图像处理方法及装置、眼底图像处理方法、电子设备
CN112149570A (zh) 多人活体检测方法、装置、电子设备及存储介质
US20210012509A1 (en) Image processing method and computer-readable recording medium having recorded thereon image processing program
US10657369B1 (en) Unsupervised removal of text from images using linear programming for optimal filter design
CN110874547B (zh) 从视频中识别对象的方法和设备
CN106611417B (zh) 将视觉元素分类为前景或背景的方法及装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 18916016

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15.02.2021)

122 Ep: pct application non-entry in european phase

Ref document number: 18916016

Country of ref document: EP

Kind code of ref document: A1