WO2017107395A1 - Image processing method and system - Google Patents

Image processing method and system

Info

Publication number
WO2017107395A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
matrix
pixel
value
points
Prior art date
Application number
PCT/CN2016/084864
Other languages
English (en)
French (fr)
Inventor
杨杰
高允沛
颜业钢
Original Assignee
深圳Tcl数字技术有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 深圳Tcl数字技术有限公司
Publication of WO2017107395A1 publication Critical patent/WO2017107395A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis

Definitions

  • the present invention relates to the field of image processing technologies, and in particular, to an image processing method and system.
  • the existing image acquisition system generally collects images through image acquisition devices such as driving recorders, license plate scanners, and surveillance cameras. After the images are collected by the image acquisition device, the collected images need to be processed before being displayed.
  • the existing processing of images is relatively simple: for example, the image is processed into a gray-value image, and the target area and the background area of the image are then determined from that gray-value image. Since a gray-value image distinguishes the target area from the background area only by color, when the color difference between the two areas is small, determining them from the gray values alone makes the image processing insufficiently accurate.
  • the main object of the present invention is to provide an image processing method and system, aiming to solve the technical problem that conventional image processing is not accurate enough.
  • an image processing method provided by the present invention includes the following steps: acquiring a pixel matrix of an image; converting the pixel matrix of the image into a gray matrix; calculating, according to the gray matrix, a similarity matrix composed of the similarities of any two pixel points in the image; calculating a Laplacian matrix of the image based on the similarity matrix; performing eigendecomposition on the Laplacian matrix to obtain a feature vector matrix of the image; calculating an entropy matrix of the image according to the feature vector matrix; calculating the average of the entropy values in the entropy matrix; and
  • binarizing the image according to the average to determine a target area and a background area of the image.
  • the present invention also provides an image processing method comprising the following steps: acquiring a pixel matrix of an image; calculating, based on the pixel matrix of the image, a similarity matrix composed of the similarities of any two pixel points in the image; obtaining a feature vector matrix of the image according to the similarity matrix; calculating an entropy matrix of the image according to the feature vector matrix; calculating the average of the entropy values in the entropy matrix; and
  • binarizing the image according to the average to determine a target area and a background area of the image.
  • the present invention further provides an image processing system, the image processing system comprising:
  • An acquisition module configured to acquire a pixel matrix of an image
  • a first calculating module configured to calculate a similarity matrix composed of similarities of any two pixel points in the image based on a pixel matrix of the image
  • a first processing module configured to process the similarity matrix to obtain a feature vector matrix of the image
  • a second calculating module configured to calculate an entropy matrix of the image according to the feature vector matrix
  • a third calculating module configured to calculate an average value of each entropy value in the entropy value matrix
  • a second processing module configured to perform binarization processing on the image according to the average value to determine a target area and a background area of the image.
  • the image processing method and system provided by the invention obtain an entropy matrix from the feature vector matrix of an image, calculate an average from the entropy values in that matrix, and finally compare the average with each pixel of the image to determine the target area and the background area. The target and background areas are thus determined from specific feature vectors of the image, such as its texture features and overall trend characteristics, rather than from the gray values alone, so that the processing remains accurate even when the color difference between the target area and the background area is small.
  • FIG. 1 is a schematic flow chart of a first embodiment of an image processing method according to the present invention.
  • FIG. 2 is a schematic flow chart of a second embodiment of an image processing method according to the present invention.
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of an image processing system according to the present invention.
  • FIG. 4 is a schematic diagram of functional modules of a second embodiment of an image processing system of the present invention.
  • the present invention provides an image processing method.
  • FIG. 1 is a schematic flowchart diagram of a first embodiment of an image processing method according to the present invention.
  • This embodiment provides an image processing method, where the image processing method includes:
  • Step S10: acquiring a pixel matrix of an image;
  • before step S10, the method includes a step of acquiring the image. Specifically: when the image is a road image captured while a vehicle is running, it may be acquired by a camera preset on the vehicle, which may be a foreground camera or a panoramic camera; when the image is a license plate image, it may be acquired by a driving recorder, a license plate scanner, or similar devices; when the image is an indoor or outdoor monitoring image, it may be acquired by a surveillance camera. Further, the acquired image may also be a stored image.
  • the acquired image is analyzed to obtain its pixel points, and a pixel matrix I of the image is generated from them; the pixel matrix I takes the form [I1 I2 I3 ... In].
  • Step S20: calculating, based on the pixel matrix of the image, a similarity matrix composed of the similarities of any two pixel points in the image;
  • specifically, implementations of step S20 include:
  • Method 1: after obtaining the pixel matrix of the image, first calculate a scale parameter σi of each pixel point in the pixel matrix; the scale parameter σi is calculated by the following formula:
  • where Id is the pixel value of the d-th point in the pixel matrix I, and m is a constant, usually set to m = 7;
  • the scale parameter σi of the i-th point in the pixel matrix I can thus be calculated. Since the pixel matrix I includes n values, n*n scale parameters can be calculated. After the scale parameter σi of each point is obtained, the similarity matrix A of the image, composed of the similarities of any two pixel points in the image, can be calculated from the pixel matrix I and the calculated scale parameters.
  • the formula for the similarity matrix is: Aij = exp(-||Ii - Ij||² / (σi σj)), i, j ∈ (1, n)
  • where Aij represents an arbitrary element of the similarity matrix A, σi and σj represent the scale parameters corresponding to arbitrary points Ii and Ij in the pixel matrix I, and ||Ii - Ij|| represents the Euclidean distance between points Ii and Ij.
  • the similarity matrix A can then be calculated. In it, the first row contains the similarity differences between the first point of the pixel matrix and every point of the pixel matrix (the first point, the second point, the third point, ..., the last point); the second row contains the similarity differences between the second point and every point; and so on, the last row contains the similarity differences between the last point and every point. It can be understood that in the similarity matrix, the diagonal entries A11, A22, A33, ..., Ann are zero.
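As an illustration, the similarity matrix Aij = exp(-||Ii - Ij||² / (σi σj)) described above can be sketched as follows. The patent's exact formula for σi is reproduced only as an image in the source, so this sketch assumes a common self-tuning choice, the distance from each point to its m-th nearest neighbour, with m = 7 matching the constant named in the description; the function name is ours.

```python
import numpy as np

def similarity_matrix(pixels, m=7):
    """A_ij = exp(-||I_i - I_j||^2 / (sigma_i * sigma_j)), diagonal set to zero.

    sigma_i is ASSUMED to be the distance from pixel i to its m-th nearest
    neighbour (the patent's own sigma formula is an image, not reproduced).
    """
    x = np.asarray(pixels, dtype=float).reshape(-1, 1)  # one value per pixel point
    d = np.abs(x - x.T)                                 # pairwise Euclidean distances
    k = min(m, len(x) - 1)
    sigma = np.sort(d, axis=1)[:, k]                    # assumed scale parameter
    sigma = np.where(sigma == 0, 1e-12, sigma)          # avoid division by zero
    a = np.exp(-d ** 2 / np.outer(sigma, sigma))
    np.fill_diagonal(a, 0.0)                            # A11, A22, ..., Ann are zero
    return a
```

A pair of similar pixels yields an entry close to 1 and a dissimilar pair an entry close to 0; the matrix is symmetric, as the row descriptions above imply.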
  • Method 2: step S20 includes the following steps:
  • Step a: converting the pixel matrix of the image into a gray matrix;
  • Step b: calculating, according to the gray matrix, a similarity matrix composed of the similarities of any two pixel points in the image.
  • after obtaining the pixel matrix of the image, a gray value is computed for each pixel in the image. Methods for doing so include the average method (averaging the three primary color values of each pixel: Gray = (R + G + B) / 3) and the weighted method (taking a weighted average of the three primary color values of each pixel: Gray = R*0.3 + G*0.59 + B*0.11), and so on.
  • each pixel point in the pixel matrix is converted according to such an algorithm to obtain its gray value, and the pixel matrix is finally converted into a one-dimensional gray matrix, expressed as a data set X = {x1, x2, ..., xn} ∈ Rd, where xi is the gray value of the i-th point, i ∈ (1, n), n is the number of data points, d is the data dimension, and R denotes the set of real numbers.
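The two conversions named above can be sketched as follows (a minimal illustration; the function name is ours):

```python
import numpy as np

def to_gray(rgb_pixels, method="weighted"):
    """Convert an (n, 3) array of RGB pixel values to a 1-D gray matrix.

    'average'  implements Gray = (R + G + B) / 3;
    'weighted' implements Gray = R*0.3 + G*0.59 + B*0.11.
    """
    rgb = np.asarray(rgb_pixels, dtype=float)
    if method == "average":
        return rgb.mean(axis=1)                  # average method
    return rgb @ np.array([0.3, 0.59, 0.11])     # weighted method
```

Since 0.3 + 0.59 + 0.11 = 1, a pure white pixel (255, 255, 255) maps to gray value 255 under either method.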
  • the scale parameter ⁇ i of each point in the data set is first calculated, and the scale parameter ⁇ i is calculated by the following formula:
  • xd is the gray value of the dth point in the data set X
  • the scale parameter ⁇ i of each point in the data set X can be calculated. If the data set includes n numbers, n*n scale parameters can be calculated, and after obtaining the scale parameter ⁇ i of each point According to the data set X and the calculated scale parameters, the similarity matrix A of the image can be calculated, and the formula for calculating the similarity matrix corresponding to the similarity of any two pixel points in the image is:
  • Aij = exp(-||xi - xj||² / (σi σj)), i, j ∈ (1, n)
  • where Aij represents an arbitrary element of the similarity matrix A, σi and σj represent the scale parameters corresponding to arbitrary points xi and xj in the data set, and ||xi - xj|| represents the Euclidean distance between points xi and xj.
  • the similarity matrix A can then be calculated. Likewise, the first row contains the similarity differences between the first point of the data set and every point of the data set (the first point, the second point, the third point, ..., the last point); the second row contains the similarity differences between the second point and every point; and so on, the last row contains the similarity differences between the last point and every point. As before, the diagonal entries of the similarity matrix are zero.
  • in this embodiment, each pixel point in the pixel matrix corresponds to an RGB value, and calculating the similarity matrix of the image directly from RGB values affects the accuracy of the result, whereas the gray matrix is the matrix of gray values corresponding to the pixel points. Therefore, the pixel matrix of the image is first converted into a gray matrix, and the similarity matrix is then calculated from the gray matrix rather than directly from the pixel matrix, which improves the accuracy of the computed similarity matrix.
  • Step S30: obtaining a feature vector matrix of the image according to the similarity matrix;
  • step S30 includes the following steps:
  • Step c: calculating a Laplacian matrix of the image based on the similarity matrix;
  • the Laplacian matrix of the image is calculated according to the similarity matrix; the Laplacian matrix L is calculated by the formula: L = D^(-1/2) A D^(1/2)
  • where D is the diagonal matrix, calculated by the following formula:
  • D here represents an arbitrary element on the diagonal. That is to say, after the similarity matrix is calculated, the diagonal matrix is first calculated, and then, from the similarity matrix and the diagonal matrix, the Laplacian matrix of the image can be calculated.
  • Step d: performing eigendecomposition on the Laplacian matrix to obtain the feature vector matrix of the image.
  • since the Laplacian matrix L is an N×N square matrix with N linearly independent eigenvectors Vi (i = 1, ..., n), L can be decomposed as L = V Λ V^(-1), where V is an N×N square matrix whose i-th column is the eigenvector Vi of L, and Λ is a diagonal matrix whose diagonal elements are the corresponding eigenvalues, that is, Λii = λi. Finally, the feature vector matrix [V1 V2 ... Vn] is obtained by calculation.
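Steps c and d can be sketched together. The source gives L = D^(-1/2) A D^(1/2), but its formula for D itself appears only as an image, so the usual degree matrix D_ii = Σ_j A_ij is assumed here; the function name is ours.

```python
import numpy as np

def feature_vector_matrix(a):
    """Compute L = D^(-1/2) A D^(1/2) and eigendecompose it as L = V Lambda V^(-1).

    D is ASSUMED to be the degree matrix D_ii = sum_j A_ij (the source's
    formula for D is an image and is not reproduced).
    """
    a = np.asarray(a, dtype=float)
    deg = a.sum(axis=1)                                   # assumed diagonal of D
    lap = np.diag(deg ** -0.5) @ a @ np.diag(deg ** 0.5)  # L = D^(-1/2) A D^(1/2)
    eigvals, v = np.linalg.eig(lap)                       # columns of v are V_1 ... V_n
    return eigvals, v
```

Because L as written is similar to the symmetric matrix A, its eigenvalues are real and V is invertible, so the decomposition L = V Λ V^(-1) holds numerically.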
  • Step S40: calculating an entropy matrix of the image according to the feature vector matrix;
  • after the feature vector matrix is obtained, the entropy matrix corresponding to it is calculated and denoted E; taking Vi as an example, let its corresponding entropy value be Ei; then:
  • E = [E1, E2, ..., Ei, ..., En], i ∈ (1, n).
  • Step S50: calculating the average of the entropy values in the entropy matrix;
  • the average value Emean can thereby be obtained.
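The source does not reproduce the formula for the entropy value Ei (it appears only as an image), so the following sketch assumes a Shannon entropy over the normalised magnitudes of each eigenvector column; the function and variable names are ours.

```python
import numpy as np

def entropy_values(v):
    """One entropy value E_i per eigenvector column V_i, plus their mean E_mean.

    ASSUMES Shannon entropy of each column's normalised magnitudes; the
    patent's own E_i formula is not reproduced in the source.
    """
    mag = np.abs(np.asarray(v, dtype=float)) + 1e-12   # avoid log(0)
    p = mag / mag.sum(axis=0)                          # each column as a distribution
    e = -(p * np.log(p)).sum(axis=0)                   # E = [E_1, ..., E_n]
    return e, e.mean()                                 # entropy values and E_mean
```

Under this assumption, a column with uniform magnitudes has the maximum entropy log(n), while a column concentrated on one component has entropy near zero.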
  • Step S60: performing binarization processing on the image according to the average value to determine a target area and a background area of the image.
  • specifically, the average value is used as a standard and compared with the pixel value of each pixel in the image. When the pixel value of a pixel is greater than the average, the pixel is determined to be a black pixel, and the area corresponding to the black pixels is the target area; similarly, when the pixel value of a pixel is smaller than the average, the pixel is determined to be a white pixel, and the area corresponding to the white pixels is the background area. Finally, the target area and the background area of the image are thereby determined.
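A minimal sketch of this comparison (names ours; the text does not say how pixels exactly equal to the average are treated, so they are grouped with the background here):

```python
import numpy as np

def binarize(pixels, average):
    """Pixels above the average become black (0, target area); the rest
    become white (255, background area), per the description."""
    pixels = np.asarray(pixels, dtype=float)
    target = pixels > average              # black pixel points -> target area
    return np.where(target, 0, 255), target
```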
  • the image processing method proposed by the present invention obtains an entropy matrix from the feature vector matrix of the image, calculates an average from the entropy values in the entropy matrix, and finally compares the average with each pixel point in the image to determine the target area and the background area. The target and background areas are determined from specific feature vectors of the image, such as its texture features and overall trend characteristics, and not only from the gray values of the image, so that the processing is more accurate when the color difference between the target area and the background area is small.
  • in a second embodiment, the image processing method further includes:
  • Step S70: performing enhanced interpolation processing on the target area, and sparse interpolation processing on the background area.
  • the target area and the background area of the image are respectively interpolated, that is, the target area of the image is subjected to enhanced interpolation processing, and the image is The background area is subjected to sparse interpolation processing to highlight the target area and the background area of the image.
  • the enhanced interpolation process increases the pixel value of every point of the target area by a preset value; when the increased pixel value of a point exceeds the upper-limit pixel value, the pixel value of that point is recorded as the upper-limit pixel value. The sparse interpolation process fixes the pixel values of all points of the background area to a preset value.
  • Table 1 shows an arbitrary nine-square grid of the target area in the image matrix.
  • the elements of the nine-square grid are each increased by a pixel value of 10, and any value that exceeds 255 is recorded as 255, as shown in Table 2:
  • Table 2 shows the pixel values of the nine-square grid of Table 1 after enhancement. It can be understood that the increase of the pixel value ranges from 10 to 15, and the specific increase depends on the situation.
  • Table 3 shows an arbitrary nine-square grid of the background area in the image matrix; the elements of the nine-square grid are all set to its center element, 35, as shown in Table 4:
  • Table 4 shows the pixel values of the nine-square grid of Table 3 after blurring. By analogy, all background pixels in the image are sparsified in this way. It can be understood that the center element 35 need not be taken as the standard; the elements of the nine-square grid could instead all be set to 34, for example, as long as they are all set to the same value.
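The two interpolation steps on a nine-square grid can be sketched as follows (the tables themselves are images in the source, so the sample values and names here are ours):

```python
import numpy as np

def enhance_target(block, increase=10, upper=255):
    """Enhanced interpolation: raise every pixel of a target-area block by a
    preset value (10-15 per the text), recording overflow as the upper limit."""
    return np.minimum(np.asarray(block) + increase, upper)

def sparsify_background(block):
    """Sparse interpolation: set every pixel of a background-area block to a
    single value; the text uses the grid's center element as the standard."""
    block = np.asarray(block)
    center = block[block.shape[0] // 2, block.shape[1] // 2]
    return np.full_like(block, center)
```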
  • the target area of the image is further subjected to enhanced interpolation processing, and the background area of the image is subjected to sparse interpolation processing to highlight the target area of the image. And the background area, so that the target area and the background area of the image are more distinct, which improves the accuracy of image processing.
  • the invention further provides an image processing system.
  • FIG. 3 is a schematic diagram of functional modules of a first embodiment of an image processing system according to the present invention.
  • the functional block diagram shown in FIG. 3 is merely an exemplary diagram of a preferred embodiment; those skilled in the art can easily add new functional modules around the functional modules of the image processing system shown in FIG. 3. The names of the functional modules are custom names used only to assist in understanding the program function blocks of the image processing system, not to limit the technical solution of the present invention; the core of the technical solution lies in the functions to be achieved by the modules, whatever their names.
  • This embodiment provides an image processing system, and the image processing system includes:
  • An obtaining module 10 configured to acquire a pixel matrix of an image
  • the acquiring module 10 is configured to acquire an image. Specifically: when the image is a road image captured while a vehicle is running, the acquiring module 10 may acquire it through a camera preset on the vehicle, which may be a foreground camera or a panoramic camera; when the image is a license plate image, the acquiring module 10 may acquire it through a driving recorder, a license plate scanner, or similar devices; when the image is an indoor or outdoor monitoring image, the acquiring module 10 may acquire it through a surveillance camera. Further, the acquiring module 10 may also acquire a stored image.
  • the acquiring module 10 analyzes the acquired image to acquire each pixel of the image, and then generates a pixel matrix I of the image according to each pixel of the image.
  • the representation of the pixel matrix I is: [I1 I2 I3 ... In].
  • a first calculating module 20 configured to calculate, according to a pixel matrix of the image, a similarity matrix composed of similarities of any two pixel points in the image;
  • implementations by which the first calculating module 20 calculates the similarity matrix composed of the similarities of any two pixel points in the image include:
  • Method 1: after obtaining the pixel matrix of the image, the first calculating module 20 first calculates a scale parameter σi of each pixel point in the pixel matrix; the scale parameter σi is calculated by the following formula:
  • where Id is the pixel value of the d-th point in the pixel matrix I, and m is a constant, usually set to m = 7;
  • the scale parameter σi of the i-th point in the pixel matrix I can thus be calculated; since the pixel matrix I includes n values, n*n scale parameters can be calculated.
  • after the scale parameter σi of each point is obtained, the first calculating module 20 can calculate the similarity matrix A of the image from the pixel matrix I and the calculated scale parameters; the formula the first calculating module 20 uses for the similarity of any two pixel points in the image is: Aij = exp(-||Ii - Ij||² / (σi σj)), i, j ∈ (1, n)
  • where Aij represents an arbitrary element of the similarity matrix A, σi and σj represent the scale parameters corresponding to arbitrary points Ii and Ij in the pixel matrix I, and ||Ii - Ij|| represents the Euclidean distance between points Ii and Ij.
  • the similarity matrix A can then be calculated. In it, the first row contains the similarity differences between the first point of the pixel matrix and every point of the pixel matrix (the first point, the second point, the third point, ..., the last point); the second row contains the similarity differences between the second point and every point; and so on, the last row contains the similarity differences between the last point and every point. It can be understood that in the similarity matrix, the diagonal entries A11, A22, A33, ..., Ann are zero.
  • Method 2: the first calculating module 20 includes:
  • a conversion unit configured to convert a pixel matrix of the image into a gray matrix
  • a first calculating unit configured to calculate, according to the gray matrix, a similarity matrix composed of similarities of any two pixel points in the image.
  • after obtaining the pixel matrix of the image, a gray value is computed for each pixel in the image, for example by the average method (Gray = (R + G + B) / 3) or by taking a weighted average of the three primary color values of each pixel (Gray = R*0.3 + G*0.59 + B*0.11), and so on.
  • the first calculating unit calculates the similarity matrix of the image from the gray matrix; the similarity matrix contains the pixel similarity difference between any two points. To calculate it from the gray matrix, the scale parameter σi of each point in the data set is first calculated by the following formula:
  • where xd is the gray value of the d-th point in the data set X, and m is a constant, usually set to m = 7;
  • the scale parameter σi of each point in the data set X can thus be calculated; since the data set includes n values, n*n scale parameters can be calculated. After obtaining the scale parameter σi of each point, the first calculating unit can calculate the similarity matrix A of the image from the data set X and the scale parameters; the formula the first calculating unit uses for the similarity of any two pixel points in the image is:
  • Aij = exp(-||xi - xj||² / (σi σj)), i, j ∈ (1, n)
  • where Aij represents an arbitrary element of the similarity matrix A, σi and σj represent the scale parameters corresponding to arbitrary points xi and xj in the data set, and ||xi - xj|| represents the Euclidean distance between points xi and xj.
  • the similarity matrix A can then be calculated. In it, the first row contains the similarity differences between the first point of the data set and every point of the data set (the first point, the second point, the third point, ..., the last point); the second row contains the similarity differences between the second point and every point; and so on, the last row contains the similarity differences between the last point and every point.
  • in this embodiment, each pixel point in the pixel matrix corresponds to an RGB value, and calculating the similarity matrix of the image directly from RGB values affects the accuracy of the result, whereas the gray matrix is the matrix of gray values corresponding to the pixel points. Therefore, the pixel matrix of the image is first converted into a gray matrix, and the similarity matrix is then calculated from the gray matrix rather than directly from the pixel matrix, which improves the accuracy of the computed similarity matrix.
  • the first processing module 30 is configured to process the similarity matrix to obtain a feature vector matrix of the image
  • specifically, the first processing module 30 includes:
  • a second calculating unit configured to calculate a Laplacian matrix of the image based on the similarity matrix
  • the second calculating unit calculates the Laplacian matrix of the image according to the similarity matrix; the Laplacian matrix L is calculated by the formula: L = D^(-1/2) A D^(1/2)
  • where D is the diagonal matrix, calculated by the following formula:
  • D here represents an arbitrary element on the diagonal. That is to say, after the similarity matrix is calculated, the second calculating unit first calculates the diagonal matrix, and then, from the similarity matrix and the diagonal matrix, the Laplacian matrix of the image can be calculated.
  • a feature decomposition unit configured to perform eigendecomposition on the Laplacian matrix to obtain the feature vector matrix of the image.
  • the feature vector matrix [V1 V2 ... Vn] can be obtained by calculation.
  • a second calculating module 40 configured to calculate an entropy matrix of the image according to the feature vector matrix
  • the second calculating module 40 calculates the entropy matrix corresponding to the feature vector matrix, denoted E; taking Vi as an example, let its corresponding entropy value be Ei; then:
  • E = [E1, E2, ..., Ei, ..., En], i ∈ (1, n).
  • a third calculating module 50 configured to calculate an average value of each entropy value in the entropy value matrix
  • the average value Emean can thereby be obtained.
  • the second processing module 60 is configured to perform binarization processing on the image according to the average value to determine a target area and a background area of the image.
  • specifically, the second processing module 60 compares the average value with the pixel value of each pixel in the image. When the pixel value of a pixel is greater than the average, it determines that the pixel is a black pixel, and the area corresponding to the black pixels is the target area; similarly, when the pixel value of a pixel is smaller than the average, it determines that the pixel is a white pixel, and the area corresponding to the white pixels is the background area. Finally, the target area and the background area of the image are thereby determined.
  • the image processing system proposed by the present invention obtains an entropy matrix from the feature vector matrix of the image, calculates an average from the entropy values in the entropy matrix, and finally compares the average with each pixel point in the image to determine the target area and the background area. The target and background areas are determined from specific feature vectors of the image, such as its texture features and overall trend characteristics, and not only from the gray values of the image, so that the processing is more accurate when the color difference between the target area and the background area is small.
  • the image processing system further includes:
  • the third processing module 70 is configured to perform enhanced interpolation processing on the target area, and perform sparse interpolation processing on the background area.
  • the third processing module 70 interpolates the target area and the background area of the image respectively, that is, it performs enhanced interpolation on the target area of the image and sparse interpolation on the background area of the image, to highlight the target area and the background area of the image.
  • the enhanced interpolation process increases the pixel value of every point of the target area by a preset value; when the increased pixel value of a point exceeds the upper-limit pixel value, the pixel value of that point is recorded as the upper-limit pixel value;
  • the sparse interpolation process fixes the pixel values of all points of the background area to a preset value.
  • Table 1 shows an arbitrary nine-square grid of the target area in the image matrix.
  • the elements of the nine-square grid are each increased by a pixel value of 10, and any value that exceeds 255 is recorded as 255, as shown in Table 2:
  • Table 2 shows the pixel values of the nine-square grid of Table 1 after enhancement. It can be understood that the increase of the pixel value ranges from 10 to 15, and the specific increase depends on the situation.
  • Table 3 shows an arbitrary nine-square grid of the background area in the image matrix; the elements of the nine-square grid are all set to its center element, 35, as shown in Table 4:
  • Table 4 shows the pixel values of the nine-square grid of Table 3 after blurring. By analogy, all background pixels in the image are sparsified in this way. It can be understood that the center element 35 need not be taken as the standard; the elements could instead all be set to 34, for example, as long as they are all set to the same value.
  • the target area of the image is further subjected to enhanced interpolation processing, and the background area of the image is subjected to sparse interpolation processing to highlight the target area of the image. And the background area, so that the target area and the background area of the image are more distinct, which improves the accuracy of image processing.

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

An image processing method: acquiring a pixel matrix of an image (S10); based on the pixel matrix of the image, calculating a similarity matrix composed of the similarities of any two pixel points in the image (S20); obtaining a feature vector matrix of the image according to the similarity matrix (S30); calculating an entropy matrix of the image according to the feature vector matrix (S40); calculating the average of the entropy values in the entropy matrix (S50); and binarizing the image according to the average to determine a target area and a background area of the image (S60), thereby improving the accuracy of image processing.

Description

Image processing method and system
Technical field
The present invention relates to the field of image processing technologies, and in particular to an image processing method and system.
Background
Existing image acquisition systems generally collect images through image acquisition devices such as driving recorders, license plate scanners, and surveillance cameras. After an image is collected by such a device, it needs to be processed before it can be displayed. Current image acquisition devices process images in relatively simple ways: for example, the image is processed into a gray-value image, and the target area and the background area of the image are then determined from the gray-value image. A gray-value image, however, determines the target area and the background area only by color, so when the color difference between the target area and the background area is small, determining them from the gray values makes the image processing insufficiently accurate.
Summary of the invention
The main object of the present invention is to provide an image processing method and system, aiming to solve the technical problem that conventional image processing is not accurate enough.
To achieve the above object, the present invention provides an image processing method comprising the following steps:
acquiring a pixel matrix of an image;
converting the pixel matrix of the image into a gray matrix;
calculating, according to the gray matrix, a similarity matrix composed of the similarities of any two pixel points in the image;
calculating a Laplacian matrix of the image based on the similarity matrix;
performing eigendecomposition on the Laplacian matrix to obtain a feature vector matrix of the image;
calculating an entropy matrix of the image according to the feature vector matrix;
calculating the average of the entropy values in the entropy matrix;
binarizing the image according to the average to determine a target area and a background area of the image.
In addition, to achieve the above object, the present invention also provides an image processing method comprising the following steps:
acquiring a pixel matrix of an image;
calculating, based on the pixel matrix of the image, a similarity matrix composed of the similarities of any two pixel points in the image;
obtaining a feature vector matrix of the image according to the similarity matrix;
calculating an entropy matrix of the image according to the feature vector matrix;
calculating the average of the entropy values in the entropy matrix;
binarizing the image according to the average to determine a target area and a background area of the image.
In addition, to achieve the above object, the present invention also provides an image processing system comprising:
an acquisition module configured to acquire a pixel matrix of an image;
a first calculating module configured to calculate, based on the pixel matrix of the image, a similarity matrix composed of the similarities of any two pixel points in the image;
a first processing module configured to process the similarity matrix to obtain a feature vector matrix of the image;
a second calculating module configured to calculate an entropy matrix of the image according to the feature vector matrix;
a third calculating module configured to calculate the average of the entropy values in the entropy matrix;
a second processing module configured to binarize the image according to the average to determine a target area and a background area of the image.
The image processing method and system proposed by the present invention obtain an entropy matrix from the feature vector matrix of an image, calculate an average from the entropy values in the entropy matrix, and finally compare the average with each pixel point of the image to determine the target area and the background area of the image. The target and background areas are thus determined from specific feature vectors of the image, such as its texture features and overall trend characteristics, and not merely from the gray values of the image, so that the processing of the image is more accurate when the color difference between the target area and the background area is small.
Brief description of the drawings
FIG. 1 is a schematic flowchart of a first embodiment of the image processing method of the present invention;
FIG. 2 is a schematic flowchart of a second embodiment of the image processing method of the present invention;
FIG. 3 is a schematic diagram of the functional modules of a first embodiment of the image processing system of the present invention;
FIG. 4 is a schematic diagram of the functional modules of a second embodiment of the image processing system of the present invention.
The realization of the object of the present invention, its functional features, and its advantages will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Detailed description of the embodiments
It should be understood that the specific embodiments described herein are only intended to explain the present invention and are not intended to limit it.
The present invention provides an image processing method.
Referring to FIG. 1, FIG. 1 is a schematic flowchart of a first embodiment of the image processing method of the present invention.
This embodiment proposes an image processing method, the image processing method comprising:
Step S10: acquiring a pixel matrix of an image;
In this embodiment, before step S10, a step of acquiring the image is included. Specifically: when the image is a road image captured while a vehicle is running, the image may be acquired through a camera preset on the vehicle, which may be a foreground camera or a panoramic camera; when the image is a license plate image, the image may be acquired through a driving recorder, a license plate scanner, or similar devices; when the image is an indoor or outdoor monitoring image, the image may be acquired through a surveillance camera. Further, the acquired image may also be a stored image.
In this embodiment, the acquired image is analyzed to obtain the pixel points of the image, and a pixel matrix I of the image is then generated from the pixel points; the pixel matrix I takes the form [I1 I2 I3 ... In].
Step S20: calculating, based on the pixel matrix of the image, a similarity matrix composed of the similarities of any two pixel points in the image;
Specifically, implementations of step S20 include:
1) Method 1: after the pixel matrix of the image is obtained, the scale parameter σi of each pixel point in the pixel matrix is first calculated; the scale parameter σi is calculated by the following formula:
Figure PCTCN2016084864-appb-000001
where Id is the pixel value of the d-th point in the pixel matrix I, and m is a constant, usually set to m = 7;
From the above formula, the scale parameter σi of the i-th point in the pixel matrix I can be calculated. Since the pixel matrix I includes n values, n*n scale parameters can be calculated. After the scale parameter σi of each point is obtained, the similarity matrix A of the image can be calculated from the pixel matrix I and the calculated scale parameters. The formula for the similarity matrix corresponding to the similarity of any two pixel points in the image is:
Aij = exp(-||Ii - Ij||² / (σi σj)), i, j ∈ (1, n)
where Aij represents an arbitrary element of the similarity matrix A, σi and σj represent the scale parameters corresponding to arbitrary points Ii and Ij in the pixel matrix I, and ||Ii - Ij|| represents the Euclidean distance between points Ii and Ij.
From the above formula, the similarity matrix A can be calculated as:
Figure PCTCN2016084864-appb-000002
In the similarity matrix A, the first row represents the similarity differences between the first point of the pixel matrix and all points of the pixel matrix (the first point, the second point, the third point, ..., the last point); the second row represents the similarity differences between the second point and all points; and so on, the last row represents the similarity differences between the last point and all points. It can be understood that in the similarity matrix, the diagonal entries A11, A22, A33, ..., Ann are zero.
2)方式二、所述步骤S20包括以下步骤:
步骤a、将所述图像的像素矩阵转化为灰度矩阵;
步骤b、根据所述灰度矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵。
在本实施例中，在得到所述图像的像素矩阵后，对所述图像中各个像素点求灰度值，所述对所述图像中各个像素点求灰度值的方法包括：平均值法（即将各个像素点的三原色值求平均值，得到各个像素点的灰度值，Gray=(R+G+B)/3）、加权平均法（即对各个像素点的三原色值进行加权平均，得到各个像素点的灰度值，Gray=R*0.3+G*0.59+B*0.11）等等。在得到各个点的像素值后，根据上述的算法对像素矩阵中的各个像素点进行计算，得到各个像素点对应的灰度值，最终实现将所述像素矩阵转化为一维的灰度矩阵，所述灰度矩阵用数据集X={x1,x2,...,xn}∈Rd表示，其中，所述xi表示数据集中第i个点的灰度值，i∈(1,n)，n为数据集中的数据的个数，d表示数据维数，R代表整个实数集。
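上述两种灰度化方法可以用如下Python示意代码表示（仅为便于理解的草稿实现，其中函数名to_gray为本文自拟，并假定像素矩阵以形状为(n, 3)的RGB数组形式给出）：

```python
import numpy as np

def to_gray(pixels):
    """将RGB像素矩阵转化为一维灰度矩阵（数据集X）的示意实现。
    pixels: 形状为(n, 3)的数组，每行为一个像素点的(R, G, B)值。"""
    pixels = np.asarray(pixels, dtype=float)
    # 平均值法：Gray = (R + G + B) / 3
    gray_mean = pixels.mean(axis=1)
    # 加权平均法：Gray = R*0.3 + G*0.59 + B*0.11
    gray_weighted = pixels @ np.array([0.3, 0.59, 0.11])
    return gray_mean, gray_weighted
```

两种方法均将像素矩阵压缩为一维灰度矩阵，后续步骤可任选其一的结果作为数据集X。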
在得到所述灰度矩阵X后,根据所述灰度矩阵计算所述图像的相似度矩阵,所述相似度矩阵中包含任意两点间的像素相似度差值,而根据灰度矩阵计算相似度矩阵的方式,先计算出数据集中各个点的尺度参数σi,所述尺度参数σi用以下公式计算:
σi=(1/m)∑(d=1..m)||xi-xd||
其中,xd是数据集X中第d个点的灰度值,m是一个常数,通常设置m=7;
根据上述计算公式即可计算出数据集X中的各个点的尺度参数σi，该数据集中包括n个数，则可计算出n个尺度参数，在得到各个点的尺度参数σi后，根据所述数据集X和计算出的各个尺度参数，即可计算出所述图像的相似度矩阵A，计算所述图像中任意两个像素点的相似度对应的相似度矩阵的公式为：
Aij=exp(-||xi-xj||^2/(σiσj))，i,j∈(1,n)且i≠j，并规定Aii=0
其中,Aij表示相似度矩阵A的任意元素,σi,σj分别表示数据集中任意点xi和xj对应的尺度参数,||xi-xj||表示点xi和xj的欧氏距离。
根据上述公式,即可计算出相似度矩阵A为:
A=
[A11 A12 ... A1n]
[A21 A22 ... A2n]
[ ⋮    ⋮        ⋮ ]
[An1 An2 ... Ann]
同理,在所述相似度矩阵A中,第一行表示数据集中的第一个点分别与 数据集中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值,第二行表示数据集中的第二个点与数据集中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值,依次类推,最后一行表示数据集中最后一个点分别与数据集中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值。可以理解的是,在所述相似度矩阵中,对角线上的点A11、A22、A33、…、Ann的值为零。
在本实施例中,由于像素矩阵中各个像素点对应的是RGB值,而根据所述RGB值计算所述图像的相似度矩阵,会影响计算结果的准确性,而灰度矩阵是各个像素点的灰度值对应的矩阵,因此,先将所述图像的像素矩阵转化为灰度矩阵,再根据所述灰度矩阵计算所述图像的相似度矩阵,而不是直接根据像素矩阵计算所述相似度矩阵,从而提高了计算相似度矩阵的准确性。
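方式二的计算流程可用如下Python示意代码说明（草稿实现，函数名为本文自拟；原文未明确σi所依据的m个点的选取方式，此处假定取各点到其m个最近邻的平均距离作为尺度参数）：

```python
import numpy as np

def similarity_matrix(x, m=7):
    """按一维灰度矩阵x计算相似度矩阵A的示意实现。
    尺度参数σi在此假定为点xi到其m个最近邻的平均距离。"""
    x = np.asarray(x, dtype=float)
    dist = np.abs(x[:, None] - x[None, :])            # 任意两点间的欧氏距离
    # 取每个点到其m个最近邻（不含自身）的平均距离作为σi
    sigma = np.sort(dist, axis=1)[:, 1:m + 1].mean(axis=1)
    sigma = np.where(sigma == 0, 1e-12, sigma)        # 避免除零
    A = np.exp(-dist ** 2 / (sigma[:, None] * sigma[None, :]))
    np.fill_diagonal(A, 0.0)                          # 对角线元素置零
    return A
```

按上式计算出的A为对称矩阵，且对角线上的元素为零，与上文对相似度矩阵的描述一致。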
步骤S30,根据所述相似度矩阵得到所述图像的特征向量矩阵;
具体地,所述步骤S30包括步骤:
步骤c、基于所述相似度矩阵计算所述图像的拉普拉斯矩阵;
计算得到所述相似度矩阵后,根据所述相似度矩阵计算所述图像的拉普拉斯矩阵,所述拉普拉斯矩阵L用以下公式计算:
L=D^(-1/2)AD^(-1/2)
其中,D为对角矩阵,对角矩阵用以下公式计算:
Dii=∑(j=1..n)Aij
其中，Dii表示对角矩阵D对角线上的任意元素，也就是说，计算出所述相似度矩阵后，先计算出对角矩阵，然后根据所述相似度矩阵以及所述对角矩阵，即可计算出图像的拉普拉斯矩阵，所述拉普拉斯矩阵的表现形式为：
L=
[L11 L12 ... L1n]
[L21 L22 ... L2n]
[ ⋮    ⋮        ⋮ ]
[Ln1 Ln2 ... Lnn]
步骤d,对所述拉普拉斯矩阵进行特征分解,得到所述图像的特征向量矩阵。
在得到拉普拉斯矩阵后，对所述拉普拉斯矩阵进行特征分解，设所述特征向量矩阵为V，而所述特征向量矩阵V中的任意一列特征向量为Vi，即V=[V1 V2 ... Vn]，该方法涉及矩阵论中的矩阵特征分解方法，由于拉普拉斯矩阵L是一个n×n的方阵，且有n个线性无关的特征向量Vi（i=1,...,n），这样，L可以被分解为L=VΛV^(-1)，其中V是n×n方阵，且第i列Vi为L的特征向量。Λ是对角矩阵，其对角线上的元素为对应的特征值，也即Λii=λi，最终，计算可得到特征向量矩阵[V1 V2 ... Vn]。
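步骤c与步骤d可用如下Python示意代码表示（草稿实现，函数名为本文自拟；由于L为实对称矩阵，此处用numpy.linalg.eigh做特征分解）：

```python
import numpy as np

def eigvec_matrix(A):
    """由相似度矩阵A计算拉普拉斯矩阵L = D^(-1/2) A D^(-1/2)，
    并对L做特征分解得到特征向量矩阵V的示意实现。"""
    d = A.sum(axis=1)                                  # Dii = Σj Aij
    d_inv_sqrt = 1.0 / np.sqrt(np.where(d == 0, 1.0, d))
    L = (A * d_inv_sqrt[:, None]) * d_inv_sqrt[None, :]
    # L为实对称矩阵，eigh返回特征值与按列排列的特征向量矩阵V
    eigvals, V = np.linalg.eigh(L)
    return L, eigvals, V
```

由eigh得到的V各列即为L的特征向量，满足L=VΛV^(-1)。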
步骤S40,根据所述特征向量矩阵计算所述图像的熵值矩阵;
在本实施例中,得到所述特征向量矩阵后,计算所述特征向量矩阵对应的熵值矩阵,记为E,以Vi为例,设其对应的熵值矩阵为Ei,则:
Ei=-∑(j=1..n)pij·ln(pij)，其中pij=|Vij|/∑(k=1..n)|Vik|，Vij为特征向量Vi的第j个分量
那么,熵值矩阵集E为:E=[E1,E2,...,Ei,...,En],i∈(1,n)。
步骤S50,计算所述熵值矩阵中的各个熵值的平均值;
根据得到的所述熵值矩阵,先确定所述熵值矩阵中的各个熵值,然后对所述熵值矩阵中的各个熵值求平均值,得到Emean,计算公式为:
Emean=(1/n)∑(i=1..n)Ei
根据上述公式,即可得到平均值Emean
步骤S60,根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域。
在本实施例中,得到所述平均值后,将所述平均值作为标准,与所述图像中各个像素点的像素值进行比对,在像素点的像素值大于所述平均值时,确定像素点为黑色像素点,黑色像素点对应的区域即为目标区域;同理,在像素点的像素值小于所述平均值时,确定像素点为白色像素点,白色像素点对应的区域即为背景区域,最终,确定所述图像的目标区域和背景区域。
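步骤S40至步骤S60可用如下Python示意代码串联说明（草稿实现，函数名为本文自拟；其中熵值Ei的计算按对每列特征向量归一化后求香农熵的方式假定，与原文的具体定义未必完全一致）：

```python
import numpy as np

def binarize_by_entropy(V, pixels):
    """按熵值矩阵的平均值对图像做二值化的示意实现。
    V: 特征向量矩阵（每列一个特征向量）；pixels: 各像素点的像素值。"""
    P = np.abs(V) / np.abs(V).sum(axis=0, keepdims=True)  # 每列归一化（假定做法）
    P = np.where(P == 0, 1e-12, P)                        # 避免log(0)
    E = -(P * np.log(P)).sum(axis=0)                      # 各列的熵值Ei
    e_mean = E.mean()                                     # 平均值Emean
    # 像素值大于平均值的点记为目标区域（黑），否则记为背景区域（白）
    target_mask = np.asarray(pixels, dtype=float) > e_mean
    return e_mean, target_mask
```

返回的target_mask中为True的点即目标区域的黑色像素点，为False的点即背景区域的白色像素点。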
本实施例提出的图像处理方法，根据图像的特征向量矩阵得到熵值矩阵，再根据熵值矩阵中的各个熵值计算出一个平均值，最后将所述平均值与所述图像中的各个像素点进行比对，以确定所述图像的目标区域和背景区域，实现了通过图像具体的特征向量，比如图像的纹理特征、图像的整体走势特征确定图像中的目标区域和背景区域，而不仅仅是根据图像的灰度值确定图像的目标区域和背景区域，使得在图像中目标区域和背景区域的颜色差别不大时，对图像的处理更加准确。
进一步地,为了提高图像处理的准确性,基于第一实施例提出本发明图像处理方法的第二实施例,在本实施例中,参照图2,所述步骤S60之后,所述图像处理方法还包括:
步骤S70,对所述目标区域进行强化插值处理,并对所述背景区域进行稀疏插值处理。
在本实施例中,在确定所述图像的目标区域和背景区域后,分别对图像的目标区域和背景区域做双向插值,即对所述图像的目标区域进行强化插值处理,并对所述图像的背景区域进行稀疏插值处理,以突出所述图像的目标区域和背景区域。
具体地，所述强化插值处理为将所述目标区域的所有点的像素值均增加一预设值，当某一点增加后的像素值超过上限像素值时，将该点的像素值记为所述上限像素值；所述稀疏插值处理为将所述背景区域的所有点的像素值固定至一预设值。
为更好理解本实施例,举例如下:
利用九宫格强化方法增强目标区域像素值,如表1所示:
75 5 69
59 8 234
252 98 241
表1
表1表示的是图像矩阵中目标区域的任一个九宫格,将该九宫格的元素都相应增加像素值10,增加后如果大于等于255的记为255,如表2所示:
85 15 79
69 18 244
255 108 251
表2
表2为表1中强化后的九宫格像素值。可以理解的是,所述像素值的增加范围为10-15,具体增加多少根据情况而定。
同理,利用九宫格稀疏化方法虚化背景像素值,如表3所示:
230 59 45
34 35 3
70 56 211
表3
表3表示的是图像矩阵中背景区域的任一个九宫格,将该九宫格的元素都设置成九宫格中心元素35,如表4所示:
35 35 35
35 35 35
35 35 35
表4
表4为表3中虚化后的九宫格像素值。以此类推,将图像中所有的背景像素点都做如此稀疏化处理。可以理解的是,不一定以中心元素35为设置标准,也可将九宫格的元素都设置成34,只要将九宫格的元素都设置成同一个值即可。
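表1至表4的九宫格处理可用如下Python示意代码复现（草稿实现，函数名为本文自拟）：

```python
import numpy as np

def enhance_block(block, delta=10, upper=255):
    """目标区域九宫格强化：所有像素值均增加delta，超过上限则记为上限值。"""
    b = np.asarray(block, dtype=int) + delta
    return np.minimum(b, upper)

def sparsify_block(block):
    """背景区域九宫格稀疏化：所有像素值固定为九宫格中心元素的值。"""
    b = np.asarray(block, dtype=int)
    return np.full_like(b, b[1, 1])
```

以表1为输入调用enhance_block，即得到表2的结果；以表3为输入调用sparsify_block，即得到表4的结果。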
在本实施例中,得到图像的目标区域和背景区域后,进一步对所述图像的目标区域进行强化插值处理,并对所述图像的背景区域进行稀疏插值处理,以突出所述图像的目标区域和背景区域,从而使得图像的目标区域和背景区域区分更加明显,提高了图像处理的准确性。
本发明进一步提供一种图像处理系统。
参照图3,图3为本发明图像处理系统第一实施例的功能模块示意图。
需要强调的是,对本领域的技术人员来说,图3所示功能模块图仅仅是一个较佳实施例的示例图,本领域的技术人员围绕图3所示的图像处理系统的功能模块,可轻易进行新的功能模块的补充;各功能模块的名称是自定义名称,仅用于辅助理解该图像处理系统的各个程序功能块,不用于限定本发明的技术方案,本发明技术方案的核心是,各自定义名称的功能模块所要达成的功能。
本实施例提出一种图像处理系统,所述图像处理系统包括:
获取模块10,用于获取图像的像素矩阵;
在本实施例中，所述获取模块10用于获取图像，具体为：在图像是车辆行驶过程中的道路图像时，所述获取模块10可通过车辆预设的摄像头获取图像，所述摄像头可为前景摄像机或全景摄像机；在图像是车牌图像时，所述获取模块10可通过行车记录仪、车牌扫描仪等设备获取图像；在图像为室内或室外的监控图像时，所述获取模块10可通过监控摄像头获取图像；进一步地，所述获取模块10还可以直接获取已存储的图像。
在本实施例中,所述获取模块10对获取的所述图像进行分析,以获取所述图像的各个像素点,然后根据所述图像的各个像素点生成所述图像的像素矩阵I,所述像素矩阵I的表现形式为:[I1 I2 I3 ... In]。
第一计算模块20,用于基于所述图像的像素矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵;
具体地,所述第一计算模块20计算所述图像中任意两个像素点的相似度组成的相似度矩阵的实施方式包括:
1)方式一、在得到所述图像的像素矩阵后,所述第一计算模块20先计算所述像素矩阵中各个像素点的尺度参数σi,所述尺度参数σi用以下公式计算:
σi=(1/m)∑(d=1..m)||Ii-Id||
其中,Id是所述像素矩阵I中第d个点的像素值,m是一个常数,通常设置m=7;
根据上述计算公式即可计算出所述像素矩阵I中的第i个点的尺度参数σi，由于所述像素矩阵I中包括n个数，则可计算出n个尺度参数，在得到各个点的尺度参数σi后，根据所述像素矩阵I和计算出的各个尺度参数，所述第一计算模块20即可计算出所述图像的相似度矩阵A，所述第一计算模块20计算所述图像中任意两个像素点的相似度对应的相似度矩阵的公式为：
Aij=exp(-||Ii-Ij||^2/(σiσj))，i,j∈(1,n)且i≠j，并规定Aii=0
其中,Aij表示相似度矩阵A的任意元素,σi,σj分别表示所述像素矩阵 I中任意点Ii和Ij对应的尺度参数,||Ii-Ij||表示点Ii和Ij的欧氏距离。
根据上述公式,即可计算出相似度矩阵A为:
A=
[A11 A12 ... A1n]
[A21 A22 ... A2n]
[ ⋮    ⋮        ⋮ ]
[An1 An2 ... Ann]
在所述相似度矩阵A中,第一行表示像素矩阵中的第一个点分别与像素矩阵中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值,第二行表示像素矩阵中的第二个点与像素矩阵中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值,依次类推,最后一行表示像素矩阵中最后一个点分别与像素矩阵中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值。可以理解的是,在所述相似度矩阵中,对角线上的点A11、A22、A33、…、Ann的值为零。
2)方式二、所述第一计算模块20包括:
转化单元,用于将所述图像的像素矩阵转化为灰度矩阵;
第一计算单元,用于根据所述灰度矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵。
在本实施例中，在得到所述图像的像素矩阵后，对所述图像中各个像素点求灰度值，所述对所述图像中各个像素点求灰度值的方法包括：平均值法（即将各个像素点的三原色值求平均值，得到各个像素点的灰度值，Gray=(R+G+B)/3）、加权平均法（即对各个像素点的三原色值进行加权平均，得到各个像素点的灰度值，Gray=R*0.3+G*0.59+B*0.11）等等。在得到各个点的像素值后，所述转化单元根据上述的算法对像素矩阵中的各个像素点进行计算，得到各个像素点对应的灰度值，最终实现将所述像素矩阵转化为一维的灰度矩阵，所述灰度矩阵用数据集X={x1,x2,...,xn}∈Rd表示，其中，所述xi表示数据集中第i个点的灰度值，i∈(1,n)，n为数据集中的数据的个数，d表示数据维数，R代表整个实数集。
在得到所述灰度矩阵X后,所述第一计算单元根据所述灰度矩阵计算所述图像的相似度矩阵,所述相似度矩阵中包含任意两点间的像素相似度差值,而根据灰度矩阵计算相似度矩阵的方式,先计算出数据集中各个点的尺度参数σi,所述尺度参数σi用以下公式计算:
σi=(1/m)∑(d=1..m)||xi-xd||
其中,xd是数据集X中第d个点的灰度值,m是一个常数,通常设置m=7;
根据上述计算公式即可计算出数据集X中的各个点的尺度参数σi，该数据集中包括n个数，则可计算出n个尺度参数，在得到各个点的尺度参数σi后，根据所述数据集X和计算出的各个尺度参数，所述第一计算单元即可计算出所述图像的相似度矩阵A，所述第一计算单元计算所述图像中任意两个像素点的相似度对应的相似度矩阵的公式为：
Aij=exp(-||xi-xj||^2/(σiσj))，i,j∈(1,n)且i≠j，并规定Aii=0
其中,Aij表示相似度矩阵A的任意元素,σi,σj分别表示数据集中任意点xi和xj对应的尺度参数,||xi-xj||表示点xi和xj的欧氏距离。
根据上述公式,即可计算出相似度矩阵A为:
A=
[A11 A12 ... A1n]
[A21 A22 ... A2n]
[ ⋮    ⋮        ⋮ ]
[An1 An2 ... Ann]
同理,在所述相似度矩阵A中,第一行表示数据集中的第一个点分别与数据集中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值,第二行表示数据集中的第二个点与数据集中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值,依次类推,最后一行表示数据集中最后一个点分别与数据集中的所有点(第一个点、第二个点、第三个点……最后一个点)的相似度差值。可以理解的是,在所述相似度矩阵中,对角线上的点A11、A22、A33、…、Ann的值为零。
在本实施例中,由于像素矩阵中各个像素点对应的是RGB值,而根据所述RGB值计算所述图像的相似度矩阵,会影响计算结果的准确性,而灰度矩阵是各个像素点的灰度值对应的矩阵,因此,先将所述图像的像素矩阵转化为灰度矩阵,再根据所述灰度矩阵计算所述图像的相似度矩阵,而不是直接根据像素矩阵计算所述相似度矩阵,从而提高了计算相似度矩阵的准确性。
第一处理模块30,用于对所述相似度矩阵进行处理,得到所述图像的特征向量矩阵;
具体地，所述第一处理模块30包括：
第二计算单元,用于基于所述相似度矩阵计算所述图像的拉普拉斯矩阵;
计算得到所述相似度矩阵后,所述第二计算单元根据所述相似度矩阵计算所述图像的拉普拉斯矩阵,所述拉普拉斯矩阵L用以下公式计算:
L=D^(-1/2)AD^(-1/2)
其中,D为对角矩阵,对角矩阵用以下公式计算:
Dii=∑(j=1..n)Aij
其中，Dii表示对角矩阵D对角线上的任意元素，也就是说，计算出所述相似度矩阵后，所述第二计算单元先计算出对角矩阵，然后根据所述相似度矩阵以及所述对角矩阵，即可计算出图像的拉普拉斯矩阵，所述拉普拉斯矩阵的表现形式为：
L=
[L11 L12 ... L1n]
[L21 L22 ... L2n]
[ ⋮    ⋮        ⋮ ]
[Ln1 Ln2 ... Lnn]
特征分解单元,用于对所述拉普拉斯矩阵进行特征分解,得到所述图像的特征向量矩阵。
在得到拉普拉斯矩阵后，所述特征分解单元对所述拉普拉斯矩阵进行特征分解，设所述特征向量矩阵为V，而所述特征向量矩阵V中的任意一列特征向量为Vi，即V=[V1 V2 ... Vn]，该方法涉及矩阵论中的矩阵特征分解方法，由于拉普拉斯矩阵L是一个n×n的方阵，且有n个线性无关的特征向量Vi（i=1,...,n），这样，L可以被分解为L=VΛV^(-1)，其中V是n×n方阵，且第i列Vi为L的特征向量。Λ是对角矩阵，其对角线上的元素为对应的特征值，也即Λii=λi，最终，计算可得到特征向量矩阵[V1 V2 ... Vn]。
第二计算模块40,用于根据所述特征向量矩阵计算所述图像的熵值矩阵;
在本实施例中,得到所述特征向量矩阵后,所述第二计算模块40计算所述特征向量矩阵对应的熵值矩阵,记为E,以Vi为例,设其对应的熵值矩阵为Ei,则:
Ei=-∑(j=1..n)pij·ln(pij)，其中pij=|Vij|/∑(k=1..n)|Vik|，Vij为特征向量Vi的第j个分量
那么,熵值矩阵集E为:E=[E1,E2,...,Ei,...,En],i∈(1,n)。
第三计算模块50,用于计算所述熵值矩阵中的各个熵值的平均值;
根据得到的所述熵值矩阵,先确定所述熵值矩阵中的各个熵值,然后所述第三计算模块50对所述熵值矩阵中的各个熵值求平均值,得到Emean,计算公式为:
Emean=(1/n)∑(i=1..n)Ei
根据上述公式,即可得到平均值Emean
第二处理模块60,用于根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域。
在本实施例中,得到所述平均值后,所述第二处理模块60将所述平均值作为标准,与所述图像中各个像素点的像素值进行比对,在像素点的像素值大于所述平均值时,确定像素点为黑色像素点,黑色像素点对应的区域即为目标区域;同理,在像素点的像素值小于所述平均值时,确定像素点为白色像素点,白色像素点对应的区域即为背景区域,最终,确定所述图像的目标区域和背景区域。
本实施例提出的图像处理系统，根据图像的特征向量矩阵得到熵值矩阵，再根据熵值矩阵中的各个熵值计算出一个平均值，最后将所述平均值与所述图像中的各个像素点进行比对，以确定所述图像的目标区域和背景区域，实现了通过图像具体的特征向量，比如图像的纹理特征、图像的整体走势特征确定图像中的目标区域和背景区域，而不仅仅是根据图像的灰度值确定图像的目标区域和背景区域，使得在图像中目标区域和背景区域的颜色差别不大时，对图像的处理更加准确。
进一步地,为了提高图像处理的准确性,基于第一实施例提出本发明图像处理系统的第二实施例,在本实施例中,参照图4,所述图像处理系统还包括:
第三处理模块70,用于对所述目标区域进行强化插值处理,并对所述背景区域进行稀疏插值处理。
在本实施例中,在确定所述图像的目标区域和背景区域后,所述第三处理模块70分别对图像的目标区域和背景区域做双向插值,即对所述图像的目标区域进行强化插值处理,并对所述图像的背景区域进行稀疏插值处理,以 突出所述图像的目标区域和背景区域。
具体地,所述强化插值处理为将所述目标区域的所有点的像素值均增加一预设值,当某一点增加后的像素值超过上限像素值时,将该点的像素值记为所述上限像素值;
所述稀疏插值处理为将所述背景区域的所有点的像素值固定至一预设值。
为更好理解本实施例,举例如下:
利用九宫格强化方法增强目标区域像素值,如表1所示:
75 5 69
59 8 234
252 98 241
表1
表1表示的是图像矩阵中目标区域的任一个九宫格,将该九宫格的元素都相应增加像素值10,增加后如果大于等于255的记为255,如表2所示:
85 15 79
69 18 244
255 108 251
表2
表2为表1中强化后的九宫格像素值。可以理解的是,所述像素值的增加范围为10-15,具体增加多少根据情况而定。
同理,利用九宫格稀疏化方法虚化背景像素值,如表3所示:
230 59 45
34 35 3
70 56 211
表3
表3表示的是图像矩阵中背景区域的任一个九宫格,将该九宫格的元素 都设置成九宫格中心元素35,如表4所示:
35 35 35
35 35 35
35 35 35
表4
表4为表3中虚化后的九宫格像素值。以此类推,将图像中所有的背景像素点都做如此稀疏化处理。可以理解的是,不一定以中心元素35为设置标准,也可将九宫格的元素都设置成34,只要将九宫格的元素都设置成同一个值即可。
在本实施例中,得到图像的目标区域和背景区域后,进一步对所述图像的目标区域进行强化插值处理,并对所述图像的背景区域进行稀疏插值处理,以突出所述图像的目标区域和背景区域,从而使得图像的目标区域和背景区域区分更加明显,提高了图像处理的准确性。
以上仅为本发明的优选实施例,并非因此限制本发明的专利范围,凡是利用本发明说明书及附图内容所作的等效结构或等效流程变换,或直接或间接运用在其它相关的技术领域,均同理包括在本发明的专利保护范围内。

Claims (20)

  1. 一种图像处理方法,其特征在于,所述图像处理方法包括以下步骤:
    获取图像的像素矩阵;
    将所述图像的像素矩阵转化为灰度矩阵;
    根据所述灰度矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵;
    基于所述相似度矩阵计算所述图像的拉普拉斯矩阵;
    对所述拉普拉斯矩阵进行特征分解,得到所述图像的特征向量矩阵;
    根据所述特征向量矩阵计算所述图像的熵值矩阵;
    计算所述熵值矩阵中的各个熵值的平均值;
    根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域。
  2. 如权利要求1所述的图像处理方法,其特征在于,所述根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域的步骤之后,所述图像处理方法还包括:
    对所述目标区域进行强化插值处理,并对所述背景区域进行稀疏插值处理。
  3. 如权利要求2所述的图像处理方法,其特征在于,所述强化插值处理为将所述目标区域的所有点的像素值均增加一预设值,当某一点增加后的像素值超过上限像素值时,将该点的像素值记为所述上限像素值;
    所述稀疏插值处理为将所述背景区域的所有点的像素值固定至一预设值。
  4. 一种图像处理方法,其特征在于,所述图像处理方法包括以下步骤:
    获取图像的像素矩阵;
    基于所述图像的像素矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵;
    根据所述相似度矩阵得到所述图像的特征向量矩阵;
    根据所述特征向量矩阵计算所述图像的熵值矩阵;
    计算所述熵值矩阵中的各个熵值的平均值;
    根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域。
  5. 如权利要求4所述的图像处理方法,其特征在于,所述基于所述图像的像素矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵的步骤包括:
    将所述图像的像素矩阵转化为灰度矩阵;
    根据所述灰度矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵。
  6. 如权利要求4所述的图像处理方法,其特征在于,所述根据所述相似度矩阵得到所述图像的特征向量矩阵的步骤包括:
    基于所述相似度矩阵计算所述图像的拉普拉斯矩阵;
    对所述拉普拉斯矩阵进行特征分解,得到所述图像的特征向量矩阵。
  7. 如权利要求4所述的图像处理方法,其特征在于,所述根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域的步骤之后,所述图像处理方法还包括:
    对所述目标区域进行强化插值处理,并对所述背景区域进行稀疏插值处理。
  8. 如权利要求5所述的图像处理方法,其特征在于,所述根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域的步骤之后,所述图像处理方法还包括:
    对所述目标区域进行强化插值处理,并对所述背景区域进行稀疏插值处理。
  9. 如权利要求6所述的图像处理方法,其特征在于,所述根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域的步骤之后,所述图像处理方法还包括:
    对所述目标区域进行强化插值处理,并对所述背景区域进行稀疏插值处理。
  10. 如权利要求7所述的图像处理方法,其特征在于,所述强化插值处理为将所述目标区域的所有点的像素值均增加一预设值,当某一点增加后的像素值超过上限像素值时,将该点的像素值记为所述上限像素值;
    所述稀疏插值处理为将所述背景区域的所有点的像素值固定至一预设值。
  11. 如权利要求8所述的图像处理方法,其特征在于,所述强化插值处理为将所述目标区域的所有点的像素值均增加一预设值,当某一点增加后的像素值超过上限像素值时,将该点的像素值记为所述上限像素值;
    所述稀疏插值处理为将所述背景区域的所有点的像素值固定至一预设值。
  12. 如权利要求9所述的图像处理方法,其特征在于,所述强化插值处理为将所述目标区域的所有点的像素值均增加一预设值,当某一点增加后的像素值超过上限像素值时,将该点的像素值记为所述上限像素值;
    所述稀疏插值处理为将所述背景区域的所有点的像素值固定至一预设值。
  13. 一种图像处理系统,其特征在于,所述图像处理系统包括:
    获取模块,用于获取图像的像素矩阵;
    第一计算模块,用于基于所述图像的像素矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵;
    第一处理模块,用于对所述相似度矩阵进行处理,得到所述图像的特征向量矩阵;
    第二计算模块,用于根据所述特征向量矩阵计算所述图像的熵值矩阵;
    第三计算模块,用于计算所述熵值矩阵中的各个熵值的平均值;
    第二处理模块,用于根据所述平均值对所述图像进行二值化处理,以确定所述图像的目标区域和背景区域。
  14. 如权利要求13所述的图像处理系统,其特征在于,所述第一计算模块包括:
    转化单元,用于将所述图像的像素矩阵转化为灰度矩阵;
    第一计算单元,用于根据所述灰度矩阵,计算所述图像中任意两个像素点的相似度组成的相似度矩阵。
  15. 如权利要求13所述的图像处理系统,其特征在于,所述第一处理模块包括:
    第二计算单元,用于基于所述相似度矩阵计算所述图像的拉普拉斯矩阵;
    特征分解单元,用于对所述拉普拉斯矩阵进行特征分解,得到所述图像的特征向量矩阵。
  16. 如权利要求13所述的图像处理系统,其特征在于,所述图像处理系统还包括:
    第三处理模块,用于对所述目标区域进行强化插值处理,并对所述背景区域进行稀疏插值处理。
  17. 如权利要求14所述的图像处理系统,其特征在于,所述图像处理系统还包括:
    第三处理模块，用于对所述目标区域进行强化插值处理，并对所述背景区域进行稀疏插值处理。
  18. 如权利要求15所述的图像处理系统,其特征在于,所述图像处理系统还包括:
    第三处理模块，用于对所述目标区域进行强化插值处理，并对所述背景区域进行稀疏插值处理。
  19. 如权利要求16所述的图像处理系统,其特征在于,所述强化插值处理为将所述目标区域的所有点的像素值均增加一预设值,当某一点增加后的像素值超过上限像素值时,将该点的像素值记为所述上限像素值;
    所述稀疏插值处理为将所述背景区域的所有点的像素值固定至一预设值。
  20. 如权利要求17所述的图像处理系统,其特征在于,所述强化插值处理为将所述目标区域的所有点的像素值均增加一预设值,当某一点增加后的像素值超过上限像素值时,将该点的像素值记为所述上限像素值;
    所述稀疏插值处理为将所述背景区域的所有点的像素值固定至一预设值。
PCT/CN2016/084864 2015-12-22 2016-06-04 图像处理方法及系统 WO2017107395A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510973455.6 2015-12-22
CN201510973455.6A CN105389825B (zh) 2015-12-22 2015-12-22 图像处理方法及系统

Publications (1)

Publication Number Publication Date
WO2017107395A1 true WO2017107395A1 (zh) 2017-06-29

Family

ID=55422074

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2016/084864 WO2017107395A1 (zh) 2015-12-22 2016-06-04 图像处理方法及系统

Country Status (2)

Country Link
CN (1) CN105389825B (zh)
WO (1) WO2017107395A1 (zh)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112834541A (zh) * 2020-01-03 2021-05-25 上海纽迈电子科技有限公司 一种钠含量及钠分布的测试方法
CN113963311A (zh) * 2021-10-22 2022-01-21 江苏安泰信息科技发展有限公司 一种安全生产风险视频监控方法及系统

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105389825B (zh) * 2015-12-22 2018-11-23 深圳Tcl数字技术有限公司 图像处理方法及系统
CN111366642B (zh) * 2020-04-02 2023-03-28 中国航空制造技术研究院 基于仪器屏幕显示波形的探头超声信号的频谱分析方法

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP0188193A2 (en) * 1985-01-15 1986-07-23 International Business Machines Corporation Method and apparatus for processing image data
US5825363A (en) * 1996-05-24 1998-10-20 Microsoft Corporation Method and apparatus for determining visible surfaces
CN103295022A (zh) * 2012-02-24 2013-09-11 富泰华工业(深圳)有限公司 图像相似度计算系统及方法
CN104392233A (zh) * 2014-11-21 2015-03-04 宁波大学 一种基于区域的图像显著图提取方法
CN105005980A (zh) * 2015-07-21 2015-10-28 深圳Tcl数字技术有限公司 图像处理方法及装置
CN105389825A (zh) * 2015-12-22 2016-03-09 深圳Tcl数字技术有限公司 图像处理方法及系统

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102982544B (zh) * 2012-11-21 2015-09-30 清华大学 多前景目标图像交互式分割方法
CN104616292B (zh) * 2015-01-19 2017-07-11 南开大学 基于全局单应矩阵的单目视觉测量方法


Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112834541A (zh) * 2020-01-03 2021-05-25 上海纽迈电子科技有限公司 一种钠含量及钠分布的测试方法
CN112834541B (zh) * 2020-01-03 2022-07-29 上海纽迈电子科技有限公司 一种钠含量及钠分布的测试方法
CN113963311A (zh) * 2021-10-22 2022-01-21 江苏安泰信息科技发展有限公司 一种安全生产风险视频监控方法及系统
CN113963311B (zh) * 2021-10-22 2022-07-01 江苏安泰信息科技发展有限公司 一种安全生产风险视频监控方法及系统

Also Published As

Publication number Publication date
CN105389825B (zh) 2018-11-23
CN105389825A (zh) 2016-03-09


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 16877208

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 16.11.2018)

122 Ep: pct application non-entry in european phase

Ref document number: 16877208

Country of ref document: EP

Kind code of ref document: A1