Disclosure of Invention
In order to solve the technical problem, the invention aims to provide a method for detecting the imaging quality of a vehicle-mounted camera, which adopts the following technical scheme:
one embodiment of the invention provides a method for detecting the imaging quality of a vehicle-mounted camera, which comprises the following steps:
taking the traveling process of a vehicle leaving a tunnel as a test process, acquiring M consecutive frames of imaging images collected by the vehicle-mounted camera during the test process, and converting them into corresponding grayscale images, M being a positive integer;
obtaining the texture complexity of each grayscale image, and screening out N consecutive frames of tunnel boundary images according to the grayscale mean value and the texture complexity of the grayscale images, N being a positive integer; acquiring the local uniformity degree of each tunnel boundary image;
performing adaptive segmentation on each tunnel boundary image to obtain a tunnel portal region and a tunnel interior region in the tunnel boundary image; acquiring the region texture complexity of the tunnel portal region and of the tunnel interior region; calculating a first area proportion occupied by the tunnel portal region; and acquiring the image quality of the tunnel boundary image according to the local uniformity degree, the first area proportion and the region texture complexities;
and evaluating the imaging quality of the vehicle-mounted camera according to the image quality differences and the local uniformity differences between tunnel boundary images of adjacent frames and the corresponding frame numbers.
Preferably, the texture complexity obtaining method includes:
acquiring the gray-level co-occurrence matrix of each grayscale image, calculating the gray-level entropy of the gray-level co-occurrence matrix, and taking the normalized entropy as the texture complexity.
Preferably, the screening process of the tunnel boundary image is as follows:
forming feature binary groups from the grayscale mean value and the texture complexity, clustering all the feature binary groups to obtain a first category representing images inside the tunnel and a second category representing images outside the tunnel, and taking the N consecutive frames of images between the first category and the second category as tunnel boundary images.
Preferably, the first category and the second category are obtained by:
calculating the Euclidean distance between the binary groups corresponding to the centers of every two adjacent classes, taking the two adjacent classes with the maximum Euclidean distance as an inner-outer group, taking the class corresponding to the binary group with the lower texture complexity in the inner-outer group as the first category, and taking the class corresponding to the binary group with the higher texture complexity as the second category.
Preferably, the method for obtaining the local uniformity degree comprises the following steps:
and acquiring the local uniformity according to the occurrence frequency of each pixel pair in the gray level co-occurrence matrix and the pixel difference of the pixel pair.
Preferably, the process of acquiring the image quality of the tunnel boundary image is as follows:
taking the region texture complexity of the inner region of the tunnel as a first region texture complexity, and calculating a first difference between the first region texture complexity and the texture complexity corresponding to the class center of the first class;
taking the region texture complexity of the tunnel portal region as a second region texture complexity, and calculating a second difference between the second region texture complexity and the texture complexity corresponding to the class center of the second class;
and acquiring a second area proportion occupied by the tunnel interior region based on the first area proportion occupied by the tunnel portal region, and obtaining the image quality of the tunnel boundary image according to the first area proportion, the first difference, the second area proportion, the second difference and the local uniformity degree.
The embodiment of the invention at least has the following beneficial effects:
1. The strength of the white hole effect generated while the vehicle leaves the tunnel is characterized by the local uniformity degree of the tunnel boundary images, and the image quality is obtained in combination with the difference in texture complexity between different regions. The imaging quality of the vehicle-mounted camera is then evaluated according to the image quality differences and uniformity differences between tunnel boundary images of adjacent frames and the corresponding frame numbers. The adaptability of the vehicle-mounted camera can thus be evaluated under conditions of obvious light change, the objective influence of poor overall image quality caused by differences in the illumination environment is avoided, and the accuracy of image quality judgment is improved.
2. Tunnel junction images are screened out according to the grayscale mean value and the texture complexity of the grayscale images, so that image sequences at tunnel junctions can be accurately acquired from images collected at different driving times, increasing the generalization capability of the system.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined object, the following detailed description is provided with reference to the accompanying drawings and preferred embodiments for the imaging quality detection method of the vehicle-mounted camera according to the present invention, and its specific implementation, structure, features and effects. In the following description, different "one embodiment" or "another embodiment" refers to not necessarily the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the imaging quality detection method for the vehicle-mounted camera in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a method for detecting imaging quality of a vehicle-mounted camera according to an embodiment of the present invention is shown, where the method includes the following steps:
Step S001: taking the traveling process of the vehicle leaving the tunnel as a test process, acquiring M consecutive frames of imaging images collected by the vehicle-mounted camera during the test process, and converting them into corresponding grayscale images, M being a positive integer.
The method comprises the following specific steps:
1. Acquire continuous frame images of the vehicle leaving the tunnel with the vehicle-mounted camera.
Under a fixed, preset illumination condition, the camera's ability to adjust the image under different lighting cannot be reflected; therefore, the large change in brightness at the tunnel junction in the actual environment is utilized. The traveling process of the vehicle leaving the tunnel is taken as the test process, and M consecutive frames of imaging images collected by the vehicle-mounted camera during the test process are acquired, M being a positive integer.
2. Each imaged image is converted to a grayscale image.
There are many graying methods; in the embodiment of the invention, the image is grayed by the weighted-average method using the RGB values of the pixel points, so that each imaging image is converted into a grayscale image, yielding M consecutive frames of grayscale images.
In other embodiments, other algorithms that can achieve the same effect, such as a component method, a maximum value method, or an average value method, may also be used to achieve graying of the image.
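As a minimal illustration of the weighted-average graying step (the luminance weights 0.299/0.587/0.114 are the common Rec. 601 choice; the nested-list image representation is an assumption for illustration):

```python
def to_gray(rgb_image):
    """Weighted-average graying: convert an RGB image, given as nested
    lists of (R, G, B) tuples, to a grayscale image of 0-255 values."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

# Example: a 2x2 RGB image.
img = [[(255, 0, 0), (0, 255, 0)],
       [(0, 0, 255), (128, 128, 128)]]
gray = to_gray(img)  # [[76, 150], [29, 128]]
```

The component, maximum-value, and average-value methods mentioned above would replace only the per-pixel expression.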
Step S002: obtaining the texture complexity of each grayscale image, screening out N consecutive frames of tunnel boundary images according to the grayscale mean value and the texture complexity of the grayscale images, N being a positive integer, and acquiring the local uniformity degree of each tunnel boundary image.
The method comprises the following specific steps:
1. Acquire the texture complexity of each grayscale image.
Acquire the gray-level co-occurrence matrix of each grayscale image, calculate the gray-level entropy of the matrix, and take the normalized entropy as the texture complexity.
Taking the j-th grayscale image as an example, obtain its corresponding gray-level co-occurrence matrix, which is an L×L matrix, where L is the number of gray levels in the grayscale image; the element in row a and column b of the matrix is the frequency of occurrence of the pixel pair consisting of the a-th gray value and the b-th gray value.
Calculate the gray-level entropy of the gray-level co-occurrence matrix, and take the normalized entropy as the texture complexity of the image.
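The entropy-based texture complexity above can be sketched as follows; using only the horizontal pixel-pair offset and normalizing by the maximum entropy log2(L^2) are simplifying assumptions:

```python
import math
from collections import Counter

def glcm_entropy(gray, levels=256):
    """Texture complexity: entropy of the gray-level co-occurrence matrix
    (horizontal neighbor pairs), normalized to [0, 1]."""
    pairs = Counter()
    for row in gray:
        for a, b in zip(row, row[1:]):
            pairs[(a, b)] += 1           # co-occurrence counts
    total = sum(pairs.values())
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in pairs.values())
    # An L x L co-occurrence matrix carries at most log2(L^2) bits.
    return entropy / math.log2(levels ** 2)
```

A flat image yields complexity 0; richer textures push the score toward 1.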
2. Screen out the tunnel boundary images.
Form feature binary groups from the grayscale mean value and the texture complexity, cluster all the feature binary groups to obtain a first category representing images inside the tunnel and a second category representing images outside the tunnel, and take the N consecutive frames of images between the first category and the second category as tunnel boundary images.
The grayscale mean value and the texture complexity of the j-th grayscale image form its feature binary group (g_j, w_j), where g_j denotes the grayscale mean value of the j-th grayscale image and w_j denotes the texture complexity of the j-th grayscale image; the feature binary groups of all grayscale images form a feature sequence.
A two-dimensional rectangular coordinate system is established with the grayscale mean value as the horizontal axis and the texture complexity as the vertical axis, and the feature binary groups in the feature sequence are mapped into this coordinate system. Because the environment inside the tunnel is homogeneous and the environment outside the tunnel does not change greatly in a short time, the feature binary groups corresponding to the tunnel interior images and the tunnel exterior images are concentrated in the coordinate system. The feature binary groups in the feature sequence are therefore clustered using mean-shift clustering to obtain a plurality of cluster categories and their corresponding cluster centers, each cluster category being a set of images with similar characteristics.
Since the scene outside the tunnel does not change greatly in a short time but the natural environment is complex, the tunnel exterior images may correspond to multiple cluster categories after clustering according to the differences between their feature binary groups. Only the exterior images nearest to the tunnel portal need to be analyzed, so the categories corresponding to the interior images and to the exterior images nearest to the tunnel portal must be screened out.
Calculate the Euclidean distance between the binary groups corresponding to the centers of every two adjacent classes, take the two adjacent classes with the maximum Euclidean distance as an inner-outer group, take the class corresponding to the binary group with the lower texture complexity in the inner-outer group as the first category, and take the class corresponding to the binary group with the higher texture complexity as the second category.
Because the environment inside the tunnel is simple, the texture complexity inside the tunnel is low and the interior images are highly similar, so the texture complexity and grayscale values of the tunnel interior images differ little from each other. The natural environment outside the tunnel is complex, so its texture complexity is high; moreover, the exterior environment changes little in a short time, so the exterior images are also highly similar, i.e., their texture complexity and grayscale values differ little within a short time. Therefore, after clustering, the group of adjacent classes with the maximum Euclidean distance between their class centers is selected as the inner-outer group; since the tunnel interior environment is homogeneous and its complexity is low, the class corresponding to the binary group with the low texture complexity in the inner-outer group is taken as the first category, and the class corresponding to the binary group with the high texture complexity is taken as the second category.
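A sketch of the inner-outer group selection, assuming the cluster centers are supplied as (grayscale mean, texture complexity) tuples already ordered by the average frame index of their member images:

```python
import math

def pick_inner_outer(centers):
    """Find the adjacent pair of cluster centers with the largest
    Euclidean distance; within that pair, the center with the lower
    texture complexity is the first (tunnel-interior) category and
    the other is the second (tunnel-exterior) category."""
    best = max(range(len(centers) - 1),
               key=lambda i: math.dist(centers[i], centers[i + 1]))
    a, b = centers[best], centers[best + 1]
    return (a, b) if a[1] < b[1] else (b, a)

# Example centers: two dim interior clusters, two bright exterior ones.
centers = [(40.0, 0.20), (45.0, 0.25), (180.0, 0.80), (175.0, 0.75)]
inner, outer = pick_inner_outer(centers)  # (45.0, 0.25), (180.0, 0.80)
```

The large jump in the (grayscale mean, complexity) plane marks the interior/exterior boundary.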
The first category is marked as category A, with the binary group corresponding to its cluster center denoted (g_A, w_A); the second category is marked as category B, with the binary group corresponding to its cluster center denoted (g_B, w_B).
Because the tunnel junction changes continuously, the feature binary groups corresponding to the images acquired during this period are scattered and cannot form a cluster category. Among the M consecutive frames of grayscale images, the image set corresponding to the first category comprises multiple frames of tunnel interior images, and the image set corresponding to the second category comprises the multiple frames of tunnel exterior images nearest to the tunnel portal. Therefore, the N consecutive frames of images between the first category and the second category are the tunnel boundary images, and each tunnel boundary image contains both the tunnel interior and the tunnel portal.
It should be noted that the tunnel boundary images correspond to discrete points that cannot form a cluster category. However, during driving there may also be individual abnormal images, for example a frame that is abnormally bright at a certain moment because of an oncoming vehicle's high beams inside the tunnel; such a frame also forms a discrete noise point. Therefore, the discrete points cannot be directly selected as the tunnel boundary images.
3. Acquire the local uniformity degree according to the occurrence frequency of each pixel pair in the gray-level co-occurrence matrix and the pixel difference of the pixel pair.
Taking the j-th tunnel boundary image as an example, calculate the local uniformity degree U_j from its corresponding gray-level co-occurrence matrix, for example as:
U_j = Σ_{a,b} P(a, b) / (1 + (a − b)^2)
where P(a, b) is the normalized occurrence frequency of the pixel pair composed of the a-th and b-th gray values.
When pixel pairs with small gray-value differences occur with higher probability in the image, the gray distribution within small regions is more uniform; that is, the larger the value of U_j, the more uniformly distributed the local areas of the image.
When the tunnel portal region is too bright, a white hole effect occurs. Besides making the whole tunnel portal region brighter, the white hole effect makes the tunnel interior region in the image too dark and causes its texture information to be lost. That is, the grayscale variation in the tunnel portal region is small and the grayscale variation in the tunnel interior region is also small; in other words, when the white hole effect appears in an image, the grayscale variation in the local regions of the image is more uniform.
Therefore, the larger the local uniformity, the stronger the white hole effect, and when the white hole effect in an image is more serious, the higher the possibility of potential safety hazard exists, the poorer the quality of the image.
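Treating the local uniformity as the homogeneity-style statistic described above (occurrence frequency weighted down by squared gray difference; this specific weighting is an assumption), a minimal sketch:

```python
def local_uniformity(glcm_counts):
    """glcm_counts maps (a, b) gray-value pairs to occurrence counts.
    Pairs with small gray difference contribute more, so a frame with
    a strong white hole effect (flat bright portal, flat dark interior)
    scores close to 1."""
    total = sum(glcm_counts.values())
    return sum((c / total) / (1 + (a - b) ** 2)
               for (a, b), c in glcm_counts.items())
```

A perfectly flat region scores exactly 1.0; strongly contrasting pixel pairs drive the score toward 0.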
Step S003: performing adaptive segmentation on each tunnel boundary image to obtain the tunnel portal region and the tunnel interior region in the tunnel boundary image; acquiring the region texture complexity of the tunnel portal region and of the tunnel interior region; calculating the first area proportion occupied by the tunnel portal region; and obtaining the image quality of the tunnel boundary image according to the local uniformity degree, the first area proportion and the region texture complexities.
The method comprises the following specific steps:
1. Perform adaptive segmentation on each tunnel boundary image to obtain the tunnel portal region and the tunnel interior region in the tunnel boundary image.
Perform adaptive threshold segmentation on each tunnel boundary image to obtain the tunnel portal region; the remaining region is the tunnel interior region, thereby dividing the tunnel boundary image into the tunnel portal region and the tunnel interior region.
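The text does not pin the adaptive threshold segmentation to a specific algorithm; Otsu's method is a natural fit and is sketched here on a flat list of gray values (pixels above the returned threshold would be taken as the bright tunnel portal region):

```python
def otsu_threshold(gray_values):
    """Otsu's method: return the threshold t (0-255) that maximizes the
    between-class variance of the two groups {v <= t} and {v > t}."""
    hist = [0] * 256
    for v in gray_values:
        hist[v] += 1
    total = len(gray_values)
    sum_all = sum(i * h for i, h in enumerate(hist))
    best_t, best_var = 0, -1.0
    w0 = sum0 = 0
    for t in range(256):
        w0 += hist[t]                 # pixels in the dark class
        if w0 == 0:
            continue
        w1 = total - w0               # pixels in the bright class
        if w1 == 0:
            break
        sum0 += t * hist[t]
        mean0, mean1 = sum0 / w0, (sum_all - sum0) / w1
        var_between = w0 * w1 * (mean0 - mean1) ** 2
        if var_between > best_var:
            best_var, best_t = var_between, t
    return best_t
```

On a boundary frame the bright class corresponds to the portal, the dark class to the tunnel interior.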
2. Acquire the region texture complexity of the tunnel portal region and of the tunnel interior region, and further acquire the first difference and the second difference.
And calculating a first difference between the texture complexity of the first region and the texture complexity corresponding to the class center of the first class by taking the region texture complexity of the inner region of the tunnel as the first region texture complexity.
And acquiring a gray level co-occurrence matrix corresponding to a tunnel internal area in the tunnel boundary image, and calculating the gray level entropy of the gray level co-occurrence matrix of the area to obtain the area texture complexity of the tunnel internal area as the first area texture complexity.
Calculate the first difference D1 between the first region texture complexity w1 and the texture complexity w_A corresponding to the class center of the first category, for example D1 = |w1 − w_A|.
And calculating a second difference between the texture complexity of the second region and the texture complexity corresponding to the class center of the second class by taking the region texture complexity of the tunnel portal region as the second region texture complexity.
And acquiring a gray level co-occurrence matrix corresponding to a tunnel portal area in the tunnel boundary image, and calculating the gray level entropy of the gray level co-occurrence matrix of the area to obtain the area texture complexity of the tunnel portal area as the second area texture complexity.
Calculate the second difference D2 between the second region texture complexity w2 and the texture complexity w_B corresponding to the class center of the second category, for example D2 = |w2 − w_B|.
3. And acquiring the image quality of the tunnel boundary image.
And acquiring a second area proportion occupied by the tunnel inner area based on the first area proportion occupied by the tunnel opening area, and obtaining the image quality of the tunnel boundary image according to the first area proportion, the first difference, the second area proportion, the second difference and the local uniformity degree.
Also taking the j-th tunnel boundary image as an example, the image quality may, for example, be calculated as:
R_j = (a1 × e^(−D2) + a2 × e^(−D1)) / (1 + U_j)
wherein R_j denotes the image quality of the j-th tunnel boundary image, U_j denotes the local uniformity degree of the j-th tunnel boundary image, S1 denotes the area of the tunnel portal region, S denotes the area of the j-th tunnel boundary image, a1 = S1/S denotes the first area proportion occupied by the tunnel portal region, a2 = 1 − a1 denotes the second area proportion occupied by the tunnel interior region, and e denotes the natural constant; that is, the preset value in the embodiment of the present invention is the natural constant e.
In other embodiments, the preset value may also be another natural number greater than 1.
The first area proportion indicates the degree of influence of the tunnel portal region on the overall image quality.
When the vehicle-mounted camera is closer to the tunnel portal, the texture difference between the interior region of the tunnel boundary image and the tunnel interior images is larger, so the first difference is larger; the texture of the tunnel portal region is more similar to the tunnel exterior, so the second difference is smaller; meanwhile, the larger the first area proportion, the smaller the second area proportion.
For a negative exponent of the natural constant e, the closer the exponent is to 0, the larger the result of the exponential function and the faster it grows; the closer the exponent is to 1, the smaller the result and the slower it shrinks. Therefore, when the second difference is smaller, the obtained exponential result, i.e., the corresponding coefficient, is larger, the image quality value is larger, and the image quality is better.
Similarly, when the camera is farther from the tunnel junction, the first difference is smaller and the corresponding second area proportion is larger, so the obtained image quality value is larger and the image quality is better.
When the vehicle-mounted camera is located in the middle of the tunnel junction, i.e., at the moment the white hole effect occurs, the first difference and the second difference are both larger, the corresponding coefficients are close to each other and both smaller, and the resulting image quality value is smaller, indicating worse image quality.
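The behavior discussed above can be condensed into a small scoring function. This is only a sketch of one formulation consistent with the described behavior (the portal proportion weights e^(−D2), the interior proportion weights e^(−D1), and high local uniformity is penalized); the exact formula of the embodiment is not reproduced here, and all names are assumptions:

```python
import math

def frame_quality(u, s_portal, s_total, d1, d2):
    """Exemplary per-frame quality score.
    u        - local uniformity degree of the boundary frame
    s_portal - area of the tunnel portal region
    s_total  - area of the whole frame
    d1, d2   - first / second texture-complexity differences"""
    a1 = s_portal / s_total          # first area proportion (portal)
    a2 = 1 - a1                      # second area proportion (interior)
    # Small differences give coefficients near 1; a strong white hole
    # (large u) drags the score down.
    return (a1 * math.exp(-d2) + a2 * math.exp(-d1)) / (1 + u)
```

A frame near the exit (small d2, large portal, weak white hole) scores higher than a frame in the middle of the junction with a strong white hole.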
Step S004: evaluating the imaging quality of the vehicle-mounted camera according to the image quality differences and the local uniformity differences between tunnel boundary images of adjacent frames and the corresponding frame numbers.
Specifically, the imaging quality of the vehicle-mounted camera may, for example, be evaluated as:
Q = Σ_{j=1}^{N−1} ((U_j − U_{j+1}) + (R_{j+1} − R_j)) / j
wherein Q represents the imaging quality of the vehicle-mounted camera, N represents the number of frames of tunnel boundary images, and U_j and R_j represent the local uniformity degree and the image quality of the j-th tunnel boundary image, respectively.
The image quality obtained in step S003 is the objective image quality of each individual image. For any vehicle-mounted camera, the stronger the white hole effect, the worse the quality of the acquired image; this much is certain. However, a vehicle-mounted camera with good imaging quality can adapt to the light change as quickly as possible and adjust its imaging accordingly.
In the process of calculating the imaging quality Q of the vehicle-mounted camera, a smaller frame index indicates an earlier acquisition time, and the decrease in local uniformity between adjacent frames reflects the rate at which the white hole effect disappears. Since the disappearance of the white hole effect is a gradual process, the corresponding local uniformity degree decreases gradually. The earlier the local uniformity changes in the decreasing direction, i.e., the smaller the frame index at which the local uniformity drops more sharply between adjacent frames, the faster the vehicle-mounted camera adjusts to the light change, the larger the imaging quality Q, and the better the imaging quality. As the white hole effect disappears, the local uniformity of the images becomes smaller and the image quality gradually increases until it stabilizes; thus the earlier the image quality changes for the better, i.e., the smaller the frame index at which the image quality increases more between adjacent frames, the larger the imaging quality Q, and the better the imaging quality of the vehicle-mounted camera.
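A sketch of the camera-level score consistent with this description: adjacent-frame drops in local uniformity and gains in image quality are rewarded, weighted more heavily for earlier frames. The exact formula of the embodiment is not reproduced; this formulation and its names are assumptions:

```python
def camera_quality(qualities, uniformities):
    """Exemplary camera-level score over the N boundary frames.
    qualities    - per-frame image quality values R_1..R_N
    uniformities - per-frame local uniformity values U_1..U_N
    Dividing by the frame index weights early adaptation more."""
    n = len(qualities)
    return sum(((uniformities[j] - uniformities[j + 1])   # uniformity drop
                + (qualities[j + 1] - qualities[j]))      # quality gain
               / (j + 1)                                  # 1-based index
               for j in range(n - 1))
```

A camera that sheds the white hole effect quickly (steep early drop in uniformity, steep early rise in quality) scores higher than one that adapts slowly.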
In summary, in the embodiment of the invention, the traveling process of the vehicle leaving the tunnel is taken as the test process, and the M consecutive frames of imaging images collected by the vehicle-mounted camera during the test process are acquired and converted into corresponding grayscale images; the texture complexity of each grayscale image is obtained, and N consecutive frames of tunnel boundary images are screened out according to the grayscale mean value and the texture complexity of the grayscale images; the local uniformity degree of each tunnel boundary image is obtained; adaptive segmentation is performed on each tunnel boundary image to obtain the tunnel portal region and the tunnel interior region in the tunnel boundary image; the region texture complexity of the tunnel portal region and of the tunnel interior region is acquired; the first area proportion occupied by the tunnel portal region is calculated; the image quality of the tunnel boundary image is acquired according to the local uniformity degree, the first area proportion and the region texture complexities; and the imaging quality of the vehicle-mounted camera is evaluated according to the image quality differences and the local uniformity differences between tunnel boundary images of adjacent frames and the corresponding frame numbers. The embodiment of the invention can evaluate the adaptive capability of the vehicle-mounted camera under conditions of obvious light change, avoids the influence of the objective environment, and improves the evaluation accuracy.
It should be noted that: the sequence of the above embodiments of the present invention is only for description, and does not represent the advantages or disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application, and are included in the protection scope of the present application.