CN114820623B - Imaging quality detection method for vehicle-mounted camera - Google Patents

Imaging quality detection method for vehicle-mounted camera Download PDF

Info

Publication number
CN114820623B
CN114820623B CN202210752630.9A CN202210752630A CN114820623B CN 114820623 B CN114820623 B CN 114820623B CN 202210752630 A CN202210752630 A CN 202210752630A CN 114820623 B CN114820623 B CN 114820623B
Authority
CN
China
Prior art keywords
tunnel
image
texture complexity
gray level
vehicle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202210752630.9A
Other languages
Chinese (zh)
Other versions
CN114820623A (en
Inventor
刘朋
刘鲁冉
薛新芹
杜犇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Luran Optoelectronics Weishan Co ltd
Original Assignee
Luran Optoelectronics Weishan Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Luran Optoelectronics Weishan Co ltd filed Critical Luran Optoelectronics Weishan Co ltd
Priority to CN202210752630.9A priority Critical patent/CN114820623B/en
Publication of CN114820623A publication Critical patent/CN114820623A/en
Application granted granted Critical
Publication of CN114820623B publication Critical patent/CN114820623B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/24Classification techniques
    • G06F18/241Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches based on distances to training or reference patterns
    • G06F18/24133Distances to prototypes
    • G06F18/24137Distances to cluster centroïds
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/11Region-based segmentation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/40Analysis of texture
    • G06T7/41Analysis of texture based on statistical description of texture
    • G06T7/45Analysis of texture based on statistical description of texture using co-occurrence matrix computation
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00Arrangements for image or video recognition or understanding
    • G06V10/70Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N17/00Diagnosis, testing or measuring for television systems or their details
    • H04N17/002Diagnosis, testing or measuring for television systems or their details for television cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30168Image quality inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30248Vehicle exterior or interior
    • G06T2207/30252Vehicle exterior; Vicinity of vehicle
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02TCLIMATE CHANGE MITIGATION TECHNOLOGIES RELATED TO TRANSPORTATION
    • Y02T10/00Road transport of goods or passengers
    • Y02T10/10Internal combustion engine [ICE] based vehicles
    • Y02T10/40Engine management systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • Artificial Intelligence (AREA)
  • General Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Medical Informatics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Probability & Statistics with Applications (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Databases & Information Systems (AREA)
  • Evolutionary Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Quality & Reliability (AREA)
  • Biomedical Technology (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to the technical field of image data processing, in particular to a method for detecting the imaging quality of a vehicle-mounted cameraMImaging the image frame and converting the image frame into a corresponding gray image; obtaining the texture complexity of each gray level image, and screening out continuous images according to the gray level mean value and the texture complexity of the gray level imageNFrame tunnel boundary images; acquiring the local uniformity degree of each tunnel boundary image; acquiring the texture complexity of the region of the tunnel mouth and the region inside the tunnel; acquiring the image quality of a tunnel boundary image according to the local uniformity degree, the area proportion and the area texture complexity; and evaluating the imaging quality of the vehicle-mounted camera according to the image quality difference and the local uniformity difference of the boundary images of the adjacent frame tunnels and the corresponding frame number. The embodiment of the invention can evaluate the adaptive capacity of the vehicle-mounted camera under the condition of obvious light change, avoids the influence of an objective environment and improves the evaluation accuracy.

Description

Imaging quality detection method for vehicle-mounted camera
Technical Field
The invention relates to the technical field of image data processing, in particular to a method for detecting the imaging quality of a vehicle-mounted camera.
Background
Four phases of automatic driving are distinguished according to the level of automation: driving assistance, partial automation, high automation and full automation. Nowadays, automatic driving of vehicles is mostly driving assistance and partially automated, wherein a Driving Assistance System (DAS) provides assistance to the driver, including providing important or beneficial driving-related information, and issuing clear and concise warnings when situations begin to become critical. Such as a "lane departure warning" (LDW) system, etc. Some automated systems are systems that can automatically intervene when a driver receives a warning but fails to take appropriate action in a timely manner, such as "automatic emergency braking" (AEB) systems and "emergency lane assist" (ELA) systems. Also some automatic driving of vehicles is a highly automated system, which can take over the responsibility of operating the vehicle instead of the driver for longer or shorter periods of time, but still requires a system in which the driver monitors the driving activity.
In any automatic driving stage, the surrounding traffic conditions need to be known through a video camera, a radar sensor and a laser range finder, and the road ahead needs to be navigated through a detailed map. In order to drive safety, the vehicle-mounted camera needs to keep a stable working state for a long time under various complex working conditions such as strong shimmer, vibration and the like to acquire stable, reliable and clear surrounding environment data, so that the requirement on the imaging quality of the vehicle-mounted camera is high, and the imaging quality of the vehicle-mounted camera needs to be detected.
The existing evaluation of the image quality of a vehicle-mounted camera is to evaluate the quality of the camera by acquiring the imaging effect of different graphics cards acquired under the set illumination condition, but the illumination environment in the actual driving process is complex, and the adjustment capability of the camera on the image under different illumination conditions of the actual environment cannot be reflected by manually controlled illumination change, so that certain limitation exists.
Disclosure of Invention
In order to solve the technical problem, the invention aims to provide a method for detecting the imaging quality of a vehicle-mounted camera, which adopts the following technical scheme:
one embodiment of the invention provides a method for detecting the imaging quality of a vehicle-mounted camera, which comprises the following steps:
taking the advancing process of the vehicle leaving the tunnel as a test process to acquire the continuous data acquired by the vehicle-mounted camera in the test processMFrame imaging images are converted into corresponding gray level images;Mis a positive integer;
obtaining the texture complexity of each gray level image, and screening out continuous images according to the gray level mean value of the gray level image and the texture complexityNFrame tunnel boundary diagramLike the image of the eye(s) to be,
Figure 329155DEST_PATH_IMAGE001
and is andNis a positive integer; acquiring the local uniformity degree of each tunnel boundary image;
performing self-adaptive segmentation on each tunnel boundary image to obtain a tunnel mouth area and a tunnel internal area in the tunnel boundary image; acquiring the texture complexity of the region of the tunnel mouth and the region inside the tunnel; calculating a first area proportion occupied by the area of the tunnel port; acquiring the image quality of a tunnel boundary image according to the local uniformity degree, the first area proportion and the area texture complexity;
and evaluating the imaging quality of the vehicle-mounted camera according to the image quality difference and the local uniformity difference of the boundary images of the adjacent frame tunnels and the corresponding frame number.
Preferably, the texture complexity obtaining method includes:
and acquiring a gray level co-occurrence matrix of each gray level image, calculating an entropy value of a gray level entropy of the gray level co-occurrence matrix, and taking a normalization result of the entropy value as the texture complexity.
Preferably, the screening process of the tunnel boundary image is as follows:
forming feature binary groups by the gray level mean value and the texture complexity, clustering all the feature binary groups to obtain a first category representing images inside the tunnel and a second category representing images outside the tunnel, and continuously classifying the first category and the second categoryNThe frame image is used as a tunnel boundary image.
Preferably, the first category and the second category are obtained by:
and calculating Euclidean distance between the two tuples corresponding to the centers of the two classes, wherein the two adjacent classes with the maximum Euclidean distance are used as an inner and outer group, the class corresponding to the tuple with lower texture complexity in the inner and outer groups is used as a first class, and the class corresponding to the tuple with higher texture complexity is used as a second class.
Preferably, the method for obtaining the local uniformity degree comprises the following steps:
and acquiring the local uniformity according to the occurrence frequency of each pixel pair in the gray level co-occurrence matrix and the pixel difference of the pixel pair.
Preferably, the process of acquiring the image quality of the tunnel boundary image is as follows:
taking the region texture complexity of the inner region of the tunnel as a first region texture complexity, and calculating a first difference between the first region texture complexity and the texture complexity corresponding to the class center of the first class;
taking the region texture complexity of the tunnel portal region as a second region texture complexity, and calculating a second difference between the second region texture complexity and the texture complexity corresponding to the class center of the second class;
and acquiring a second area proportion occupied by the tunnel inner area based on the first area proportion occupied by the tunnel opening area, and obtaining the image quality of the tunnel boundary image according to the first area proportion, the first difference, the second area proportion, the second difference and the local uniformity degree.
The embodiment of the invention at least has the following beneficial effects:
1. the method has the advantages that the strength of the white hole effect generated in the process of leaving the tunnel by the vehicle is represented according to the local uniformity degree of the border image of the tunnel, the image quality is obtained by combining the difference condition of the texture complexity between different areas, the imaging quality of the vehicle-mounted camera is evaluated according to the difference and uniformity degree difference of the image quality of the border image of adjacent frames of the tunnel and the corresponding frame number, the adaptability of the vehicle-mounted camera can be evaluated under the condition that the light change is obvious, the objective influence of poor overall image quality caused by the difference of the illumination environment of the image is avoided, and the accuracy of image quality judgment is improved.
2. And screening out tunnel junction images according to the gray average value and the texture complexity of the gray images, and accurately acquiring image sequences at tunnel junctions from images acquired at different driving times to increase the generalization capability of the system.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions and advantages of the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, it is obvious that the drawings in the following description are only some embodiments of the present invention, and other drawings can be obtained by those skilled in the art without creative efforts.
Fig. 1 is a flowchart illustrating steps of a method for detecting imaging quality of a vehicle-mounted camera according to an embodiment of the present invention.
Detailed Description
To further illustrate the technical means and effects of the present invention adopted to achieve the predetermined object, the following detailed description is provided with reference to the accompanying drawings and preferred embodiments for the imaging quality detection method of the vehicle-mounted camera according to the present invention, and its specific implementation, structure, features and effects. In the following description, different "one embodiment" or "another embodiment" refers to not necessarily the same embodiment. Furthermore, the particular features, structures, or characteristics may be combined in any suitable manner in one or more embodiments.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs.
The following describes a specific scheme of the imaging quality detection method for the vehicle-mounted camera in detail with reference to the accompanying drawings.
Referring to fig. 1, a flowchart of steps of a method for detecting imaging quality of a vehicle-mounted camera according to an embodiment of the present invention is shown, where the method includes the following steps:
step S001, taking the advancing process of the vehicle leaving the tunnel as a test process, and acquiring the continuous data acquired by the vehicle-mounted camera in the test processMFrame imaging images are converted into corresponding gray level images;Mis a positive integer.
The method comprises the following specific steps:
1. and acquiring continuous frame images of the vehicle in the process of leaving the tunnel by using the vehicle-mounted camera.
In order to avoid that the adjustment capability of the camera on the image under different illumination conditions cannot be reflected under the set illumination conditions, the condition that the light and shade change is large at the junction of the tunnel in the actual environment is utilized, the advancing process of the vehicle leaving the tunnel is taken as a test process, and the continuous collected by the vehicle-mounted camera in the test process is obtainedMThe frame is imaged with the image,Mis a positive integer.
2. Each imaged image is converted to a grayscale image.
The gray level method has a plurality of methods, and the image is gray level by using the RGB value of the pixel point through the weighted average method in the embodiment of the invention, so that the imaging image is converted into the gray level image to obtain continuous gray level imagesMFrame gray scale image.
In other embodiments, other algorithms that can achieve the same effect, such as a component method, a maximum value method, or an average value method, may also be used to achieve graying of the image.
S002, obtaining texture complexity of each gray level image, and screening out continuous images according to the gray level mean value and the texture complexity of the gray level imageNThe boundary image of the frame tunnel is displayed,
Figure 46575DEST_PATH_IMAGE001
and is andNis a positive integer; and acquiring the local uniformity of each tunnel boundary image.
The method comprises the following specific steps:
1. and acquiring the texture complexity of each gray level image.
And acquiring a gray level co-occurrence matrix of each gray level image, calculating an entropy value of a gray level entropy of the gray level co-occurrence matrix, and taking a normalization result of the entropy value as texture complexity.
By the first
Figure 363156DEST_PATH_IMAGE002
Taking a gray image as an example, obtaining a gray co-occurrence matrix corresponding to the gray image, where the matrix is
Figure 584053DEST_PATH_IMAGE003
The matrix is a matrix of a plurality of matrices,
Figure 203253DEST_PATH_IMAGE004
is the number of grey levels in the grey scale image, wherein the first in the matrix
Figure 627543DEST_PATH_IMAGE005
Go to the first
Figure 498547DEST_PATH_IMAGE006
Element corresponding to column
Figure 964163DEST_PATH_IMAGE007
Is shown as
Figure 613320DEST_PATH_IMAGE005
Individual gray value and the first
Figure 898807DEST_PATH_IMAGE006
The frequency of occurrence of pixel pairs consisting of gray values.
And calculating the entropy value of the gray entropy of the gray co-occurrence matrix, and taking the result after the entropy value normalization as the image texture complexity.
2. And screening the tunnel boundary image.
Forming feature binary groups by using the gray level mean value and the texture complexity, clustering all the feature binary groups to obtain a first class representing images inside the tunnel and a second class representing images outside the tunnel, and continuously obtaining the first class and the second classNThe frame image is used as a tunnel boundary image.
Forming the second by gray mean and texture complexity
Figure 307923DEST_PATH_IMAGE002
Characteristic binary group of Zhang grayscale image
Figure 985636DEST_PATH_IMAGE008
Wherein, in the step (A),
Figure 212218DEST_PATH_IMAGE009
is shown as
Figure 391526DEST_PATH_IMAGE002
The mean value of the gray levels of the tone map,
Figure 322442DEST_PATH_IMAGE010
is shown as
Figure 638017DEST_PATH_IMAGE002
And (3) expanding the texture complexity of the gray level image, wherein the characteristic binary groups of all the gray level images form a characteristic sequence.
The method comprises the steps of establishing a two-dimensional rectangular coordinate system by taking a gray value as a transverse axis and a texture complexity as a longitudinal axis, mapping characteristic binary groups in a characteristic sequence into the rectangular coordinate system, wherein the distribution of the characteristic binary groups corresponding to an internal image and an external image of a tunnel in the rectangular coordinate system is concentrated due to the fact that the internal environment of the tunnel is single and the external environment of the tunnel does not change greatly in a short time, clustering the characteristic binary groups in the characteristic sequence by using mean shift clustering, and obtaining a plurality of clustering types and corresponding clustering centers thereof, wherein each clustering type is an image set with similar characteristics.
Since the external scene of the tunnel does not change greatly in a short time, but the natural environment is complex, in the image acquisition process, after clustering is performed according to the difference between the characteristic binary groups of the external images of the tunnel, the external images of the tunnel may correspond to a plurality of clustering categories, and only the external images of the tunnel nearest to the tunnel portal need to be analyzed, so that the categories corresponding to the internal images and the external images nearest to the tunnel portal need to be screened out.
And calculating Euclidean distance between the two tuples corresponding to the centers of the two classes, wherein the two adjacent classes with the maximum Euclidean distance are used as an inner and outer group, the class corresponding to the tuple with lower texture complexity in the inner and outer groups is used as a first class, and the class corresponding to the tuple with higher texture complexity is used as a second class.
Due to the fact that the environment inside the tunnel is simple, the texture complexity inside the tunnel is low, and images inside the tunnel are highly similar, and therefore the texture complexity and the gray level change in the images inside the tunnel are not large in difference; the natural environment outside the tunnel is complex, so the texture complexity is high, in addition, the change of the environment outside the tunnel is small in a short time, so the image outside the tunnel has high similarity, namely, the texture complexity and the gray level change in the image outside the tunnel are not very different in a short time, therefore, after clustering, a group of adjacent classes with the maximum Euclidean distance between the class centers of the adjacent classes is selected as an inner and outer group, although the large inner environment is single and the complexity is low, the class corresponding to the binary group with low texture complexity in the inner and outer groups is taken as a first class, and the class corresponding to the binary group with high texture complexity is taken as a second class.
The first category is marked as category A, and the corresponding binary group of the clustering center is
Figure 769921DEST_PATH_IMAGE011
The second class is marked as class B, and the corresponding two-tuple of the clustering center is
Figure 921679DEST_PATH_IMAGE012
Because the junction of the tunnel is changed continuously, the distribution of the characteristic binary groups corresponding to the images acquired in the period is scattered, clustering categories cannot be formed, and the images are continuously classifiedMIn the frame gray level image, the image set corresponding to the first category is a multi-frame tunnel internal image, the image set corresponding to the second category is a multi-frame tunnel external image which is most adjacent to the tunnel opening, and therefore, the continuous images between the first category and the second categoryNThe frame image is a tunnel boundary image, and the tunnel boundary image includes a tunnel interior and a tunnel portal.
It should be noted that the tunnel boundary image is a discrete point that cannot form a cluster type, and during the driving process, there may be a case where an individual image is abnormal, for example, a high beam driving opposite in a tunnel has a relatively large brightness at a certain moment, and a discrete point is formed, and a noise point is formed, so that the discrete point cannot be directly selected as the tunnel boundary image.
3. And acquiring the local uniformity according to the occurrence frequency of each pixel pair in the gray level co-occurrence matrix and the pixel difference of the pixel pair.
To a first order
Figure 407018DEST_PATH_IMAGE013
Taking a tunnel boundary image as an example, according to the gray level co-occurrence matrix corresponding to the image, calculating the local uniformity degree of the matrix
Figure 701733DEST_PATH_IMAGE014
Figure 129172DEST_PATH_IMAGE015
When the probability of occurrence of a pixel pair with a small difference in gray scale value in an image is higher, the more uniform the gray scale distribution in a small area is represented, that is, the degree of local uniformity
Figure 93193DEST_PATH_IMAGE014
The larger the value of (b) is, the more uniformly distributed the local area of the image is.
When the tunnel opening area is too bright, a white hole effect occurs, and besides enabling the whole tunnel opening area to be brighter, the white hole effect enables the tunnel inner area in the image to be too dark and texture information to be lost, namely the gray value change of the tunnel opening area is smaller, the gray value change in the tunnel inner area is also smaller, namely when the white hole effect occurs in the image, the gray value change of the local area of the image is more uniform.
Therefore, the larger the local uniformity, the stronger the white hole effect, and when the white hole effect in an image is more serious, the higher the possibility of potential safety hazard exists, the poorer the quality of the image.
Step S003, performing self-adaptive segmentation on each tunnel boundary image to obtain a tunnel mouth area and a tunnel internal area in the tunnel boundary image; acquiring the texture complexity of the region of the tunnel mouth and the region inside the tunnel; calculating a first area proportion occupied by the area of the tunnel port; and obtaining the image quality of the tunnel boundary image according to the local uniformity degree, the first area proportion and the area texture complexity.
The method comprises the following specific steps:
1. and performing self-adaptive segmentation on each tunnel boundary image to obtain a tunnel mouth area and a tunnel inner area in the tunnel boundary image.
And performing self-adaptive threshold segmentation on each tunnel boundary image to obtain a tunnel entrance region, wherein the rest regions are tunnel internal regions, so that the tunnel entrance region and the tunnel internal regions at the tunnel boundary are divided.
2. And acquiring the texture complexity of the areas of the tunnel mouth and the inner area of the tunnel, and further acquiring the first difference and the second difference.
And calculating a first difference between the texture complexity of the first region and the texture complexity corresponding to the class center of the first class by taking the region texture complexity of the inner region of the tunnel as the first region texture complexity.
And acquiring a gray level co-occurrence matrix corresponding to a tunnel internal area in the tunnel boundary image, and calculating the gray level entropy of the gray level co-occurrence matrix of the area to obtain the area texture complexity of the tunnel internal area as the first area texture complexity.
Calculating texture complexity of a first region
Figure 647802DEST_PATH_IMAGE016
A first difference between texture complexities corresponding to class centers of a first class
Figure 423122DEST_PATH_IMAGE017
And calculating a second difference between the texture complexity of the second region and the texture complexity corresponding to the class center of the second class by taking the region texture complexity of the tunnel portal region as the second region texture complexity.
And acquiring a gray level co-occurrence matrix corresponding to a tunnel portal area in the tunnel boundary image, and calculating the gray level entropy of the gray level co-occurrence matrix of the area to obtain the area texture complexity of the tunnel portal area as the second area texture complexity.
Calculating a second region texture complexityDegree of rotation
Figure 896829DEST_PATH_IMAGE018
A second difference between the complexity of the texture corresponding to the class center of the second class
Figure 928239DEST_PATH_IMAGE019
3. And acquiring the image quality of the tunnel boundary image.
And acquiring a second area proportion occupied by the tunnel inner area based on the first area proportion occupied by the tunnel opening area, and obtaining the image quality of the tunnel boundary image according to the first area proportion, the first difference, the second area proportion, the second difference and the local uniformity degree.
Also in the first place
Figure 286539DEST_PATH_IMAGE013
Taking a tunnel boundary image as an example, the specific calculation formula is as follows:
Figure 555846DEST_PATH_IMAGE020
wherein the content of the first and second substances,
Figure 112640DEST_PATH_IMAGE021
denotes the first
Figure 709974DEST_PATH_IMAGE013
The image quality of the image bordering the sheet tunnel,
Figure 855654DEST_PATH_IMAGE014
is shown as
Figure 448309DEST_PATH_IMAGE013
The degree of local uniformity of the sheet tunnel boundary image,
Figure 562021DEST_PATH_IMAGE022
the area of the tunnel portal area is indicated,
Figure 895919DEST_PATH_IMAGE023
is shown as
Figure 330443DEST_PATH_IMAGE013
The area of the image bordering the sheet tunnel,
Figure 308763DEST_PATH_IMAGE024
representing a first proportion of the area occupied by the tunnel portal area,
Figure 918342DEST_PATH_IMAGE025
representing a second proportion of the area occupied by the inner region of the tunnel,
Figure 755848DEST_PATH_IMAGE026
the natural logarithm with the natural constant e as the base number is shown, that is, the preset value in the embodiment of the present invention is the natural constant e.
In other embodiments, the preset value may also be other natural numbers greater than 1.
First area ratio
Figure 977751DEST_PATH_IMAGE024
Indicating the degree of influence of the tunnel portal region on the overall image quality.
When the vehicle-mounted camera is closer to the tunnel portal, the texture difference between the internal area of the tunnel boundary image and the internal image of the tunnel is larger, and the first difference is larger; the more similar the texture of the tunnel mouth region and the outside of the tunnel, the smaller the second difference; while the larger the first area proportion, the smaller the second area proportion.
As the negative exponent of the natural constant e, the closer to 0, the larger the exponential function result is, and the faster the increase speed is; the closer to 1, the smaller the exponential function result, the smaller the reduction speed, and therefore, when the second difference is smaller, the larger the obtained function result, the larger the corresponding coefficient,
Figure 810578DEST_PATH_IMAGE021
the larger the value, the better the image quality.
Similarly, the farther away from the tunnelWhen crossing, the smaller the first difference is, the larger the second area proportion corresponding to the first difference is, the obtained
Figure 108835DEST_PATH_IMAGE027
The greater the value of (a) is,
Figure 918791DEST_PATH_IMAGE021
the larger the value, the better the image quality.
When the vehicle-mounted camera is positioned in the middle of the junction of the tunnel, namely the moment when the white hole effect occurs, the first difference and the second difference are both larger, the corresponding coefficients are closer,
Figure 819751DEST_PATH_IMAGE027
and
Figure 382450DEST_PATH_IMAGE028
all have smaller values to obtain the final
Figure 835297DEST_PATH_IMAGE021
The smaller the value, the worse the image quality.
And step S004, evaluating the imaging quality of the vehicle-mounted camera according to the image quality difference and the local uniformity difference of the boundary image of the adjacent frame tunnel and the corresponding frame number.
Specifically, a specific formula for evaluating the imaging quality of the vehicle-mounted camera is as follows:
Figure 647395DEST_PATH_IMAGE029
wherein Q represents the imaging quality of the vehicle-mounted camera,
Figure 240794DEST_PATH_IMAGE013
indicating the number of frames of the tunnel boundary image.
The image quality obtained in the step S003 is the objective image quality of each image, and for each vehicle-mounted camera, the larger the white hole effect is, the poorer the quality of the acquired image is, which is certain, but the vehicle-mounted camera with good imaging quality can adapt to light change as soon as possible to adjust the imaging quality.
For the process of calculating the imaging quality Q of the in-vehicle camera,
Figure 782634DEST_PATH_IMAGE013
the smaller the value of (c), the earlier the acquisition time is indicated,
Figure 422694DEST_PATH_IMAGE030
the rate of disappearance of the white hole effect is shown, and since the white hole effect is a gradual process of disappearance, it corresponds to
Figure 49984DEST_PATH_IMAGE014
Is gradually reduced, and the change to the direction of reducing the local uniformity degree is faster when the local uniformity degree in the adjacent frame images is shorter, namely, when the local uniformity degree is reduced
Figure 417381DEST_PATH_IMAGE013
The smaller the image is, the larger the local uniformity degree in the adjacent frame images is reduced, the higher the adjusting speed of the corresponding vehicle-mounted camera on light change is, the larger the imaging quality Q is, and the better the imaging quality is; when the white hole effect disappears, the local uniformity degree of the image is smaller, and the image quality is gradually increased until the image quality is stable, so that the local uniformity degree can be changed to the direction of better image quality at the earlier moment, namely at the moment
Figure 954672DEST_PATH_IMAGE013
When the image quality of the vehicle-mounted camera is smaller, the image quality in the adjacent frame images is increased more, the imaging quality Q is larger, and the imaging quality of the vehicle-mounted camera is better.
In summary, in the embodiment of the invention, the running process of the vehicle leaving the tunnel is taken as the test process, and the continuous collected by the vehicle-mounted camera in the test process is obtainedMImaging the image frame and converting the image frame into a corresponding gray image; obtaining the texture complexity of each gray level image, and screening out continuous images according to the gray level mean value and the texture complexity of the gray level imageNFrame tunnel boundary images; obtaining boundary graph of each tunnelLocal degree of homogeneity of the image; performing self-adaptive segmentation on each tunnel boundary image to obtain a tunnel mouth area and a tunnel internal area in the tunnel boundary image; acquiring the texture complexity of the region of the tunnel mouth and the region inside the tunnel; calculating a first area proportion occupied by the area of the tunnel port; acquiring the image quality of a tunnel boundary image according to the local uniformity degree, the first area proportion and the area texture complexity; and evaluating the imaging quality of the vehicle-mounted camera according to the image quality difference and the local uniformity difference of the boundary images of the adjacent frame tunnels and the corresponding frame number. The embodiment of the invention can evaluate the adaptive capacity of the vehicle-mounted camera under the condition of obvious light change, avoids the influence of an objective environment and improves the evaluation accuracy.
It should be noted that: the sequence of the above embodiments of the present invention is only for description, and does not represent the advantages or disadvantages of the embodiments. And specific embodiments thereof have been described above. Other embodiments are within the scope of the following claims. In some cases, the actions or steps recited in the claims may be performed in a different order than in the embodiments and still achieve desirable results. In addition, the processes depicted in the accompanying figures do not necessarily require the particular order shown, or sequential order, to achieve desirable results. In some embodiments, multitasking and parallel processing may also be possible or may be advantageous.
The embodiments in the present specification are described in a progressive manner, and the same and similar parts among the embodiments are referred to each other, and each embodiment focuses on the differences from the other embodiments.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; the modifications or substitutions do not make the essence of the corresponding technical solutions deviate from the technical solutions of the embodiments of the present application, and are included in the protection scope of the present application.

Claims (4)

1. The method for detecting the imaging quality of the vehicle-mounted camera is characterized by comprising the following steps:
taking the advancing process of the vehicle leaving the tunnel as a test process to acquire the continuity acquired by the vehicle-mounted camera in the test processMImaging the image frame and converting the image frame into a corresponding gray image;Mis a positive integer;
obtaining the texture complexity of each gray level image, and screening out continuous images according to the gray level mean value of the gray level image and the texture complexityNThe frame-tunnel boundary image is displayed,
Figure 772316DEST_PATH_IMAGE001
and is andNis a positive integer; acquiring the local uniformity degree of each tunnel boundary image;
performing self-adaptive segmentation on each tunnel boundary image to obtain a tunnel mouth area and a tunnel inner area in the tunnel boundary image; acquiring the texture complexity of the region of the tunnel mouth and the region inside the tunnel; calculating a first area proportion occupied by a tunnel mouth region; acquiring the image quality of a tunnel boundary image according to the local uniformity degree, the first area proportion and the area texture complexity;
evaluating the imaging quality of the vehicle-mounted camera according to the image quality difference and the local uniformity difference of the boundary images of the adjacent frame tunnels and the corresponding frame number;
the texture complexity obtaining method comprises the following steps:
acquiring a gray level co-occurrence matrix of each gray level image, calculating an entropy value of a gray level entropy of the gray level co-occurrence matrix, and taking a normalization result of the entropy value as the texture complexity;
the method for acquiring the local uniformity comprises the following steps:
and acquiring the local uniformity degree according to the occurrence frequency of each pixel pair in the gray level co-occurrence matrix and the pixel difference of the pixel pair.
2. The imaging quality detection method of the vehicle-mounted camera according to claim 1, wherein the screening process of the tunnel boundary image is as follows:
forming feature binary groups by the gray level mean value and the texture complexity, clustering all the feature binary groups to obtain a first category representing images inside the tunnel and a second category representing images outside the tunnel, and continuously classifying the first category and the second categoryNThe frame image is used as a tunnel boundary image.
3. The imaging quality detection method for the vehicle-mounted camera according to claim 2, wherein the first and second categories of obtaining methods are as follows:
and calculating Euclidean distance between the two tuples corresponding to the centers of the two classes for the two adjacent classes, taking the two adjacent classes with the maximum Euclidean distance as an inner group and an outer group, taking the class corresponding to the tuple with lower texture complexity in the inner group and the outer group as a first class, and taking the class corresponding to the tuple with higher texture complexity as a second class.
4. The imaging quality detection method of the vehicle-mounted camera according to claim 2, wherein the image quality of the tunnel boundary image is obtained by the following steps:
taking the region texture complexity of the inner region of the tunnel as a first region texture complexity, and calculating a first difference between the first region texture complexity and the texture complexity corresponding to the class center of the first class;
taking the region texture complexity of the tunnel portal region as a second region texture complexity, and calculating a second difference between the second region texture complexity and the texture complexity corresponding to the class center of the second class;
and acquiring a second area proportion occupied by the tunnel inner area based on the first area proportion occupied by the tunnel opening area, and obtaining the image quality of the tunnel boundary image according to the first area proportion, the first difference, the second area proportion, the second difference and the local uniformity degree.
CN202210752630.9A 2022-06-30 2022-06-30 Imaging quality detection method for vehicle-mounted camera Active CN114820623B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210752630.9A CN114820623B (en) 2022-06-30 2022-06-30 Imaging quality detection method for vehicle-mounted camera

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210752630.9A CN114820623B (en) 2022-06-30 2022-06-30 Imaging quality detection method for vehicle-mounted camera

Publications (2)

Publication Number Publication Date
CN114820623A CN114820623A (en) 2022-07-29
CN114820623B true CN114820623B (en) 2022-09-09

Family

ID=82523096

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210752630.9A Active CN114820623B (en) 2022-06-30 2022-06-30 Imaging quality detection method for vehicle-mounted camera

Country Status (1)

Country Link
CN (1) CN114820623B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115379208B (en) * 2022-10-19 2023-03-31 荣耀终端有限公司 Camera evaluation method and device
CN116030633B (en) * 2023-02-21 2023-06-02 天津汉云工业互联网有限公司 Vehicle tunnel early warning method and device
CN116668667B (en) * 2023-04-11 2024-06-14 深圳市龙之源科技股份有限公司 PIR recovery time testing device

Family Cites Families (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110457781B (en) * 2019-07-24 2022-12-23 中南大学 Passenger comfort-oriented train tunnel-passing time length calculation method
CN112468802A (en) * 2020-11-04 2021-03-09 安徽江淮汽车集团股份有限公司 Camera homogeneity test auxiliary device
CN112581440B (en) * 2020-12-10 2023-07-14 合肥英睿系统技术有限公司 Method and device for maintaining image quality of vehicle-mounted camera and vehicle-mounted camera
CN114785960B (en) * 2022-06-16 2022-09-02 鹰驾科技(深圳)有限公司 360 degree panorama vehicle event data recorder system based on wireless transmission technology

Also Published As

Publication number Publication date
CN114820623A (en) 2022-07-29

Similar Documents

Publication Publication Date Title
CN114820623B (en) Imaging quality detection method for vehicle-mounted camera
CN112132156B (en) Image saliency target detection method and system based on multi-depth feature fusion
US9384401B2 (en) Method for fog detection
US6690011B2 (en) Infrared image-processing apparatus
JP4942510B2 (en) Vehicle image recognition apparatus and method
US11700457B2 (en) Flicker mitigation via image signal processing
CN114820773B (en) Silo transport vehicle carriage position detection method based on computer vision
DE4410064A1 (en) Method and system for distance recognition
CN112731436B (en) Multi-mode data fusion travelable region detection method based on point cloud up-sampling
US20200125869A1 (en) Vehicle detecting method, nighttime vehicle detecting method based on dynamic light intensity and system thereof
CN108830131B (en) Deep learning-based traffic target detection and ranging method
CN109318799B (en) Automobile, automobile ADAS system and control method thereof
CN115661669A (en) Method and system for monitoring illegal farmland occupancy based on video monitoring
CN113011255B (en) Road surface detection method and system based on RGB image and intelligent terminal
CN116342440A (en) Vehicle-mounted video monitoring management system based on artificial intelligence
CN107564041B (en) Method for detecting visible light image aerial moving target
US5878163A (en) Likelihood-based threshold selection for imaging target trackers
CN115760826A (en) Bearing wear condition diagnosis method based on image processing
CN113674231B (en) Method and system for detecting iron scale in rolling process based on image enhancement
CN111242051B (en) Vehicle identification optimization method, device and storage medium
CN114882387B (en) Bearing raceway bruise identification and automatic polishing positioning method in grinding process
CN116469061A (en) Highway obstacle detection and recognition method
CN112949423B (en) Object recognition method, object recognition device and robot
CN113139488B (en) Method and device for training segmented neural network
CN109190577B (en) Image signal detection method for taxi passenger seat abnormity by combining headrest and human face characteristics

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant
PE01 Entry into force of the registration of the contract for pledge of patent right

Denomination of invention: A Method for Detecting the Imaging Quality of Car Cameras

Effective date of registration: 20230323

Granted publication date: 20220909

Pledgee: China Construction Bank Corporation Weishan sub branch

Pledgor: Luran Optoelectronics (Weishan) Co.,Ltd.

Registration number: Y2023980036085

PE01 Entry into force of the registration of the contract for pledge of patent right
PC01 Cancellation of the registration of the contract for pledge of patent right

Granted publication date: 20220909

Pledgee: China Construction Bank Corporation Weishan sub branch

Pledgor: Luran Optoelectronics (Weishan) Co.,Ltd.

Registration number: Y2023980036085

PC01 Cancellation of the registration of the contract for pledge of patent right