CN117893755A - Camera shielding detection method, vehicle-mounted system, vehicle and storage medium

Info

Publication number: CN117893755A
Application number: CN202311760196.XA
Authority: CN (China)
Prior art keywords: image, determining, visual perception, vehicle, gray
Legal status: Pending (assumed; not a legal conclusion)
Other languages: Chinese (zh)
Inventors: 周亚振, 余士超, 陈豪, 许成军
Current assignee: Zhejiang Zero Run Technology Co Ltd
Original assignee: Zhejiang Zero Run Technology Co Ltd
Application filed by Zhejiang Zero Run Technology Co Ltd; priority to CN202311760196.XA
Abstract

The invention relates to the technical field of camera detection, and in particular to a camera occlusion detection method, a vehicle-mounted system, a vehicle and a storage medium. A visual perception image is acquired, a region of interest is determined in it, and subsequent processing is performed on that region; since only the region carrying target object information needs to be computed, the amount of computation over the whole visual perception image is reduced. The gray average of the region of interest is then calculated, and when the gray average falls outside a gray average threshold range, the video image is determined to be in an occlusion state. With this design, the occlusion state of the video image can be judged accurately with little computation, so that assisted driving is prevented from continuing to control the vehicle while a vehicle camera is occluded, reducing the driving risk.

Description

Camera shielding detection method, vehicle-mounted system, vehicle and storage medium
Technical Field
The invention relates to the technical field of camera detection, in particular to a detection method for camera shielding, a vehicle-mounted system, a vehicle and a storage medium.
Background
Assisted driving senses the environmental information around and in front of the vehicle body through multiple cameras on the vehicle and thereby realizes driving control of the vehicle. However, when a camera on the vehicle is blocked, the vehicle cannot sense the surrounding environment; if the vehicle continues to be controlled in this state, the driving risk rises. Detecting whether a camera on the vehicle is blocked is therefore a technical problem to be solved.
Disclosure of Invention
The invention aims to provide a camera occlusion detection method, a vehicle-mounted system, a vehicle and a storage medium that can detect whether a camera on the vehicle is occluded.
The application provides a detection method for camera shielding, which comprises the following steps: the method comprises the steps of obtaining a visual perception image, wherein the visual perception image is obtained by processing a video image acquired by a camera, and the visual perception image is used for representing target object information in the video image; determining a region of interest in the visual perception image; determining a gray average value of the region of interest; and when the gray average value is out of the gray average value threshold range, determining that the video image is in an occlusion state.
In an exemplary embodiment of the present application, vanishing point coordinates in the visually perceived image are obtained, and an upper boundary of the visually perceived image is determined according to the vanishing point coordinates; taking a line which passes through the upper end of an ineffective area in parallel along the X axis direction in the visual perception image as a lower boundary, wherein the ineffective area comprises an area shielded by a vehicle engine cover; and determining a region of interest according to the upper boundary and the lower boundary.
In an exemplary embodiment of the present application, the step of obtaining vanishing point coordinates in the visually-perceived image and determining an upper boundary of the visually-perceived image according to the vanishing point coordinates includes: when a target object exists in the visual perception image, acquiring vanishing point coordinates of different target objects, obtaining a coordinate mean value according to the vanishing point coordinates of a plurality of target objects, and taking a line which passes through the coordinate mean value in parallel along the X axis as an upper boundary; when no target object exists in the visual perception image, presetting vanishing point coordinates, and taking a line parallel to the X axis and passing through the target object vanishing point coordinates as an upper boundary after upwards translating.
In an exemplary embodiment of the present application, after the step of determining that the video image is in an occlusion state when the gray average value is outside the gray average value threshold range, the method includes: when the gray average value is within the gray average value threshold range, calculating a gray gradient of the region of interest; and when the gray gradient is smaller than a gray gradient threshold, determining that the video image is in an occlusion state.
In an exemplary embodiment of the present application, after the step of determining that the video image is in an occlusion state when the gray gradient is less than the gray gradient threshold, the method includes: acquiring the visual perception images within a period of time; when the number of non-occlusion states among the plurality of visual perception images is smaller than the number of occlusion states, determining that the visual perception images in the non-occlusion state are invalid visual perception images, and determining that the visual perception images in the period are in the occlusion state; and when the number of occlusion states among the plurality of visual perception images is smaller than the number of non-occlusion states, determining that the visual perception images in the occlusion state are invalid visual perception images, and determining that the visual perception images in the period are in the non-occlusion state.
In an exemplary embodiment of the present application, after determining the region of interest in the visual perception image, the method further includes: calculating a gray standard deviation value of the region of interest; when the gray average value is within the gray average value threshold range, calculating a gray gradient of the region of interest; and when the gray gradient is smaller than a gray gradient threshold and the standard deviation value is smaller than a standard deviation threshold, determining that the video image is in an occlusion state.
In an exemplary embodiment of the present application, the step of determining a gray average value of the region of interest includes: acquiring the number of pixels of the region of interest; determining the gray value of each pixel; and obtaining the gray average value from these gray values.
In an exemplary embodiment of the present application, the step of acquiring a visual perception image includes: acquiring the target object categories in the video image collected by the camera to obtain the number of target objects in each category; and when the number of the target objects in different categories is greater than or equal to the corresponding preset threshold, determining a region of interest in the visual perception image.
In an exemplary embodiment of the present application, the step before the step of acquiring the visual perception image includes: acquiring the current state of the vehicle and the starting state of the auxiliary driving function; when the current state of the vehicle is a running state and the auxiliary driving function is a starting state, a visual perception image of the camera is acquired.
The application also provides a vehicle-mounted system for executing the detection method of camera shielding, the vehicle-mounted system comprises: the acquisition module is used for acquiring a visual perception image, wherein the visual perception image is obtained by processing a video image acquired by a camera, and the visual perception image is used for representing target object information in the video image; the region selection module is used for determining a region of interest in the visual perception image; the calculation module is used for determining the gray average value of the region of interest; and the comparison module is used for comparing the gray average value with an average value threshold range, and determining that the video image is in a shielding state when the gray average value is out of the average value threshold range.
The application also provides a vehicle, which comprises the vehicle-mounted system.
The present application also provides a computer readable storage medium having stored thereon program instructions which, when executed by a processor, implement the above-described method.
The camera occlusion detection method, vehicle-mounted system, vehicle and storage medium of the application have the following beneficial effects: a visual perception image is acquired, a region of interest is determined in it for subsequent processing, and only the region carrying target object information is computed, reducing the amount of computation over the whole visual perception image. The gray average of the region of interest is then calculated, and when the gray average is outside the gray average threshold range the video image is determined to be in an occlusion state. With this design, the occlusion state of the video image can be judged accurately with little computation, so that assisted driving is prevented from continuing to control the vehicle while a vehicle camera is occluded, reducing the running danger of the vehicle.
Other features and advantages of the present application will be apparent from the following detailed description, or may be learned in part by the practice of the application.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the application.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the application and together with the description, serve to explain the principles of the application. It is apparent that the drawings in the following description are only some embodiments of the present application, and that other drawings may be obtained from these drawings without inventive effort for a person of ordinary skill in the art.
FIG. 1 is a flow chart of a camera occlusion detection method according to an embodiment of the invention;
FIG. 2 is a flowchart illustrating an embodiment of step S400 in FIG. 1;
FIG. 3 is a schematic illustration of the upper and lower boundaries of the video image of FIG. 2;
FIG. 4 is a flowchart illustrating an embodiment of step S410 in FIG. 2;
FIG. 5 is a flowchart illustrating an embodiment of step S600 in FIG. 1;
FIG. 6 is a flowchart illustrating an embodiment after the step S600 in FIG. 1;
FIG. 7 is a flowchart illustrating another embodiment after the step S400 in FIG. 1;
FIG. 8 is a flowchart illustrating an embodiment of the step S100 in FIG. 1;
FIG. 9 is a schematic diagram of a vehicle-mounted system according to an embodiment of the present invention;
FIG. 10 is a schematic view of a vehicle according to an embodiment of the present invention;
FIG. 11 is a schematic diagram of a storage medium according to an embodiment of the present invention.
Detailed Description
Example embodiments will now be described more fully with reference to the accompanying drawings. However, the exemplary embodiments may be embodied in many forms and should not be construed as limited to the examples set forth herein; rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the concept of the example embodiments to those skilled in the art.
Furthermore, the described features, structures, or characteristics may be combined in any suitable manner in one or more embodiments. In the following description, numerous specific details are provided to give a thorough understanding of embodiments of the present application. One skilled in the relevant art will recognize, however, that the aspects of the application can be practiced without one or more of the specific details, or with other methods, components, devices, steps, etc. In other instances, well-known methods, devices, implementations, or operations are not shown or described in detail to avoid obscuring aspects of the application.
The present application is further described in detail below with reference to the drawings and specific examples. It should be noted that the technical features of the embodiments of the present application described below may be combined with each other as long as they do not collide with each other. The embodiments described below by referring to the drawings are exemplary and intended for the purpose of explaining the present application and are not to be construed as limiting the present application.
It should be noted that: references herein to "a plurality" mean two or more. "And/or" describes an association relationship between associated objects, meaning that three relationships may exist; for example, A and/or B may represent: A exists alone, A and B exist together, or B exists alone. The character "/" generally indicates that the associated objects before and after it are in an "or" relationship.
The auxiliary driving system senses the environmental information around and in front of the vehicle body through multiple cameras on the vehicle and thereby achieves driving control of the vehicle. However, when a camera on the vehicle is blocked, the vehicle cannot sense the surrounding environment; if the vehicle continues to be controlled in this state, the driving risk rises, so detecting whether a camera on the vehicle is blocked is a technical problem to be solved.
The auxiliary driving system uses various sensors arranged on the vehicle (millimeter-wave radar, laser radar, monocular/binocular cameras and satellite navigation) to sense the surrounding environment at all times while the vehicle is running, collects data, performs identification, detection and tracking of static and dynamic objects, and performs systematic computation and analysis in combination with navigation map data, enabling the driver to perceive possible danger in advance and effectively increasing the comfort and safety of driving. The monocular/binocular camera belongs to the camera category.
To solve the above technical problems, the present application provides a method for detecting camera occlusion. As shown in fig. 1, fig. 1 is a flow chart of a camera occlusion detection method in an embodiment of the present invention; specifically, the method includes the following steps S300 to S600.
Step S300: and acquiring a visual perception image, wherein the visual perception image is obtained by processing a video image acquired by a camera, and the visual perception image is used for representing target object information in the video image.
The vehicle-mounted system can analyze the visual perception image after acquiring it. The visual perception image is obtained by processing the video image collected by the camera, and the target objects include obstacles, lane lines, street lamps, traffic signs, traffic lights and the like. Target objects may also include drivable areas, trees, flower beds and the like; the shielding or non-shielding state is judged by analyzing a single target object or a combination of several target objects.
Step S400: a region of interest is determined in the visual perception image.
The in-vehicle system determines a region of interest in the visual perception image. The region of interest may be a point, a line, or a regular or irregular surface, and serves as a sample for image classification, masking, cropping or other operations. Here it mainly serves as the sample over which the gray average, gray gradient, gray standard deviation and the like are computed.
Step S500: a gray-scale average of the region of interest is determined.
The vehicle-mounted system determines whether the video image is in a shielding state or a non-shielding state according to the gray average value of the region of interest, wherein the shielding state mainly means that the video image collected by the camera cannot show any one or more target objects, and the non-shielding state means that the video image collected by the camera can show any one or more target objects. Wherein whether the target object is present is determined based on a comparison with some a priori knowledge.
The gray average is obtained by the vehicle-mounted system acquiring the number of pixels of the region of interest, determining the gray value of each pixel, and then averaging. The gray average satisfies the following formula:

mean = (1/n) · Σ_{i=1}^{n} pix_i

where mean represents the gray average, n represents the number of pixels in the region of interest, i denotes the i-th pixel of the region of interest, and pix_i represents the pixel value of the i-th pixel. Since n, i and pix_i are known values, the gray average is obtained by the above equation.
Step S600: and when the gray average value is out of the gray average value threshold range, determining that the video image is in an occlusion state.
The gray mean threshold range is a numerical range selected according to prior knowledge. Optionally, the gray mean threshold range is (Tminmean, Tmaxmean), where Tminmean represents the minimum threshold and Tmaxmean the maximum threshold; for example, the minimum threshold Tminmean is 30 and the maximum threshold Tmaxmean is 220. When the gray average is outside this range, that is, less than 30 or greater than 220, the video image is likely over-exposed or the camera is completely covered, and the camera's output result is the shielding state. When the camera is in the shielding state, the vehicle's auxiliary driving system stops controlling the vehicle, reducing the driving risk.
In this embodiment, the vehicle-mounted system processes the video image collected by the camera to obtain a visual perception image, determines a region of interest in the visual perception image as the sample over which the gray average, gray gradient, gray standard deviation and the like are computed, and judges whether the video image is in the shielding state or the non-shielding state according to the gray average of the region of interest. When the gray average is outside the gray mean threshold range, the video image is likely over-exposed or the camera is completely covered, and the camera's output result is the shielding state. When the camera is in the shielding state, the vehicle's auxiliary driving system stops controlling the vehicle, reducing the driving risk.
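A minimal sketch of steps S300 to S600 follows, assuming the region of interest is already cropped to an 8-bit grayscale NumPy array; the function and constant names are illustrative, not from the patent:

```python
import numpy as np

T_MIN_MEAN, T_MAX_MEAN = 30, 220  # example gray-mean threshold range from the text

def occluded_by_gray_mean(gray_roi: np.ndarray) -> bool:
    # mean = (1/n) * sum(pix_i) over the n pixels of the region of interest
    mean = float(gray_roi.mean())
    # Outside the threshold range: likely over-exposed or fully covered lens
    return mean < T_MIN_MEAN or mean > T_MAX_MEAN
```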
Referring to fig. 1, before step S300 of acquiring the visual perception image, which is obtained by processing the video image collected by the camera and represents the target object information in the video image, the method further includes step S100 and step S200.
Step S100: the current state of the vehicle and the start state of the auxiliary driving function are acquired.
The current state of the vehicle includes a running state and a stopped state, and the activated state of the auxiliary driving function includes the auxiliary driving function of the vehicle being in an activated state and an inactivated state.
Step S200: when the current state of the vehicle is a running state and the auxiliary driving function is a starting state, a visual perception image of the camera is acquired.
When the vehicle is in a stopped state there is no running risk, so the method in this application does not need to run detection. When the auxiliary driving function of the vehicle is inactive, the vehicle is being driven manually and assisted driving must not interfere with manual driving; the driving risk is then controlled manually, i.e., manual driving has a higher priority than assisted driving. Thus, if the current state of the vehicle is a stopped state and/or the auxiliary driving function is inactive, there is no need to detect whether the camera system is blocked. When the current state of the vehicle is the running state and the auxiliary driving function is started, a visual perception image of the camera is acquired to perform shielding detection of the camera system.
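The gating of steps S100 and S200 reduces to a trivial sketch (function and parameter names are illustrative):

```python
def should_detect(vehicle_running: bool, assist_enabled: bool) -> bool:
    # Occlusion detection only runs while the vehicle is in the running state
    # and the auxiliary driving function has been started.
    return vehicle_running and assist_enabled
```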
Referring to fig. 2, fig. 2 is a flowchart illustrating an embodiment of step S400 in fig. 1, where step S400 determines a region of interest in a visual perception image, and specifically includes the following steps S410 to S430.
Step S410: and acquiring vanishing point coordinates in the visual perception image, and determining the upper boundary of the visual perception image according to the vanishing point coordinates.
Owing to the perspective projection of the camera, parallel lines in the scene intersect at a single point in the image; this intersection point, which corresponds to infinity, is the vanishing point. It can be calculated from different sets of parallel lines, most simply from the lane lines on the ground, and usually lies near the center of the image field of view. Determining the upper boundary of the visual perception image from the vanishing point coordinates keeps the area of the region of interest large and ensures the accuracy of the computation over the region of interest.
Step S420: a line parallel to the X-axis passing through the upper end of an ineffective area including an area blocked by the vehicle hood is taken as a lower boundary in the visual perception image.
As shown in fig. 3, a front-facing camera may capture the vehicle hood while collecting video images, and the hood region is segmented from the visual perception image as a boundary area. A line passing through the upper end of this invalid region, parallel to the X-axis direction, is taken as the lower boundary, and the segmented boundary area is treated as invalid, which reduces the amount of the visual perception image that must be analyzed. In some embodiments, the choice of invalid area also depends on the actual situation: a side camera may need to treat part of the vehicle door as the invalid area, and a rear camera may need to treat part of the tailgate as the invalid area.
Step S430: the region of interest is determined from the upper and lower boundaries.
The vehicle-mounted system determines the upper boundary of the visual perception image from the vanishing point coordinates and takes a line passing through the upper end of the invalid region, parallel to the X-axis, as the lower boundary; excluding the area outside these two boundaries reduces the amount of the visual perception image to analyze, so the shielding state of the camera system can be reflected more quickly.
Referring to fig. 4, fig. 4 is a flowchart illustrating an embodiment of step S410 in fig. 2, where step S410 obtains vanishing point coordinates in the visual perception image, and determines an upper boundary of the visual perception image according to the vanishing point coordinates, and specifically includes the following steps S411 and S412.
The visual perception image may or may not contain target objects; steps S411 and S412 describe these two cases respectively. When target objects are present in the visual perception image, the target objects themselves are analyzed; when none are present, the vanishing point coordinates preset in the vehicle-mounted system, obtained from prior knowledge, are analyzed instead.
Step S411: when the target objects exist in the visual perception image, vanishing point coordinates of different target objects are obtained, a coordinate mean value is obtained according to the vanishing point coordinates of a plurality of target objects, and a line which passes through the coordinate mean value in parallel along the X axis is taken as an upper boundary.
When target objects are present in the visual perception image, the processing is as described in step S410 above, and the details are not repeated here.
Step S412: when no target object exists in the visual perception image, the vanishing point coordinates are preset, and a line passing through the target object vanishing point coordinates in parallel along the X axis is translated upwards to serve as an upper boundary.
When no target object exists in the visual perception image, the preset vanishing point coordinates are virtual coordinates derived from prior knowledge; the point does not physically exist but can serve as an analysis object, giving the subsequent analysis a usable reference. The region of interest selected after translating upward a line that passes through the vanishing point coordinates parallel to the X-axis therefore also has good reference value. Specifically, let the preset vanishing point be p_vanish(x, y); the upper boundary of the region of interest is then y − 0.25 × height, where height is the height of the current camera resolution. The lower boundary of the region of interest is the preset boundary of the blocked invalid region (such as the region where the camera is blocked by the hood); since the mounting position of the camera relative to the vehicle is essentially fixed, this value can be obtained from the visual perception results of a large amount of offline data. Denoting that boundary y_invalid, the upper and lower boundaries of the region of interest are (y − 0.25 × height, y_invalid) respectively.
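A sketch of the boundary selection in steps S411 and S412 follows, assuming vanishing points are given by their pixel-row ordinates; the function and parameter names are illustrative:

```python
from typing import Sequence, Tuple

def roi_row_bounds(vanishing_ys: Sequence[float],
                   preset_vp_y: float,
                   height: int,
                   y_invalid: int) -> Tuple[int, int]:
    if vanishing_ys:
        # Step S411: average the vanishing-point ordinates of the detected targets.
        upper = int(sum(vanishing_ys) / len(vanishing_ys))
    else:
        # Step S412: translate the preset vanishing point upward by 0.25 * height.
        upper = int(preset_vp_y - 0.25 * height)
    return upper, y_invalid  # lower boundary: the preset invalid-region line
```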
Referring to fig. 5, fig. 5 is a flowchart of an embodiment of step S600 in fig. 1, where step S600 includes the following steps S610 and S620 after determining that the video image is in an occlusion state when the gray average value is out of the threshold range of the gray average value.
Step S610: and when the gray average value is within the gray average value threshold value range, calculating the gray gradient of the region of interest.
When the gray average value is within the gray mean threshold range, that is, between 30 and 220, the shielding state of the camera system cannot be determined accurately from the gray average alone, so the gray gradient of the region of interest is calculated as a further check.
The gray gradient can be obtained by convolving the image with a Sobel operator or a Laplace operator. The third-order Sobel operator in the X-axis direction is

        [ -1  0  +1 ]
S_x  =  [ -2  0  +2 ]
        [ -1  0  +1 ]

and transposing it gives the operator in the Y-axis direction. When computing the image gradient, the image is convolved with the third-order Sobel operator in both the x-axis and y-axis directions to obtain the components G_X and G_Y, after which the magnitude of the gray gradient is calculated. The components are first clamped to the 8-bit range:

G_X = max(0, min(|G_X|, 255))
G_Y = max(0, min(|G_Y|, 255))

and the clamped components are then combined (for example, grad = G_X + G_Y), where grad is the gray gradient.
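A minimal sketch of this computation with OpenCV's Sobel operator follows the clamped-component formulas above; averaging the per-pixel magnitude over the region of interest for comparison with the threshold is an assumption, and the function name is illustrative:

```python
import cv2
import numpy as np

def mean_gray_gradient(gray_roi: np.ndarray) -> float:
    gx = cv2.Sobel(gray_roi, cv2.CV_64F, 1, 0, ksize=3)  # x-axis component G_X
    gy = cv2.Sobel(gray_roi, cv2.CV_64F, 0, 1, ksize=3)  # y-axis component G_Y
    gx = np.clip(np.abs(gx), 0, 255)  # G_X = max(0, min(|G_X|, 255))
    gy = np.clip(np.abs(gy), 0, 255)  # G_Y = max(0, min(|G_Y|, 255))
    grad = gx + gy                    # per-pixel gradient magnitude
    return float(grad.mean())         # averaged for the threshold comparison
```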
Step S620: and when the gray level is smaller than the gray level threshold, determining that the video image is in a shielding state.
The gray gradient threshold is a value selected according to prior knowledge and set according to the actual situation. When the gray gradient falls below the gray gradient threshold, the camera's output result is directly the shielding state. When the camera is in the shielding state, the vehicle's auxiliary driving system stops controlling the vehicle, reducing the driving risk. Optionally, the gray gradient threshold may be 30.
Referring to fig. 6, fig. 6 is a flowchart illustrating an embodiment after step S600 in fig. 1. After step S620 determines that the video image is in the shielding state when the gray gradient is less than the gray gradient threshold, the method includes the following steps S710 to S730.
Step S710: a visually perceived image over a period of time is acquired.
The visual perception images acquired by the vehicle-mounted system within a time period t are S1, S2, …, St. These images are filtered, the filtering satisfying the following formula:

S_filtered = (1/t) · Σ_{i=1}^{t} S_i

where t represents the time period, i denotes the i-th visual perception image, and S_i represents the i-th visual perception image. Since t, i and S_i are known values, the filtering is performed by the above equation.
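A sketch of this temporal filtering follows, assuming each per-frame result S_i is recorded as 1 for the shielding state and 0 otherwise; the majority vote of steps S720/S730 then corresponds to thresholding the mean at 0.5, and the function name is illustrative:

```python
from typing import Sequence

def filtered_occlusion_state(frame_states: Sequence[bool]) -> bool:
    # S_filtered = (1/t) * sum(S_i); minority frames are treated as invalid,
    # so the majority of frames decides the state of the whole period.
    s_filtered = sum(frame_states) / len(frame_states)
    return s_filtered >= 0.5
```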
Step S720: when the number of non-shielding states in the plurality of visual sense images is smaller than the number of shielding states, determining that the visual sense images in the non-shielding states are invalid visual sense images, and determining that the visual sense images in the time period are shielding states.
When the number of non-shielding states among the plurality of visual perception images is smaller than the number of shielding states, the images in the non-shielding state are confirmed as abnormal frames; such abnormal frames account for only a small proportion of all visual perception images within the period. The visual perception images in the non-shielding state are determined to be invalid and filtered out, and the result output for the period is the shielding state.
Step S730: when the number of the shielding states in the plurality of visual sense images is smaller than the number of the non-shielding states, the visual sense images in the shielding states are determined to be invalid visual sense images, and the visual sense images in the time period are in the non-shielding states.
When the number of shielding states among the plurality of visual perception images is smaller than the number of non-shielding states, the images in the shielding state are confirmed as abnormal frames. The visual perception images in the shielding state are determined to be invalid and filtered out, and the result output for the period is the non-shielding state.
Referring to fig. 7, fig. 7 is a schematic flow chart of another embodiment after step S400 in fig. 1, where step S400 specifically includes the following steps S510 and S520 after determining a region of interest in a visual perception image.
Step S510: and calculating the gray standard deviation value of the region of interest.
And the vehicle-mounted system judges whether the video image is in a shielding state or in a non-shielding state according to the standard deviation value by calculating the gray standard deviation value of the region of interest. And judging the shielding state of the visual perception image by combining the gray standard deviation value on the basis of calculating the gray average value, so that the accuracy of judging the visual perception image can be improved. Optionally, according to the actual situation, the shielding state of the visual perception image can be judged at least through the gray average value.
The gray standard deviation satisfies the following formula:

std = sqrt( (1/n) · Σ_{i=1}^{n} (pix_i − mean)² )

where std represents the gray standard deviation, n represents the number of pixels of the region of interest, i denotes the i-th pixel of the region of interest, and pix_i represents the pixel value of the i-th pixel. Since n, i and pix_i are known values, the gray standard deviation is obtained by the above equation.
Step S520: and when the gray average value is within the gray average value threshold value range, calculating the gray gradient of the region of interest.
When the gray average value is within the gray mean threshold range, that is, between 30 and 220, the shielding state of the camera system cannot be determined accurately from the gray average alone, so the gray gradient of the region of interest is calculated as a further check.
Step S530: and when the gray level is smaller than the gray level threshold value and the standard deviation value is smaller than the standard deviation value threshold value, determining that the video image is in a shielding state.
The standard deviation threshold is likewise selected from prior knowledge, optionally a value between a minimum threshold Tminstd of 20 and a maximum threshold Tmaxstd of 30. Optionally, the standard deviation threshold may be 20; when the gray standard deviation is smaller than the standard deviation threshold, that is, smaller than 20, and the gray gradient is smaller than 30, the camera's output result is the shielding state. When the camera is in the shielding state, the vehicle's auxiliary driving system stops controlling the vehicle, reducing the driving risk.
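A sketch of the combined decision follows, using the example thresholds from the text (mean range 30 to 220, gradient threshold 30, standard deviation threshold 20); the standard deviation is the ROI pixel standard deviation per the formula above, and the function names are illustrative:

```python
import numpy as np

def occlusion_decision(gray_roi: np.ndarray, mean_grad: float) -> bool:
    mean = float(gray_roi.mean())
    if mean < 30 or mean > 220:          # step S600: mean outside threshold range
        return True
    std = float(gray_roi.std())          # std = sqrt((1/n) * sum((pix_i - mean)^2))
    return mean_grad < 30 and std < 20   # step S530: low gradient and low spread
```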
Referring to fig. 8, fig. 8 is a flowchart illustrating an embodiment of step S100 in fig. 1, where step S100 acquires a visual perception image, and specifically includes the following steps S110 to S130.
Step S110: and obtaining the types of the target objects in the video images acquired by the camera, and obtaining the number of the target objects.
Several categories of target objects, such as the number of obstacles, the lane-line length, the number of street lamps, the number of traffic signs and the number of traffic lights, can be configured in the vehicle-mounted system according to prior knowledge, and the count of each category is then obtained.
Step S120: if the number of the target objects in different categories is greater than or equal to the corresponding preset threshold value, the region of interest is determined in the visual perception image.
The preset threshold of each target object category can be set in the vehicle-mounted system according to prior knowledge, and the thresholds may differ under different vehicle body signals. For example, on a daytime urban road, the preset threshold for the number of obstacles may be set to 10, the lane-line length to 50 cm, the number of street lamps to 8, and the number of traffic signs and the number of traffic lights each to 5. The counted numbers of obstacles, lane-line length, street lamps, traffic signs and traffic lights are compared with these preset thresholds to determine whether the video image of the camera is in a shielding state. If the count for a category is greater than or equal to the corresponding preset threshold, the visual perception image is in a non-shielding state, and the region of interest is determined in the visual perception image, as in the sketch below. For example, when the number of street lamps in the video image is greater than or equal to 8, the region of interest is determined in the visual perception image for further analysis of the shielding state of the video image.
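A sketch of this per-category gating (steps S110 to S120), using the example daytime urban thresholds above; the dictionary keys and function name are illustrative, not from the patent:

```python
PRESET_THRESHOLDS = {        # example daytime urban values from the text
    "obstacles": 10,
    "lane_line_length_cm": 50,
    "street_lamps": 8,
    "traffic_signs": 5,
    "traffic_lights": 5,
}

def perception_indicates_clear(counts: dict) -> bool:
    # Step S120: if any category reaches its preset threshold, the image is
    # treated as non-occluded and the region of interest is determined.
    return any(counts.get(name, 0) >= th for name, th in PRESET_THRESHOLDS.items())
```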
In this embodiment, the first result is the determination of step S120 and the second result is the determination of steps S300 to S600, so steps S110 to S120 give a preliminary judgment of the camera's shielding state. Combining the first and second results yields a more accurate shielding result; using only one of them as the final result is more efficient but carries a higher probability of misjudgment. The choice can be made according to the actual situation.
In some embodiments, when the first result is used as the final result, it may also be passed through the filtering of steps S710 to S730, so that the non-shielding state can be confirmed preliminarily and driving safety ensured at an early stage.
In this application, when the vehicle is in the running state and the auxiliary driving function is started, the vehicle-mounted system processes the video images collected by the camera to obtain visual perception images, determines a region of interest in the visual perception image, and then computes the gray average, gray gradient, gray standard deviation and the like. The gray average of the region of interest determines whether the video image is in the shielding state or the non-shielding state, and the gray gradient and gray standard deviation refine this judgment on top of the gray average. The auxiliary comparison of the number of target objects in the video image further increases the accuracy of the shielding judgment. In this way an accurate judgment of the shielding state is obtained, reducing the driving risk.
Referring to fig. 9, fig. 9 is a schematic structural diagram of an in-vehicle system according to an embodiment of the present invention. The application also provides a vehicle-mounted system 10 comprising an acquisition module 11, a region selection module 12, a calculation module 13 and a comparison module 14. The acquisition module 11 is used for acquiring a visual perception image, which is obtained by processing a video image collected by a camera and represents target object information in the video image; the region selection module 12 is used for determining a region of interest in the visual perception image; the calculation module 13 is used for determining the gray average of the region of interest; and the comparison module 14 is used for comparing the gray average with the mean threshold range and determining that the video image is in the shielding state when the gray average is outside that range. The shielding state of the camera can thus be judged accurately and rapidly, improving driving safety.
In this embodiment, the modules of the vehicle-mounted system 10 shown in fig. 9 may be combined into one or several units, or some of them may be split into multiple functionally smaller sub-units, which implements the same operations without affecting the technical effects of the embodiments of the present application. The above modules are divided based on logical functions; in practical applications, the function of one module may be realized by multiple units, or the functions of multiple modules may be realized by one unit. In other embodiments of the present application, the in-vehicle system 10 may also include other units, and in practice these functions may be realized with the assistance of other units or through the cooperation of multiple units.
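A minimal structural sketch of how the four modules might compose, reusing the mean-threshold check above; class and method names are illustrative, not from the patent:

```python
import numpy as np

class OnboardOcclusionSystem:
    """Illustrative composition of the four modules of Fig. 9."""

    def __init__(self, upper: int, lower: int):
        self.upper, self.lower = upper, lower   # ROI row boundaries

    def acquire(self, frame: np.ndarray) -> np.ndarray:
        return frame                            # acquisition module 11 (stand-in)

    def select_region(self, image: np.ndarray) -> np.ndarray:
        return image[self.upper:self.lower, :]  # region selection module 12

    def compute_mean(self, roi: np.ndarray) -> float:
        return float(roi.mean())                # calculation module 13

    def compare(self, mean: float) -> bool:
        return mean < 30 or mean > 220          # comparison module 14

    def detect(self, frame: np.ndarray) -> bool:
        roi = self.select_region(self.acquire(frame))
        return self.compare(self.compute_mean(roi))
```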
Referring to fig. 10, fig. 10 is a schematic view of a vehicle according to an embodiment of the present invention. The application also provides a vehicle 20 comprising the vehicle-mounted system 10. A vehicle 20 equipped with the above vehicle-mounted system 10 is safer; the details are not repeated here.
Referring to fig. 11, fig. 11 is a schematic diagram illustrating the structure of a storage medium according to an embodiment of the present invention. The present application also provides a computer readable storage medium 30 on which a computer program 31 is stored; when its program instructions are executed by a processor, the above-described method is performed, achieving the advantageous effects of that method.
In this embodiment, the computer readable storage medium 30 may be a medium that can store program instructions, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc, or it may be a server storing the program instructions; the server may send the stored program instructions to other devices for execution, or may execute the stored program instructions itself.
In this application, unless explicitly stated and limited otherwise, the terms "disposed", "connected" and the like are to be construed broadly: the connection may be, for example, fixed, detachable, or integral; mechanical or electrical; direct, or indirect through an intermediate medium; and it may be communication between the interiors of two elements or an interaction relationship between two elements. The specific meanings of these terms in this application will be understood by those of ordinary skill in the art as the case may be.
In the description of the present specification, reference to the term "some embodiments" or the like means that a particular feature, structure, material, or characteristic described in connection with the embodiment is included in at least one embodiment of the present application. In this specification, schematic representations of the above terms are not necessarily directed to the same embodiment or example. Furthermore, the particular features, structures, materials, or characteristics described may be combined in any suitable manner in any one or more embodiments or examples. Furthermore, the different embodiments or examples described in this specification and the features of the different embodiments or examples may be combined and combined by those skilled in the art without contradiction.
While embodiments of the present application have been shown and described, it should be understood that the above embodiments are illustrative and not to be construed as limiting the application; variations, modifications, substitutions and alterations may be made to the embodiments by those of ordinary skill in the art within the scope of the application, and all changes and modifications that fall within the spirit and scope of the invention as defined by the claims and the specification are intended to be covered thereby.

Claims (12)

1. A method for detecting occlusion of a camera, comprising:
the method comprises the steps of obtaining a visual perception image, wherein the visual perception image is obtained by processing a video image acquired by a camera, and the visual perception image is used for representing target object information in the video image;
determining a region of interest in the visual perception image;
determining a gray average value of the region of interest;
and when the gray average value is out of the gray average value threshold range, determining that the video image is in an occlusion state.
2. The method of detection according to claim 1, wherein the step of determining a region of interest in the visually perceived image comprises:
acquiring vanishing point coordinates in the visual perception image, and determining an upper boundary of the visual perception image according to the vanishing point coordinates;
taking a line which passes through the upper end of an ineffective area in parallel along the X axis direction in the visual perception image as a lower boundary, wherein the ineffective area comprises an area shielded by a vehicle engine cover;
and determining a region of interest according to the upper boundary and the lower boundary.
3. The method according to claim 2, wherein the step of acquiring vanishing point coordinates in the visually-perceived image and determining an upper boundary of the visually-perceived image from the vanishing point coordinates includes:
when a target object exists in the visual perception image, acquiring vanishing point coordinates of different target objects, obtaining a coordinate mean value according to the vanishing point coordinates of a plurality of target objects, and taking a line which passes through the coordinate mean value in parallel along the X axis as an upper boundary;
when no target object exists in the visual perception image, presetting vanishing point coordinates, and taking a line parallel to the X axis and passing through the target object vanishing point coordinates as an upper boundary after upwards translating.
4. The method according to claim 1, wherein after the step of determining that the video image is in an occlusion state when the gray average value is outside the gray average value threshold range, the method comprises:
when the gray average value is within the gray average value threshold range, calculating a gray gradient of the region of interest;
and when the gray gradient is smaller than a gray gradient threshold, determining that the video image is in an occlusion state.
5. The method according to claim 4, wherein after the step of determining that the video image is in an occlusion state when the gray gradient is less than the gray gradient threshold, the method comprises:
acquiring the visual perception images within a period of time;
when the number of non-occlusion states among the plurality of visual perception images is smaller than the number of occlusion states, determining that the visual perception images in the non-occlusion state are invalid visual perception images, and determining that the visual perception images in the period are in the occlusion state;
and when the number of occlusion states among the plurality of visual perception images is smaller than the number of non-occlusion states, determining that the visual perception images in the occlusion state are invalid visual perception images, and determining that the visual perception images in the period are in the non-occlusion state.
6. The method of detecting according to claim 1, wherein the step after determining the region of interest in the visually perceived image further comprises:
calculating a gray standard deviation value of the region of interest;
when the gray average value is within the gray average value threshold range, calculating a gray gradient of the region of interest;
and when the gray gradient is smaller than a gray gradient threshold and the standard deviation value is smaller than a standard deviation threshold, determining that the video image is in an occlusion state.
7. The method of claim 1, wherein the step of determining a gray scale mean value of the region of interest comprises:
acquiring the number of pixels of the region of interest;
determining a gray value of each pixel;
and obtaining the gray average value according to the gray values.
8. The method of detecting according to claim 1, wherein the step of acquiring the visually perceived image comprises:
acquiring the target object types in the video image acquired by the camera to obtain the number of the target objects;
and when the number of the target objects in different categories is greater than or equal to a corresponding preset threshold value, determining a region of interest in the visual perception image.
9. The method of detecting according to claim 1, wherein the step prior to the step of acquiring the visually perceived image comprises:
acquiring the current state of the vehicle and the starting state of the auxiliary driving function;
when the current state of the vehicle is a running state and the auxiliary driving function is a starting state, a visual perception image of the camera is acquired.
10. An in-vehicle system for performing the method of detecting camera occlusion of any one of claims 1 to 9, characterized in that the in-vehicle system comprises:
the acquisition module is used for acquiring a visual perception image, wherein the visual perception image is obtained by processing a video image acquired by a camera, and the visual perception image is used for representing target object information in the video image;
the region selection module is used for determining a region of interest in the visual perception image;
the calculation module is used for determining the gray average value of the region of interest;
and the comparison module is used for comparing the gray average value with a mean value threshold range, and determining that the video image is in an occlusion state when the gray average value is outside the mean value threshold range.
11. A vehicle comprising the in-vehicle system of claim 10.
12. A computer readable storage medium having stored thereon program instructions, which when executed by a processor, implement the method of any of claims 1 to 9.
Priority Applications (1)

CN202311760196.XA (priority date 2023-12-19, filing date 2023-12-19): Camera shielding detection method, vehicle-mounted system, vehicle and storage medium (Pending)

Publications (1)

CN117893755A, published 2024-04-16

Family

ID=90640244


Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination