CN114332721A - Camera device shielding detection method and device, electronic equipment and storage medium - Google Patents

Camera device shielding detection method and device, electronic equipment and storage medium

Info

Publication number
CN114332721A
CN114332721A
Authority
CN
China
Prior art keywords
image frame
current image
pixel
determining
camera device
Prior art date
Legal status
Pending
Application number
CN202111668682.XA
Other languages
Chinese (zh)
Inventor
Li Yangyang (李阳阳)
Xu Liang (许亮)
Mao Ningyuan (毛宁元)
Current Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Original Assignee
Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Shanghai Sensetime Lingang Intelligent Technology Co Ltd filed Critical Shanghai Sensetime Lingang Intelligent Technology Co Ltd
Priority to CN202111668682.XA priority Critical patent/CN114332721A/en
Publication of CN114332721A publication Critical patent/CN114332721A/en
Priority to PCT/CN2022/124951 priority patent/WO2023124387A1/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 17/00 Diagnosis, testing or measuring for television systems or their details

Abstract

The present disclosure provides a camera device occlusion detection method and apparatus, an electronic device, and a storage medium. The method includes: acquiring video data of a scene area through a camera device; performing face detection on a current image frame in the video data and, if no face is detected, determining the pixel average value of the current image frame; if the pixel average value of the current image frame is lower than a preset reference threshold, inverting the pixel values of the pixel points in the current image frame to obtain an inverted image; and determining whether the camera device is occluded based on the inverted image. According to the embodiments of the present disclosure, the accuracy of camera device occlusion detection can be improved.

Description

Camera device shielding detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of image processing technologies, and in particular, to a method and an apparatus for detecting occlusion of an image capturing device, an electronic device, and a storage medium.
Background
With the technical development of image processing, scene analysis and behavior decision based on images are applied to more and more scenes. In most scenes, image processing relies on a camera device to acquire high-quality images, and if the camera device is shielded, images containing effective information are difficult to obtain, so that the difficulty of scene analysis and behavior decision based on the images is increased.
Taking a driver behavior detection scene in the vehicle cabin as an example, the camera device in the cabin can be used to restrain the driving behavior of the driver, thereby reducing the probability of traffic accidents and helping improve driving safety. However, in actual use the camera device may become occluded, and if it is occluded, driver behavior cannot be accurately detected. It is therefore important to detect whether the camera device is occluded and to improve the accuracy of such occlusion detection.
Disclosure of Invention
The embodiment of the disclosure at least provides a method and a device for detecting the shielding of a camera device, an electronic device and a storage medium, which not only can realize the shielding detection of the camera device, but also can improve the detection precision.
The embodiment of the disclosure provides a method for detecting shielding of a camera device, which includes:
acquiring video data of a scene area through a camera device;
performing face detection on a current image frame in the video data, and determining a pixel average value of the current image frame under the condition that a face is not detected;
performing negation processing on pixel values of pixel points in the current image frame under the condition that the pixel average value of the current image frame is lower than a preset reference threshold value to obtain an image subjected to negation processing;
and determining whether the camera is blocked or not based on the image after the negation processing.
In the embodiment of the disclosure, face detection is performed on the current image frame in the video data; the pixel average value of the current image frame is determined when no face is detected; the pixel values of the pixel points in the current image frame are inverted when the pixel average value is lower than a preset reference threshold; and whether the camera device is occluded is then determined based on the inverted image. In this way, even when the image is dark overall, the pixel values are mapped to a range that is easier to analyze, which improves the accuracy of the occlusion determination.
In a possible implementation, the inverting the pixel values of the pixel points in the current image frame includes:
determining a target pixel point from pixel points in the current image frame, wherein the pixel value of the target pixel point is greater than a first preset pixel value and less than a second preset pixel value;
and performing inversion processing on the pixel value of the target pixel point.
In the embodiment of the disclosure, inversion processing is performed on the pixel values in the current image frame that are greater than the first preset pixel value and less than the second preset pixel value, so that darker pixel points in the current image frame can be identified based on the two preset pixel values and their values inverted, which improves the accuracy of judging the occlusion state of the camera device.
In one possible embodiment, the first preset pixel value and the second preset pixel value are determined by an imaging parameter of the imaging device.
In one possible embodiment, the determining whether the image capturing device is occluded based on the image after the negating process includes:
determining a maximum connected domain in the image after the negation processing;
and under the condition that the area of the maximum connected domain is larger than a preset area threshold value, determining that the shielding detection result of the current image frame is that the camera device is shielded.
In the embodiment of the disclosure, if the area of the maximum connected domain in the image after the inversion processing is larger than the preset area threshold, it is determined that the image capturing device is shielded, so that the accuracy of judging the shielding state of the image capturing device can be improved according to the area of the maximum connected domain.
In a possible implementation manner, the determining, in the case that the area of the maximum connected domain is greater than a preset area threshold, that the occlusion detection result of the current image frame is that the image capturing apparatus is occluded includes:
determining the maximum connected domain in the image after the inversion processing of at least one frame of image before and/or after the current image frame in the video data under the condition that the area of the maximum connected domain is larger than the preset area threshold;
determining an average value of areas of the largest connected domains in the image subjected to the inversion processing of the current image frame and in the image subjected to the inversion processing of at least one frame of image before and/or after the current image frame;
and determining that the shielding detection result of the current image frame is that the camera device is shielded under the condition that the average value is larger than the preset area threshold value.
In the embodiment of the disclosure, when the area of the maximum connected domain is greater than the preset area threshold, the average value of the areas of the maximum connected domains in the inverted image of the current image frame and in the inverted image of at least one frame of image before and/or after the current image frame is determined, and the camera device is determined to be occluded only when this average value is also greater than the preset area threshold, so that misjudgments caused by sudden changes in occlusion can be avoided.
In one possible embodiment, the method further comprises:
and outputting first prompt information under the condition that the camera device is determined to be blocked.
In the embodiment of the disclosure, if it is determined that the camera device is occluded, first prompt information indicating that the camera device is occluded is output, so that the user can be promptly notified that the camera device is occluded.
In one possible embodiment, the outputting the first prompt information when it is determined that the image capturing apparatus is occluded includes:
determining the continuous shielding time of the camera device according to the detection result of the camera device of each frame of image in the video data;
and outputting the first prompt message under the condition that the continuous shielding time reaches the preset time.
In the embodiment of the disclosure, the continuous occlusion time of the camera device is determined according to the camera device detection result of each frame of image in the video data, and the first prompt information is output only when the continuous occlusion time reaches the preset time, which reduces frequent output of prompt information.
In a possible implementation, the determining the average value of the pixels of the current image frame in the case that the human face is not detected includes:
under the condition that a human face is not detected, carrying out noise reduction processing on the current image frame;
and determining the pixel average value of the current image frame after the noise reduction processing.
In the embodiment of the disclosure, the noise reduction processing is performed on the current image frame, so that imaging snowflake noise can be reduced, the definition of the image is improved, and the determination precision of the pixel average value is further improved.
In one possible embodiment, the scene area includes a vehicle driving area, and the method further includes:
acquiring the state information of the vehicle under the condition that the camera device is determined not to be shielded;
and generating second prompt information under the condition that the vehicle is in a running state at the moment corresponding to the current image frame according to the state information of the vehicle.
The embodiment of the present disclosure further provides a camera device shielding detection device, including:
the acquisition module is used for acquiring video data of a scene area through the camera device;
the detection module is used for carrying out face detection on a current image frame in the video data and determining the pixel average value of the current image frame under the condition that a face is not detected;
the processing module is used for performing inversion processing on pixel values of pixel points in the current image frame under the condition that the pixel average value of the current image frame is lower than a preset reference threshold value to obtain an inverted image;
and the judging module is used for determining whether the camera device is blocked or not based on the image subjected to the negation processing.
In a possible implementation, the processing module is specifically configured to:
determining a target pixel point from pixel points in the current image frame, wherein the pixel value of the target pixel point is greater than a first preset pixel value and less than a second preset pixel value;
and performing inversion processing on the pixel value of the target pixel point.
In one possible embodiment, the first preset pixel value and the second preset pixel value are determined by an imaging parameter of the imaging device.
In a possible implementation manner, the determining module is specifically configured to:
determining a maximum connected domain in the image after the negation processing;
and under the condition that the area of the maximum connected domain is larger than a preset area threshold value, determining that the shielding detection result of the current image frame is that the camera device is shielded.
In a possible implementation manner, the determining module is specifically configured to:
determining the maximum connected domain in the image after the inversion processing of at least one frame of image before and/or after the current image frame in the video data under the condition that the area of the maximum connected domain is larger than the preset area threshold;
determining an average value of areas of the largest connected domains in the image subjected to the inversion processing of the current image frame and in the image subjected to the inversion processing of at least one frame of image before and/or after the current image frame;
and determining that the shielding detection result of the current image frame is that the camera device is shielded under the condition that the average value is larger than the preset area threshold value.
In a possible embodiment, the apparatus further comprises:
and the output module is used for outputting first prompt information under the condition that the camera device is determined to be blocked.
In a possible implementation, the output module is specifically configured to:
determining the continuous shielding time of the camera device according to the detection result of the camera device of each frame of image in the video data;
and outputting the first prompt message under the condition that the continuous shielding time reaches the preset time.
In a possible implementation, the detection module is specifically configured to:
under the condition that a human face is not detected, carrying out noise reduction processing on the current image frame;
and determining the pixel average value of the current image frame after the noise reduction processing.
In one possible embodiment, the scene area comprises a driving area of the vehicle,
the acquisition module is further used for acquiring the state information of the vehicle under the condition that the camera device is determined not to be shielded;
the output module is further used for generating second prompt information under the condition that the vehicle is in a running state at the moment corresponding to the current image frame according to the state information of the vehicle.
An embodiment of the present disclosure further provides an electronic device, including: the device comprises a processor, a memory and a bus, wherein the memory stores machine-readable instructions executable by the processor, when the electronic device runs, the processor and the memory communicate through the bus, and the machine-readable instructions are executed by the processor to execute the camera occlusion detection method in any one of the possible embodiments.
The present disclosure also provides a computer-readable storage medium, where a computer program is stored on the computer-readable storage medium, and when the computer program is executed by a processor, the method for detecting occlusion in an image capturing device according to any one of the above-mentioned possible embodiments is executed.
In order to make the aforementioned objects, features and advantages of the present disclosure more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present disclosure, the drawings required in the embodiments are briefly described below. The drawings here, which are incorporated in and form a part of the specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the technical solutions of the present disclosure. It should be understood that the following drawings depict only certain embodiments of the disclosure and are therefore not to be considered limiting of its scope; those of ordinary skill in the art can derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a first method for detecting occlusion of an image capturing device according to an embodiment of the present disclosure;
FIG. 2 illustrates a flow chart of a method for determining a pixel average value of a current image frame provided by an embodiment of the present disclosure;
fig. 3 is a flowchart illustrating a method for inverting pixel values of pixel points in a current image frame according to an embodiment of the present disclosure;
fig. 4 shows a flowchart of a second method for detecting occlusion of an image capturing device according to an embodiment of the present disclosure;
fig. 5 is a flowchart illustrating a third method for detecting occlusion of an image capturing apparatus according to an embodiment of the disclosure;
fig. 6 shows a flowchart of a fourth method for detecting occlusion of an image capturing apparatus according to an embodiment of the present disclosure;
FIG. 7 is a flowchart illustrating a method for outputting prompt information based on image determination results according to an embodiment of the disclosure;
fig. 8 shows a schematic structural diagram of an image capturing device occlusion detection device provided in an embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of another image capturing device occlusion detection device provided in the embodiment of the present disclosure;
fig. 10 shows a schematic structural diagram of an electronic device provided in an embodiment of the present disclosure.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are only a part of the embodiments of the present disclosure, not all of the embodiments. The components of the embodiments of the present disclosure, generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present disclosure, presented in the figures, is not intended to limit the scope of the claimed disclosure, but is merely representative of selected embodiments of the disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments of the disclosure without making creative efforts, shall fall within the protection scope of the disclosure.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.
The term "and/or" herein merely describes an associative relationship, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
With the technical development of image processing, scene analysis and behavior decision based on images are applied to more and more scenes. In most scenes, image processing relies on a camera device to acquire high-quality images, and if the camera device is shielded, images containing effective information are difficult to obtain, so that the difficulty of scene analysis and behavior decision based on the images is increased.
Taking a driver behavior detection scene in the vehicle cabin as an example, the camera device in the vehicle cabin can be used for restraining the driving behavior of the driver, so that the probability of traffic accidents is reduced, and the driving safety is assisted to be improved. However, in actual use, the image pickup device may be shielded, and if the image pickup device is shielded, the driver behavior cannot be accurately detected.
Research shows that the prior art does include methods for detecting whether an image pickup device is occluded, for example by determining occlusion through face recognition; however, such methods are prone to false or missed judgments when the overall brightness of the image is low.
In view of the above problems, the embodiments of the present disclosure provide a camera device occlusion detection method. Video data of a scene area is acquired by the camera device; face detection is then performed on a current image frame in the video data; the pixel average value of the current image frame is determined when no face is detected; when the pixel average value of the current image frame is lower than a preset reference threshold, the pixel values of the pixel points in the current image frame are inverted; and whether the camera device is occluded is then determined based on the inverted image. In this way, when no face is detected, the current image frame is further analyzed, and by inverting the pixel points of a low-brightness image, the pixel values can be mapped to a range that is easier to analyze, which improves the accuracy of determining the occlusion state of the camera device.
The execution subject of the camera device occlusion detection method provided by the embodiments of the present disclosure is generally an electronic device or a server or another processing device with certain computing capability; the electronic device may be a mobile device, a handheld device, a computing device, a vehicle-mounted device, a wearable device, or the like. The server may be an independent physical server, a server cluster or distributed system formed by a plurality of physical servers, or a cloud server providing basic cloud computing services such as cloud services, cloud databases, cloud computing, cloud storage, big data, and artificial intelligence platforms. In some possible implementations, the camera device occlusion detection method may be implemented by a processor calling computer-readable instructions stored in a memory.
The following describes a method for detecting occlusion in an imaging device according to an embodiment of the present disclosure.
Referring to fig. 1, a flowchart of a first method for detecting occlusion of an image capturing apparatus according to an embodiment of the present disclosure is shown, where the method includes the following steps S101 to S104:
and S101, acquiring video data of a scene area through an image pickup device.
The camera device is, for example, a device capable of recording video of the current scene area in real time. Taking a vehicle cabin scene as an example, the scene area includes a driving area, and the camera device may be installed inside the vehicle and configured to acquire video data of a driver in the driving area of the vehicle.
The camera device is necessary hardware of a Driver Monitoring System (DMS). The driver monitoring system acquires images using the camera device and, through technologies such as visual tracking, target detection, and action recognition, intelligently detects and reminds the driver in real time about fatigue driving, distracted driving, dangerous actions, and the like, so as to reduce the probability of traffic accidents. Detecting the occlusion state of the camera device makes it possible, when the camera device is occluded, to promptly prompt that it should be restored to a normal unoccluded state, which helps improve driving safety.
It is understood that in other embodiments, the scene area may also be a panoramic area in the vehicle cabin or other areas in the vehicle cabin including the driving area of the vehicle, which is not limited herein.
The vehicle driving region is a region in which the driver can perform the vehicle driving operation. Vehicle driving operations include, but are not limited to, steering wheel control operations, accelerator pedal control operations, and the like. When a driver drives a vehicle in a vehicle driving area, the camera device can carry out video acquisition on the driving behavior and the physiological state of the driver in the vehicle cabin in real time.
It should be noted that the number of the image capturing devices may be set according to actual needs. Specifically, the setting may be performed according to the shooting angle and the shooting range of the image capturing device, or according to the cost, for example, one image capturing device may be used, and two or three image capturing devices may also be used, which is not limited herein.
Video data refers to a continuous sequence of images, which is essentially composed of a set of consecutive images, wherein an image Frame (Frame) is the smallest visual unit that makes up the video data, and is a static image. Temporally successive image frame sequences are composited together to form a motion video. Therefore, in order to facilitate subsequent detection, it is necessary to extract multiple frames of images in the video data.
Illustratively, since each second of video data generally includes multiple frames of images (for example, 24 frames per second), frame extraction may be performed when extracting images from the real-time video data. Frames may be extracted according to a preset number of frames, for example one frame every 20 frames, or at a preset time interval, for example one frame every 10 ms.
The preset number of frames and time interval may be set according to actual requirements, and are not limited herein.
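As an illustration of this frame-extraction step, the following minimal Python sketch (the use of OpenCV and the interval value are assumptions for illustration, not part of the disclosure) yields one grayscale frame out of every N frames of a video source:

```python
import cv2

def extract_frames(source, frame_interval=20):
    """Yield one grayscale frame every `frame_interval` frames from a video source."""
    cap = cv2.VideoCapture(source)   # `source` may be a file path or a camera index
    index = 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if index % frame_interval == 0:
            yield cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        index += 1
    cap.release()
```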
S102, carrying out face detection on a current image frame in the video data, and determining the pixel average value of the current image frame under the condition that a face is not detected.
The current image frame refers to an image which needs to be detected and identified currently, an image with a time sequence before the current image frame in the video data is called a preamble image frame, and an image with a time sequence after the current image frame is called a subsequent image frame.
For example, after the multiple frames of images are extracted, face detection may be performed on the extracted images to determine whether a face exists in the current image frame. Specifically, if a face is detected in the current image frame, the region captured by the camera device contains a face and the camera device is not occluded; if no face is detected in the current image frame, the current image frame needs to be further analyzed to distinguish whether the face was not detected because the camera device is occluded or because no face is present in the acquisition area of the camera device, thereby improving judgment accuracy.
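The disclosure does not name a specific face detection model; as a stand-in, a minimal sketch using an OpenCV Haar cascade (an assumption for illustration only) could look like this:

```python
import cv2

# Stand-in detector; any face detection model could be substituted here.
FACE_DETECTOR = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def face_present(frame_gray):
    """Return True if at least one face is detected in the current image frame."""
    faces = FACE_DETECTOR.detectMultiScale(frame_gray, scaleFactor=1.1, minNeighbors=5)
    return len(faces) > 0
```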
In this embodiment, if the face is not detected in the current image frame, the pixel average value of the current image frame is further determined. The pixel average value of the current image frame refers to an average value of pixels of pixel points in the current image frame, and is used for judging the overall brightness degree of the current image frame.
S103, performing negation processing on the pixel values of the pixel points in the current image frame under the condition that the pixel average value of the current image frame is lower than a preset reference threshold value, and obtaining an image after negation processing.
After the pixel average value of the current image frame is determined, if the pixel average value of the current image frame is lower than a preset reference threshold, the current image frame is darker, and in order to improve the judgment precision, the pixel values of the pixel points in the current image frame can be inverted.
It can be understood that the preset reference threshold may be determined according to a scene where the camera device is located, for example, if the background of the driving area is mostly bright, the preset reference threshold at this time may be correspondingly set to a higher value, and if the background of the driving area is mostly dark, the preset reference threshold at this time may be correspondingly set to a lower value, so that the accuracy of the determination may be further improved.
In some embodiments, the preset reference threshold may be set to 80, and if the pixel value is lower than the preset reference threshold, it is determined that the whole current image frame is dark.
Specifically, inversion processing maps the pixel values of the image to a range that is easier to analyze: the pixel value of the current pixel point is subtracted from 255 to obtain the inverted pixel value. For example, if a pixel point in the current image frame has a pixel value of 103, its value after inversion is 255 - 103 = 152.
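A minimal sketch of the brightness check and inversion processing, assuming 8-bit grayscale frames and the example reference threshold of 80 mentioned above:

```python
REFERENCE_THRESHOLD = 80  # example value from the description above

def invert_if_dark(frame_gray):
    """Invert the frame (255 - v) when its pixel average is below the reference threshold.

    `frame_gray` is assumed to be an 8-bit grayscale NumPy array (e.g. as read by OpenCV).
    """
    if frame_gray.mean() < REFERENCE_THRESHOLD:
        return 255 - frame_gray   # e.g. a pixel value of 103 maps to 255 - 103 = 152
    return frame_gray
```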
And S104, determining whether the camera is blocked or not based on the image subjected to the negation processing.
And after the pixel values of the pixel points in the current image frame are subjected to negation processing, determining whether the camera device is shielded or not based on the image subjected to the negation processing.
In the embodiment of the disclosure, face detection is performed on the current image frame in the video data; the pixel average value of the current image frame is determined when no face is detected; the pixel values of the pixel points in the current image frame are inverted when the pixel average value is lower than a preset reference threshold; and whether the camera device is occluded is then determined based on the inverted image, so that even a dark image can be analyzed reliably and the accuracy of the occlusion determination is improved.
With reference to the above S102, referring to fig. 2, a flowchart of a method for determining a pixel average value of a current image frame according to an embodiment of the present disclosure includes the following steps S1021 to S1022:
and S1021, under the condition that the human face is not detected, carrying out noise reduction processing on the current image frame.
And S1022, determining the pixel average value of the current image frame after the noise reduction processing.
And after determining that the human face is not detected, performing noise reduction processing on the current image frame. Noise is an important cause of image interference, and various noises may exist in a frame of image in practical application, and these noises may be generated in transmission or quantization and other processes. Therefore, the process of reducing the noise in the image is important, and the accuracy of determining the pixel average value of the current image frame can be improved through the noise reduction processing.
In the present disclosure, noise reduction is performed using a Gaussian smoothing algorithm. Gaussian smoothing blurs the image so that it transitions evenly and smoothly, removing fine detail and reducing snowflake-like imaging noise, which makes the current image frame cleaner and helps improve the accuracy of determining the pixel average value. In other embodiments, a median filtering algorithm or a mean filtering algorithm may also be used for noise reduction, which is not specifically limited.
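For reference, the noise-reduction step could be sketched as below; the 5x5 kernel size is an assumption, and the alternative filters mentioned above are included:

```python
import cv2

def denoise(frame_gray, method="gaussian"):
    """Smooth the current image frame before computing its pixel average.

    Kernel sizes are illustrative assumptions, not values given in the disclosure.
    """
    if method == "gaussian":
        return cv2.GaussianBlur(frame_gray, (5, 5), 0)
    if method == "median":
        return cv2.medianBlur(frame_gray, 5)
    return cv2.blur(frame_gray, (5, 5))   # mean (box) filtering
```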
With reference to the above S103, referring to fig. 3, a flowchart of a method for performing an inversion process on pixel values of pixel points in a current image frame according to an embodiment of the present disclosure includes the following steps S1031 to S1032:
and S1031, determining a target pixel point from the pixel points in the current image frame, wherein the pixel value of the target pixel point is greater than a first preset pixel value and less than a second preset pixel value.
It can be understood that, although the pixel average value of the current image frame is lower than the preset reference threshold, not every pixel point in the frame necessarily has a low pixel value, and pixel points with higher pixel values do not need to be inverted. Therefore, before inversion, the target pixel points that require inversion need to be determined. In this embodiment, the pixel points in the current image frame whose pixel values are greater than the first preset pixel value and less than the second preset pixel value are taken as target pixel points. For example, the first preset pixel value may be set to 90 and the second preset pixel value to 150; if the pixel value of a pixel point lies between these two values, the pixel point is considered dark and its pixel value needs to be inverted.
It should be noted that the first preset pixel value and the second preset pixel value are determined by the imaging parameters of the camera device, which determine the quality of its imaging. That is, if the imaging effect of the camera device is better than that of other camera devices, the first and second preset pixel values can be set to higher values, so that the occlusion state of the camera device can be determined more accurately.
In the embodiment of the disclosure, a third preset pixel value is further set, and if the pixel value of the pixel point is smaller than the first preset pixel value, the pixel point is proved to be a black pixel point; if the pixel value of the pixel point is larger than the first preset pixel value and smaller than the second preset pixel value, the pixel point is proved to be darker; if the pixel value of the pixel point is larger than the second preset pixel value and smaller than the third preset pixel value, the pixel point is proved to be normal brightness; and if the pixel value of the pixel point is greater than the third preset pixel value, the pixel point is proved to be a white pixel point.
And S1032, performing negation processing on the pixel value of the target pixel point.
Illustratively, after it is determined that the pixel average value of the current image frame is lower than the preset reference threshold, the target pixel points are determined from the pixel points in the current image frame and their pixel values are inverted. In this way, the darker pixel points in the current image frame can be identified based on the first preset pixel value and the second preset pixel value and inverted, which improves the accuracy of determining the occlusion state of the camera device.
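The selective inversion of S1031 and S1032 might be sketched as follows, using the example first and second preset pixel values of 90 and 150 given above (the third preset pixel value shown is only an assumed example):

```python
FIRST_PRESET = 90     # below this: black pixel points (per the description above)
SECOND_PRESET = 150   # between FIRST and SECOND: darker target pixel points
THIRD_PRESET = 220    # assumed example; above this: white pixel points

def invert_target_pixels(frame_gray):
    """Invert only the darker target pixels whose values lie strictly between the presets.

    `frame_gray` is assumed to be an 8-bit grayscale NumPy array.
    """
    result = frame_gray.copy()
    mask = (frame_gray > FIRST_PRESET) & (frame_gray < SECOND_PRESET)
    result[mask] = 255 - frame_gray[mask]
    return result
```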
Referring to fig. 4, a flowchart of a second method for detecting occlusion of an image capturing device according to an embodiment of the present disclosure includes the following steps S201 to S210:
S201, video data of a scene area is acquired through an image pickup device.
The scene area may be, for example, a driving area of a vehicle. This step is similar to step S101 in fig. 1, and is not described herein again.
S202, carrying out face detection on a current image frame in the video data, and judging whether a face exists in the current image frame; if yes, go to step S210; if not, go to step S203.
This step is similar to step S102 in fig. 1, and is not described herein again.
S203, determining the pixel average value of the current image frame.
This step is similar to step S102 in fig. 1, and is not described herein again.
S204, judging whether the pixel average value of the current image frame is lower than a preset reference threshold value or not; if yes, go to step S206; if not, go to step S205.
For example, if the pixel average value of the current image frame is lower than the preset reference threshold, the current image frame is dark, and step S206 needs to be executed to invert the pixel points of the current image frame so as to improve judgment accuracy; if the pixel average value of the current image frame is not lower than the preset reference threshold, the brightness of the current image frame is normal, and step S205 needs to be executed to directly determine the maximum connected domain of the current image frame.
S205, determining the maximum connected domain of the current image frame.
The maximum connected domain refers to the largest image region composed of adjacent pixel points whose pixel values are the same or differ only within a certain tolerance. For example, the preset area threshold may be 60% of the area of the entire image; if the maximum connected domain exceeds the preset area threshold, it is determined that the camera device is occluded. It should be noted that the maximum connected domain is a closed region.
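A sketch of the connected-domain check follows. The disclosure only requires grouping adjacent pixels of the same or similar value; binarizing the (possibly inverted) image with Otsu's method before labeling is an implementation assumption, as is the 60% area ratio:

```python
import cv2

def largest_connected_area(image_gray):
    """Area in pixels of the maximum connected domain of an 8-bit grayscale image."""
    _, binary = cv2.threshold(image_gray, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    num_labels, _, stats, _ = cv2.connectedComponentsWithStats(binary)
    if num_labels <= 1:                              # only the background label is present
        return 0
    return int(stats[1:, cv2.CC_STAT_AREA].max())    # skip label 0 (background)

def frame_judged_occluded(image_gray, area_ratio=0.6):
    """Occluded if the maximum connected domain exceeds `area_ratio` of the whole image."""
    return largest_connected_area(image_gray) > area_ratio * image_gray.size
```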
In this embodiment, after determining the maximum connected component of the current image frame, step S208 may be directly performed.
And S206, performing negation processing on the pixel values of the pixel points in the current image frame to obtain an image after negation processing.
This step is similar to step S103 in fig. 1, and is not described again here.
And S207, determining the maximum connected domain in the image after the inversion processing.
This step is similar to step S205 and will not be described herein.
S208, judging whether the area of the maximum connected domain is larger than a preset area threshold value or not; if yes, go to step S209; if not, go to step S210.
For example, if the area of the maximum connected domain is greater than a preset area threshold, step S209 is executed to determine that the image capturing device is blocked; and if the area of the maximum connected domain is smaller than or equal to the preset area threshold, executing the step S210, and determining that the camera device is not shielded.
It should be noted that the preset area threshold may be determined by the area size of the whole image frame, that is, if the area of the image is larger, the preset area threshold may be set to be a larger threshold, so that the shielding state of the image capturing device may be determined more accurately.
S209, determining that the shielding detection result of the current image frame is that the camera is shielded.
S210, determining that the shielding detection result of the current image frame is that the camera device is not shielded.
Referring to fig. 5, a flowchart of a third method for detecting occlusion in an image capturing apparatus according to an embodiment of the present disclosure is shown, which is different from the method in fig. 4, and further includes the following steps S211 to S213 after step S208:
S211, determining the maximum connected domain in the image after the inversion processing of at least one frame of image before and/or after the current image frame in the video data.
It can be understood that, after it is determined that the area of the maximum connected domain of the inverted image is larger than the preset area threshold, the maximum connected domain of the inverted image of at least one frame of image before and/or after the current image frame in the video data needs to be further determined, so as to prevent misjudgment caused by a sudden change, where a sudden change means that the camera device suddenly becomes occluded or suddenly becomes unoccluded.
S212, determining the average value of the areas of the maximum connected domains in the image subjected to the inversion processing of the current image frame and in the image subjected to the inversion processing of at least one frame of image before and/or after the current image frame.
After determining the maximum connected component in the image after the image inversion processing of at least one frame of image before and/or after the current image frame in the video data, it is necessary to further determine an average value of the areas of the maximum connected component in the image after the image inversion processing of the current image frame and in the image after the image inversion processing of at least one frame of image before and/or after the current image frame.
The average value is the sum of all data in a set divided by the number of data in the set, and reflects the general condition and overall level of the set.
S213, judging whether the average value is larger than the preset area threshold value; if yes, go to step S209; if not, go to step S210.
After the average value of the areas of the maximum connected domains in the inverted image of the current image frame and in the inverted image of at least one frame of image before and/or after the current image frame is determined, the camera device is determined to be occluded if this average value is larger than the preset area threshold. Determining the average value of the areas of the maximum connected domains in this way rules out sudden changes, so that the judgment of the occlusion state of the camera device is not affected by such changes and its accuracy can be improved.
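Steps S211 to S213 could be expressed as the following sketch, where the per-frame areas are assumed to have been computed as in the previous example and the set of neighboring frames is chosen by the caller:

```python
def occlusion_confirmed(current_area, neighbor_areas, preset_area_threshold):
    """Confirm occlusion from the current frame and its neighbors.

    `current_area` and `neighbor_areas` are the areas of the maximum connected domains
    of the inverted current frame and of inverted frames before and/or after it.
    """
    areas = [current_area] + list(neighbor_areas)
    average = sum(areas) / len(areas)
    return average > preset_area_threshold
```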
S209, determining that the shielding detection result of the current image frame is that the camera is shielded.
S210, determining that the shielding detection result of the current image frame is that the camera device is not shielded.
Referring to fig. 6, a flowchart of a fourth method for detecting occlusion of an image capturing apparatus according to an embodiment of the present disclosure is shown, which is different from the method in fig. 1, and after step S104, the following step S105 is further included:
and S105, outputting first prompt information when the image pickup device is determined to be shielded.
For example, if it is determined that the camera device is occluded, first prompt information indicating that the camera device is occluded is output. This notifies the driver that the camera device is occluded so that the driver can clearly understand its occlusion state, deal with the occluded camera device, and pay attention to and adjust the driving behavior, which improves both driving safety and the driving experience.
The prompt message includes, but is not limited to, a voice prompt message, an audio (e.g., an alarm sound) prompt message, a text prompt message, and the like.
With reference to the above S105, referring to fig. 7, a flowchart of a method for outputting prompt information based on a determination result of an image according to an embodiment of the present disclosure is provided, where the method includes the following S1051 to S1052:
S1051, determining the continuous shielding time of the camera device according to the camera device detection result of each frame of image in the video data.
S1052, outputting the first prompt information when the continuous shielding time reaches a preset time.
For example, the continuous occlusion time of the camera device is determined according to the camera device detection result of each frame of image in the video data, and the first prompt information is output if the continuous occlusion time reaches the preset time. In this way, the first prompt information is output only when the camera device is continuously occluded, which reduces frequent output of prompt information.
In this embodiment, when the current image frame is determined to show that the camera device is occluded, it is further determined whether the judgment results for subsequent frame images are also occlusion. If at least one subsequent frame image determines that the camera device is not occluded, the current occlusion is only momentary, and the first prompt information is not output. If the subsequent frame images also determine that the camera device is occluded, that is, consecutive multiple frames all judge the camera device to be occluded, this indicates that the camera device is continuously and intentionally occluded rather than occluded briefly by mistake, and when the continuous occlusion time reaches a preset time (for example, 5 seconds), the first prompt information is output as a prompt.
In some possible embodiments, taking five frames of images as an example, if all the first four frames of images are occluded and the occlusion time is found to be 4s by accumulating the occlusion time, and the preset time is 3s, it is determined that the image capturing device is continuously occluded, and at this time, the first prompt information that the image capturing device is occluded needs to be output.
In other possible embodiments, again taking five frames of images as an example, if the first two frames are both determined to be occluded but the camera device is no longer occluded from the third frame onward, the accumulated occlusion time of the first two frames is 2 s while the preset time is 3 s, which indicates that the occlusion was only temporary, and the first prompt information is not output.
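The continuous-occlusion logic described in the examples above might be sketched as follows; the 3-second preset time, the frame timestamps, and the print placeholder for the first prompt are assumptions for illustration:

```python
class OcclusionMonitor:
    """Accumulate per-frame occlusion results and prompt after a preset duration."""

    def __init__(self, preset_seconds=3.0):
        self.preset_seconds = preset_seconds
        self.occluded_since = None   # timestamp at which continuous occlusion began

    def update(self, timestamp, frame_occluded):
        """Feed one per-frame detection result; return True when the prompt should fire."""
        if not frame_occluded:
            self.occluded_since = None            # occlusion was only momentary
            return False
        if self.occluded_since is None:
            self.occluded_since = timestamp
        if timestamp - self.occluded_since >= self.preset_seconds:
            print("First prompt: the camera device is occluded")   # placeholder output
            return True
        return False
```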
In some possible embodiments, when the scene area includes a vehicle driving area and it is detected according to the above method that the camera device is not occluded, the state information of the vehicle may further be acquired, and whether the vehicle is in a driving state at the time corresponding to the current image frame may be determined according to that state information. If the vehicle is determined to be in a driving state, then, since no face is detected in the current image frame and the camera device is not occluded, it can be considered that the driver has left the driving area or has turned his or her head toward the rear of the vehicle; in this case, second prompt information indicating that the driver has left the post can be generated, so that a driver leaving the driving area or facing the rear row while the vehicle is running can be accurately detected and prompted. Alternatively, if the vehicle is in a non-driving state, for example parked, the driver may legitimately leave the driving area; in that case, if it is determined that the camera device is not occluded and no face is present in the driving area, no warning or prompt needs to be issued. A simple decision sketch follows.
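In the sketch below, the boolean inputs are assumed to come from the face detection, occlusion detection, and vehicle state information discussed above; the prompt text is a placeholder:

```python
def second_prompt(face_detected, camera_occluded, vehicle_is_driving):
    """Return the driver-absence prompt text, or None when no prompt is needed."""
    if not face_detected and not camera_occluded and vehicle_is_driving:
        return "Second prompt: the driver has left the driving area or is facing the rear"
    return None
```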
It will be understood by those skilled in the art that, in the above methods of the specific embodiments, the order in which the steps are written does not imply a strict order of execution or constitute any limitation on the implementation; the specific order of execution of the steps should be determined by their functions and possible inherent logic.
Based on the same inventive concept, the embodiment of the present disclosure further provides a camera device occlusion detection device corresponding to the camera device occlusion detection method, and since the principle of the device in the embodiment of the present disclosure for solving the problem is similar to that of the camera device occlusion detection method in the embodiment of the present disclosure, the implementation of the device may refer to the implementation of the method, and repeated details are not repeated.
Referring to fig. 8, which is a schematic structural diagram of an occlusion detection device of an image capturing apparatus according to an embodiment of the present disclosure, the device 500 includes:
an obtaining module 501, configured to obtain video data of a scene area through a camera;
a detection module 502, configured to perform face detection on a current image frame, and determine a pixel average value of the current image frame when a face is not detected;
the processing module 503 is configured to perform negation processing on pixel values of pixel points in the current image frame under the condition that the pixel average value of the current image frame is lower than a preset reference threshold, so as to obtain an image after negation processing;
a determining module 504, configured to determine whether the image capturing apparatus is occluded based on the image after the negation processing.
In a possible implementation manner, the processing module 503 is specifically configured to:
determining a target pixel point from pixel points in the current image frame, wherein the pixel value of the target pixel point is greater than a first preset pixel value and less than a second preset pixel value;
and performing inversion processing on the pixel value of the target pixel point.
In one possible embodiment, the first preset pixel value and the second preset pixel value are determined by an imaging parameter of the imaging device.
In a possible implementation manner, the determining module 504 is specifically configured to:
determining a maximum connected domain in the image after the negation processing;
and under the condition that the area of the maximum connected domain is larger than a preset area threshold value, determining that the shielding detection result of the current image frame is that the camera device is shielded.
In a possible implementation manner, the determining module 504 is specifically configured to:
determining the maximum connected domain in the image after the inversion processing of at least one frame of image before and/or after the current image frame in the video data under the condition that the area of the maximum connected domain is larger than the preset area threshold;
determining an average value of areas of maximum connected domains in the image subjected to the inversion processing of the current image frame and in at least one image subjected to the inversion processing before and/or after the current image frame;
and determining that the shielding detection result of the current image frame is that the camera device is shielded under the condition that the average value is larger than the preset area threshold value.
Referring to fig. 9, in a possible embodiment, the apparatus further comprises:
and an output module 505, configured to output the first prompt information when it is determined that the image capturing apparatus is blocked.
In a possible implementation, the output module 505 is specifically configured to:
determining the continuous shielding time of the camera device according to the detection result of the camera device of each frame of image in the video data; and outputting the first prompt message under the condition that the shielding duration time reaches the preset time.
In a possible implementation, the detection module 502 is specifically configured to:
under the condition that a human face is not detected, carrying out noise reduction processing on the current image frame;
and determining the pixel average value of the current image frame after the noise reduction processing.
In one possible embodiment, the scene area comprises a driving area of the vehicle,
the obtaining module 501 is further configured to obtain status information of a vehicle when it is determined that the camera is not blocked;
the output module 505 is further configured to generate second prompt information when it is determined that the vehicle is in a driving state at a time corresponding to the current image frame according to the state information of the vehicle.
The description of the processing flow of each module in the device and the interaction flow between the modules may refer to the related description in the above method embodiments, and will not be described in detail here.
Based on the same technical concept, the embodiment of the application also provides the electronic equipment. Referring to fig. 10, a schematic structural diagram of an electronic device 700 provided in the embodiment of the present application includes a processor 701, a memory 702, and a bus 703. The memory 702 is used for storing execution instructions and includes a memory 7021 and an external memory 7022; the memory 7021 is also referred to as an internal memory and temporarily stores operation data in the processor 701 and data exchanged with an external memory 7022 such as a hard disk, and the processor 701 exchanges data with the external memory 7022 via the memory 7021.
In this embodiment, the memory 702 is specifically configured to store application program codes for executing the scheme of the present application, and is controlled by the processor 701 to execute. That is, when the electronic device 700 is operated, the processor 701 and the memory 702 communicate with each other via the bus 703, so that the processor 701 executes the application program code stored in the memory 702 to perform the method disclosed in any of the foregoing embodiments.
The Memory 702 may be, but is not limited to, a Random Access Memory (RAM), a Read-Only Memory (ROM), a Programmable Read-Only Memory (PROM), an Erasable Programmable Read-Only Memory (EPROM), an Electrically Erasable Programmable Read-Only Memory (EEPROM), and the like.
The processor 701 may be an integrated circuit chip having signal processing capabilities. The Processor may be a general-purpose Processor, including a Central Processing Unit (CPU), a Network Processor (NP), and the like; but may also be a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other programmable logic device, discrete gate or transistor logic device, discrete hardware components. The various methods, steps and logic blocks disclosed in the embodiments of the present invention may be implemented or performed. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
It is to be understood that the structure illustrated in the embodiment of the present application does not constitute a specific limitation on the electronic device 700. In other embodiments of the present application, the electronic device 700 may include more or fewer components than shown, combine certain components, split certain components, or arrange the components differently. The illustrated components may be implemented in hardware, software, or a combination of software and hardware.
An embodiment of the present disclosure further provides a computer-readable storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the camera device occlusion detection method in the above method embodiments are performed. The storage medium may be a volatile or non-volatile computer-readable storage medium.
The computer program product of the camera device occlusion detection method provided in the embodiments of the present disclosure includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to perform the steps of the camera device occlusion detection method in the above method embodiments, to which reference may be made for details that are not repeated here.
The computer program product may be implemented by hardware, software, or a combination thereof. In an alternative embodiment, the computer program product is embodied as a computer storage medium; in another alternative embodiment, the computer program product is embodied as a software product, such as a Software Development Kit (SDK).
It is clear to those skilled in the art that, for convenience and brevity of description, the specific working processes of the system and the apparatus described above may refer to the corresponding processes in the foregoing method embodiments and are not repeated here.
In the several embodiments provided in the present disclosure, it should be understood that the disclosed system, apparatus, and method may be implemented in other ways. The apparatus embodiments described above are merely illustrative; for example, the division of the units is only a logical division, and there may be other divisions in actual implementation; for example, multiple units or components may be combined or integrated into another system, or some features may be omitted or not executed. In addition, the illustrated or discussed mutual coupling, direct coupling, or communication connection may be an indirect coupling or communication connection through some communication interfaces, devices, or units, and may be electrical, mechanical, or in other forms.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
In addition, functional units in the embodiments of the present disclosure may be integrated into one processing unit, or each unit may exist alone physically, or two or more units are integrated into one unit.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present disclosure may be embodied in the form of a software product, which is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the methods according to the embodiments of the present disclosure. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above embodiments are merely specific implementations of the present disclosure, used to illustrate rather than limit its technical solutions, and the protection scope of the present disclosure is not limited thereto. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that any person familiar with the art may still modify the technical solutions described in the foregoing embodiments, readily conceive of changes to them, or make equivalent substitutions for some of their technical features within the technical scope of the present disclosure; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present disclosure and shall be covered by it. Therefore, the protection scope of the present disclosure shall be subject to the protection scope of the claims.

Claims (12)

1. A camera device occlusion detection method, comprising:
acquiring video data of a scene area through a camera device;
performing face detection on a current image frame in the video data, and determining a pixel average value of the current image frame under the condition that a face is not detected;
performing negation processing on pixel values of pixel points in the current image frame under the condition that the pixel average value of the current image frame is lower than a preset reference threshold value to obtain an image subjected to negation processing;
and determining whether the camera is blocked or not based on the image after the negation processing.
2. The method of claim 1, wherein the inverting the pixel values of the pixels in the current image frame comprises:
determining a target pixel point from pixel points in the current image frame, wherein the pixel value of the target pixel point is greater than a first preset pixel value and less than a second preset pixel value;
and performing inversion processing on the pixel value of the target pixel point.
3. The method according to claim 1, wherein the first preset pixel value and the second preset pixel value are determined by imaging parameters of the image capturing device.
4. The method of claim 1, wherein the determining whether the camera is occluded based on the inverted image comprises:
determining a maximum connected domain in the image after the negation processing;
and under the condition that the area of the maximum connected domain is larger than a preset area threshold value, determining that the shielding detection result of the current image frame is that the camera device is shielded.
5. The method according to claim 4, wherein the determining that the image capturing device is occluded as the occlusion detection result of the current image frame in the case that the area of the maximum connected component is larger than a preset area threshold value comprises:
determining the maximum connected domain in the image after the inversion processing of at least one frame of image before and/or after the current image frame in the video data under the condition that the area of the maximum connected domain is larger than the preset area threshold;
determining an average value of areas of the largest connected domains in the image subjected to the inversion processing of the current image frame and in the image subjected to the inversion processing of at least one frame of image before and/or after the current image frame;
and determining that the shielding detection result of the current image frame is that the camera device is shielded under the condition that the average value is larger than the preset area threshold value.
6. The method of claim 1, further comprising:
and outputting first prompt information under the condition that the camera device is determined to be blocked.
7. The method according to claim 6, wherein the outputting of the first prompt information in the case where it is determined that the image pickup apparatus is occluded comprises:
determining the continuous occlusion duration of the camera device according to the occlusion detection result of each image frame in the video data;
and outputting the first prompt information when the continuous occlusion duration reaches a preset duration.
8. The method of claim 1, wherein determining the pixel average of the current image frame if no face is detected comprises:
under the condition that a human face is not detected, carrying out noise reduction processing on the current image frame;
and determining the pixel average value of the current image frame after the noise reduction processing.
9. The method of any one of claims 1 to 8, wherein the scene area comprises a vehicle driving area, the method further comprising:
acquiring the state information of the vehicle under the condition that the camera device is determined not to be shielded;
and generating second prompt information under the condition that the vehicle is in a running state at the moment corresponding to the current image frame according to the state information of the vehicle.
10. A camera device occlusion detection device, comprising:
the acquisition module is used for acquiring video data of a scene area through the camera device;
the detection module is used for carrying out face detection on a current image frame in the video data and determining the pixel average value of the current image frame under the condition that a face is not detected;
the processing module is used for performing inversion processing on pixel values of pixel points in the current image frame under the condition that the pixel average value of the current image frame is lower than a preset reference threshold value to obtain an inverted image;
and the judging module is used for determining whether the camera device is blocked or not based on the image subjected to the negation processing.
11. An electronic device, comprising: a processor, a memory and a bus, the memory storing machine readable instructions executable by the processor, the processor and the memory communicating via the bus when the electronic device is running, the machine readable instructions when executed by the processor performing the camera occlusion detection method of any one of claims 1 to 9.
12. A computer-readable storage medium, having stored thereon a computer program which, when executed by a processor, performs the image pickup apparatus occlusion detection method according to any one of claims 1 to 9.
CN202111668682.XA 2021-12-31 2021-12-31 Camera device shielding detection method and device, electronic equipment and storage medium Pending CN114332721A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202111668682.XA CN114332721A (en) 2021-12-31 2021-12-31 Camera device shielding detection method and device, electronic equipment and storage medium
PCT/CN2022/124951 WO2023124387A1 (en) 2021-12-31 2022-10-12 Photographing apparatus obstruction detection method and apparatus, electronic device, storage medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111668682.XA CN114332721A (en) 2021-12-31 2021-12-31 Camera device shielding detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN114332721A true CN114332721A (en) 2022-04-12

Family

ID=81021373

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111668682.XA Pending CN114332721A (en) 2021-12-31 2021-12-31 Camera device shielding detection method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN114332721A (en)
WO (1) WO2023124387A1 (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117424988B (en) * 2023-12-15 2024-03-15 浙江大学台州研究院 Image processing system and processing method for intelligently managing welding machine

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP6052057B2 (en) * 2013-05-22 2016-12-27 ソニー株式会社 Signal processing device and signal processing method, solid-state imaging device, and electronic apparatus
CN111862228B (en) * 2020-06-04 2023-11-10 福瑞泰克智能系统有限公司 Occlusion detection method, system, computer device and readable storage medium
CN114332721A (en) * 2021-12-31 2022-04-12 上海商汤临港智能科技有限公司 Camera device shielding detection method and device, electronic equipment and storage medium

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5841903A (en) * 1992-01-17 1998-11-24 Yamaha Corporation Method and device for extracting a connected component of image data
WO2018076614A1 (en) * 2016-10-31 2018-05-03 武汉斗鱼网络科技有限公司 Live video processing method, apparatus and device, and computer readable medium
CN106599783A (en) * 2016-11-09 2017-04-26 浙江宇视科技有限公司 Video occlusion detection method and device
CN110557628A (en) * 2018-06-04 2019-12-10 杭州海康威视数字技术股份有限公司 Method and device for detecting shielding of camera and electronic equipment
CN110059634A (en) * 2019-04-19 2019-07-26 山东博昂信息科技有限公司 A kind of large scene face snap method
CN112927178A (en) * 2019-11-21 2021-06-08 中移物联网有限公司 Occlusion detection method, occlusion detection device, electronic device, and storage medium
CN110913209A (en) * 2019-12-05 2020-03-24 杭州飞步科技有限公司 Camera shielding detection method and device, electronic equipment and monitoring system
CN113705332A (en) * 2021-07-14 2021-11-26 深圳市有为信息技术发展有限公司 Method and device for detecting shielding of camera of vehicle-mounted terminal, vehicle-mounted terminal and vehicle

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2023124387A1 (en) * 2021-12-31 2023-07-06 上海商汤智能科技有限公司 Photographing apparatus obstruction detection method and apparatus, electronic device, storage medium, and computer program product
CN115861624A (en) * 2023-03-03 2023-03-28 天津所托瑞安汽车科技有限公司 Method, device and equipment for detecting shielding of camera and storage medium
CN116071724A (en) * 2023-03-03 2023-05-05 安徽蔚来智驾科技有限公司 Vehicle-mounted camera shielding scene recognition method, electronic equipment, storage medium and vehicle
CN115861624B (en) * 2023-03-03 2023-05-30 天津所托瑞安汽车科技有限公司 Method, device, equipment and storage medium for detecting occlusion of camera
CN116071724B (en) * 2023-03-03 2023-08-04 安徽蔚来智驾科技有限公司 Vehicle-mounted camera shielding scene recognition method, electronic equipment, storage medium and vehicle

Also Published As

Publication number Publication date
WO2023124387A1 (en) 2023-07-06

Similar Documents

Publication Publication Date Title
CN114332721A (en) Camera device shielding detection method and device, electronic equipment and storage medium
KR101271092B1 (en) Method and apparatus of real-time segmentation for motion detection in surveillance camera system
JP6762344B2 (en) Methods and systems to track the position of the face and alert the user
CN112329719B (en) Behavior recognition method, behavior recognition device and computer-readable storage medium
CN110913209B (en) Camera shielding detection method and device, electronic equipment and monitoring system
CN113487521A (en) Self-encoder training method and component, abnormal image detection method and component
CN113869137A (en) Event detection method and device, terminal equipment and storage medium
CN111444788A (en) Behavior recognition method and device and computer storage medium
US10115028B2 (en) Method and device for classifying an object in an image
CN113542868A (en) Video key frame selection method and device, electronic equipment and storage medium
CN111696064B (en) Image processing method, device, electronic equipment and computer readable medium
WO2023124385A1 (en) Photographic apparatus shielding detection method and apparatus, and electronic device, storage medium and computer program product
CN110765875B (en) Method, equipment and device for detecting boundary of traffic target
WO2012081969A1 (en) A system and method to detect intrusion event
CN113239738B (en) Image blurring detection method and blurring detection device
CN113421317B (en) Method and system for generating image and electronic equipment
CN112419635B (en) Perimeter alarm method integrating grating and video
Mustafah et al. Face detection system design for real time high resolution smart camera
CN114495252A (en) Sight line detection method and device, electronic equipment and storage medium
CN111275045B (en) Image main body recognition method and device, electronic equipment and medium
CN115393782A (en) Image processing method, image processing device, electronic equipment and storage medium
KR20220027769A (en) A protection method of privacy using contextual blocking
CN113128505A (en) Method, device, equipment and storage medium for detecting local visual confrontation sample
CN112070954A (en) Living body identification method, living body identification device, living body identification equipment and storage medium
CN111950502B (en) Obstacle object-based detection method and device and computer equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
REG Reference to a national code

Ref country code: HK

Ref legal event code: DE

Ref document number: 40063413

Country of ref document: HK