CN112561874A - Blocking object detection method and device and monitoring camera - Google Patents


Info

Publication number: CN112561874A
Application number: CN202011460219.1A
Authority: CN (China)
Original language: Chinese (zh)
Inventors: 邵响, 苏星, 沈林杰, 浦世亮
Original/Current Assignee: Hangzhou Hikvision Digital Technology Co Ltd
Application filed by: Hangzhou Hikvision Digital Technology Co Ltd
Priority: CN202011460219.1A
Legal status: Pending
Prior art keywords: depth, camera, image, tof, monitoring camera


Classifications

    • G06T7/0002 Image analysis — Inspection of images, e.g. flaw detection
    • G06T7/55 Image analysis — Depth or shape recovery from multiple images
    • G06T7/90 Image analysis — Determination of colour characteristics
    • H04N23/55 Cameras or camera modules — Optical parts specially adapted for electronic image sensors; mounting thereof
    • H04N5/2624 Studio circuits for obtaining an image composed of whole input images, e.g. splitscreen
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G06T2207/10016 Image acquisition modality — Video; image sequence
    • G06T2207/10048 Image acquisition modality — Infrared image

Abstract

Embodiments of the invention provide a method and apparatus for detecting an obstruction, and a monitoring camera. The scheme is as follows: a depth image and a grayscale image collected by each of N TOF cameras at the current moment are acquired, where the pixels of the depth image and the grayscale image correspond one-to-one and the value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image. For the depth image collected by each TOF camera, if the depth image contains a pixel whose depth value lies within a depth range to be detected and whose confidence is greater than a preset confidence threshold, it is determined that an obstruction exists in front of the monitoring camera; the depth range to be detected is determined based on the stitched field angle. The technical scheme provided by the embodiments of the invention reduces the computational time and space complexity of the obstruction detection process and effectively improves the accuracy of the obstruction detection result.

Description

Blocking object detection method and device and monitoring camera
Technical Field
The invention relates to the technical field of security monitoring, and in particular to a method and apparatus for detecting an obstruction, and a monitoring camera.
Background
With the popularization of security monitoring systems, large numbers of security monitoring devices such as monitoring cameras have been installed in public places such as residential communities, retirement homes, hospitals, office buildings, factories and hotels. However, because a camera is exposed outdoors for long periods without maintenance, its lens may be blocked by obstructions such as small winged insects, branches, leaves and cables, so that the security monitoring device can no longer acquire images normally. Obstruction detection is therefore an indispensable step in the normal operation of security monitoring equipment.
In the related art, an RGB (Red Green Blue) camera can be used to build an RGB background model from the collected scene information, and the difference between foreground and background is then analyzed to detect whether an obstruction exists in front of the camera. However, because the RGB background model is built from RGB data alone, the distance from the foreground to the camera cannot be accurately determined, so it cannot be accurately judged whether a foreground-background difference is caused by the camera lens being blocked, which affects the accuracy of the obstruction detection result. In addition, building the RGB background model increases the computational time and space complexity of the obstruction detection process.
Disclosure of Invention
The embodiments of the invention aim to provide a method and apparatus for detecting an obstruction, and a monitoring camera, so as to reduce the computational time and space complexity of the obstruction detection process and improve the accuracy of the obstruction detection result. The specific technical scheme is as follows:
An embodiment of the invention provides an obstruction detection method applied to a monitoring camera, wherein the monitoring camera comprises a camera head and N Time-of-Flight (TOF) cameras, the stitched field angle of the N TOF cameras covers the field angle of the camera head, and the stitched field angle is obtained by stitching the field angles of the N TOF cameras. The method comprises the following steps:
acquiring a depth image and a grayscale image collected by each of the N TOF cameras at the current moment, wherein the pixels of the depth image and the grayscale image correspond one-to-one, and the value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image;
for the depth image collected by each TOF camera, if the depth image contains a pixel whose depth value lies within a depth range to be detected and whose confidence is greater than a preset confidence threshold, determining that an obstruction exists in front of the monitoring camera; wherein the depth range to be detected is determined based on the stitched field angle.
Optionally, the method further includes:
and if no pixel whose depth value lies within the depth range to be detected exists in the depth image collected by any of the TOF cameras, determining that no obstruction exists in front of the monitoring camera.
Optionally, before obtaining the depth image and the grayscale image acquired by the N TOF cameras at the current time, the method further includes:
and synchronizing the image acquisition time of the monitoring camera and the image acquisition time of the N TOF cameras.
Optionally, the method further includes:
acquiring a monitoring image acquired by the monitoring camera at the current moment;
and when it is determined that an obstruction exists in front of the monitoring camera, discarding the monitoring image acquired at the same moment.
Optionally, the step of discarding the monitoring image acquired at the same moment includes:
and if, across the depth images collected by the N TOF cameras, the number of pixels whose depth values lie within the depth range to be detected and whose corresponding confidences are greater than the preset confidence threshold is greater than a preset number threshold, discarding the monitoring image acquired at the same moment.
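As a minimal sketch of this counting rule (the function name, NumPy representation and all threshold values below are illustrative assumptions, not part of the patent):

```python
import numpy as np

def should_discard_frame(depth_images, gray_images,
                         depth_range=(2.0, 300.0),  # assumed detection range, mm
                         conf_threshold=50,         # assumed confidence threshold
                         count_threshold=3):        # assumed pixel-count threshold
    """Count, across the depth images of all N TOF cameras, the pixels whose
    depth lies in the range to be detected and whose co-located grayscale
    (confidence) value exceeds the threshold; the monitoring image acquired
    at the same moment is discarded when that count exceeds count_threshold."""
    lo, hi = depth_range
    count = 0
    for depth, conf in zip(depth_images, gray_images):
        qualifying = (depth >= lo) & (depth <= hi) & (conf > conf_threshold)
        count += int(np.sum(qualifying))
    return count > count_threshold
```

The count is accumulated over all N cameras before the single comparison, matching the claim's phrasing of one number threshold for the whole set of depth images.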
Optionally, in the depth image/the grayscale image, the size of the confidence corresponding to the same depth value is directly proportional to the size of the object in the region corresponding to the depth value in the acquired scene.
Optionally, the method further includes:
and when it is determined that an obstruction exists in front of the monitoring camera, generating a prompt message for the obstruction.
Optionally, the N TOF cameras include one or more of a single-point direct Time of Flight (dTOF) camera, a linear array dTOF camera, and a surface array dTOF camera.
An embodiment of the invention further provides an obstruction detection apparatus applied to a monitoring camera, wherein the monitoring camera comprises a camera head and N TOF cameras, the stitched field angle of the N TOF cameras covers the field angle of the camera head, and the stitched field angle is obtained by stitching the field angles of the N TOF cameras. The apparatus comprises:
a first acquisition module, configured to acquire a depth image and a grayscale image collected by each of the N TOF cameras at the current moment, wherein the pixels of the depth image and the grayscale image correspond one-to-one, and the value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image;
a first determination module, configured to determine, for the depth image collected by each TOF camera, that an obstruction exists in front of the monitoring camera if the depth image contains a pixel whose depth value lies within a depth range to be detected and whose confidence is greater than a preset confidence threshold; wherein the depth range to be detected is determined based on the stitched field angle.
An embodiment of the invention further provides a monitoring camera comprising a camera head, a main control unit and N TOF cameras; the stitched field angle of the N TOF cameras covers the field angle of the camera head, and the stitched field angle is obtained by stitching the field angles of the N TOF cameras;
the camera head is configured to collect a monitoring image at the current moment;
the N TOF cameras are configured to collect a depth image and a grayscale image at the current moment, wherein the pixels of the depth image and the grayscale image correspond one-to-one, and the value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image;
the main control unit is configured to acquire the monitoring image, the depth images and the grayscale images, and, for the depth image collected by each TOF camera, to determine that an obstruction exists in front of the monitoring camera if the depth image contains a pixel whose depth value lies within a depth range to be detected and whose confidence is greater than a preset confidence threshold; wherein the depth range to be detected is determined based on the stitched field angle.
Optionally, the monitoring camera is an RGB camera, an Infrared (IR) camera, or a thermal imaging camera.
Optionally, the main control unit synchronizes image acquisition time of the monitoring camera and the N TOF cameras through a General-purpose input/output port (GPIO).
The embodiment of the invention has the following beneficial effects:
According to the obstruction detection method and apparatus and the monitoring camera provided by the embodiments of the invention, the N TOF cameras included in the monitoring camera can collect depth images and grayscale images, and when a depth image contains a pixel whose depth value lies within the depth range to be detected and whose confidence is greater than the preset confidence threshold, it is determined that an obstruction exists in front of the monitoring camera. Compared with the related art, because the stitched field angle obtained by stitching the field angles of the N TOF cameras covers the field angle of the monitoring camera, whether an obstruction exists within the depth range to be detected, which is determined based on the stitched field angle, can be detected directly from the depth value and confidence of each pixel in the depth images; that is, whether an obstruction exists in front of the monitoring camera is detected accurately. This avoids building the RGB background model required in the related art, effectively shortens the detection time, saves the memory of the monitoring camera, and thus reduces the computational time and space complexity of the obstruction detection process. Moreover, because a TOF camera has a large measurement dynamic range and high measurement accuracy, the accuracy of the obstruction detection result is effectively improved.
Of course, not all of the advantages described above need to be achieved at the same time in the practice of any one product or method of the invention.
Drawings
In order to more clearly illustrate the embodiments of the present invention and the technical solutions of the prior art, the drawings needed for describing the embodiments and the prior art are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present invention, and those skilled in the art can obtain other drawings from these drawings without creative effort.
FIG. 1-a is a schematic view of a surveillance camera provided in accordance with an embodiment of the present invention;
FIG. 1-b is a schematic diagram of a relationship between a monitored image and a depth image according to an embodiment of the present invention;
FIG. 1-c is a first schematic view of a stitched field angle provided by an embodiment of the present invention;
FIG. 1-d is a second schematic view of a stitched field angle provided by an embodiment of the present invention;
FIG. 2 is a first schematic flow chart of a method for detecting an obstruction according to an embodiment of the present invention;
FIG. 3 is a second flowchart of a method for detecting an obstruction according to an embodiment of the invention;
FIG. 4 is a third schematic flow chart of a method for detecting an obstruction according to an embodiment of the invention;
FIG. 5 is a fourth flowchart illustrating a method for detecting an obstruction according to an embodiment of the invention;
FIG. 6 is a schematic flow chart of a method for detecting an obstruction according to an embodiment of the present invention;
fig. 7-a is a schematic view of a first structure of a monitoring camera according to an embodiment of the present invention;
FIG. 7-b is a signaling diagram of an obstruction detection process based on the surveillance camera shown in FIG. 7-a;
FIG. 8 is a schematic structural diagram of an obstruction detection device according to an embodiment of the invention;
fig. 9 is a schematic diagram of a second structure of the monitoring camera according to the embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
When an RGB camera is used for obstruction detection in the related art, building the RGB background model requires a large amount of computation and occupies considerable memory, which increases the computational time and space complexity of the obstruction detection process.
In addition, because depth information is missing from the RGB data used to build the RGB background model, it cannot be accurately determined during detection whether a difference between foreground and background is caused by the camera lens being blocked, which affects the accuracy of the obstruction detection result.
For example, consider a scene whose background is black, into which a black cat enters at some moment. The difference between the foreground and the background of the scene is then small. If the black cat blocks the monitoring camera, the related-art detection method cannot reliably determine that an obstruction exists, precisely because the foreground-background difference is small, so the detection result is prone to error.
For another example, suppose that a large carton stands far from the monitoring camera when the background model is built, and that at some moment a small carton is placed close to the camera. In the image collected by the camera, the small carton overlaps the large one, so the camera cannot reliably recognize the small carton in the collected scene, let alone determine whether the small carton blocks the camera. Although two cartons exist in the collected scene, the camera cannot determine the distances from the foreground and background cartons to itself, so an obstruction detection result determined from the foreground-background difference is inaccurate.
To address the high computational time and space complexity and the poor accuracy of obstruction detection in the related art, an embodiment of the invention provides an obstruction detection method. The method is applied to a monitoring camera that comprises a camera head and N TOF cameras, where the stitched field angle of the N TOF cameras covers the field angle of the camera head and is obtained by stitching the field angles of the N TOF cameras. In the method, a depth image and a grayscale image collected by each of the N TOF cameras at the current moment are acquired; the pixels of each depth image and its grayscale image correspond one-to-one, and the value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image. For the depth image collected by each TOF camera, if the depth image contains a pixel whose depth value lies within a depth range to be detected and whose confidence is greater than a preset confidence threshold, it is determined that an obstruction exists in front of the monitoring camera; the depth range to be detected is determined based on the stitched field angle.
According to the method provided by the embodiment of the invention, the N TOF cameras included in the monitoring camera can collect depth images and grayscale images, and when a depth image contains a pixel whose depth value lies within the depth range to be detected and whose confidence is greater than the preset confidence threshold, it is determined that an obstruction exists in front of the monitoring camera. Compared with the related art, because the stitched field angle obtained by stitching the field angles of the N TOF cameras covers the field angle of the monitoring camera, whether an obstruction exists within the depth range to be detected, which is determined based on the stitched field angle, can be detected directly from the depth value and confidence of each pixel in the depth images; that is, whether an obstruction exists in front of the monitoring camera is detected accurately. This avoids building the RGB background model required in the related art, effectively shortens the detection time, saves the memory of the monitoring camera, and thus reduces the computational time and space complexity of the obstruction detection process. Moreover, because a TOF camera has a large measurement dynamic range and high measurement accuracy, the accuracy of the obstruction detection result is effectively improved.
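The per-camera decision rule described above can be sketched as follows. This is a hypothetical illustration, not the patent's implementation: the function name, NumPy representation, depth range and confidence threshold are all assumptions.

```python
import numpy as np

def obstruction_present(depth_images, gray_images,
                        depth_range=(2.0, 300.0),  # assumed range to be detected, mm
                        conf_threshold=50):        # assumed confidence threshold
    """For each TOF camera, report an obstruction as soon as any pixel's depth
    falls within the range to be detected while the co-located grayscale
    (confidence) value exceeds the threshold."""
    lo, hi = depth_range
    for depth, conf in zip(depth_images, gray_images):
        in_range = (depth >= lo) & (depth <= hi)
        confident = conf > conf_threshold
        if np.any(in_range & confident):
            return True   # obstruction in front of the monitoring camera
    return False          # no qualifying pixel in any depth image
```

Note that no background model is maintained: the decision is a pair of element-wise comparisons per frame, which is the source of the time- and space-complexity reduction claimed above.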
In the embodiment of the present invention, the monitoring camera may specifically include a monitoring camera, N TOF cameras, and a main control unit. The master control unit may include a processor and a memory, among other things.
The N TOF cameras may be dTOF cameras or indirect Time-of-Flight (iTOF) cameras. Taking dTOF cameras as an example, the N TOF cameras may include one or more of a single-point dTOF camera, a linear array dTOF camera and a surface array dTOF camera. The embodiment of the present invention places no particular limitation on the N TOF cameras. For convenience of description, each TOF camera is taken to be a single-point dTOF camera in the illustrations below; this does not constitute a limitation.
The measuring range of the TOF camera can be several millimeters to several thousand millimeters, and taking a commonly used single-point dTOF camera as an example, the measuring range of the single-point dTOF camera can be generally from 2 millimeters (mm) to 2000mm, and the measuring precision can reach millimeter level. Because the measuring range and the measuring accuracy of the TOF camera are superior to those of the traditional depth camera, the images collected by the N TOF cameras are adopted for detecting the shielding object, and the accuracy of the detection result of the shielding object can be effectively improved.
In the embodiment of the invention, each TOF camera has a corresponding field angle. Stitching the field angles corresponding to the N TOF cameras yields the stitched field angle of the N TOF cameras, which covers the field angle of the camera head included in the monitoring camera.
In an optional embodiment, when N = 1, the stitched field angle of the N TOF cameras is simply the field angle of that single TOF camera.
In another optional embodiment, when N > 1, the stitched field angle of the N TOF cameras may be obtained by stitching the field angles corresponding to the individual TOF cameras.
For ease of understanding, fig. 1-a, 1-b, 1-c and 1-d are given as examples. Fig. 1-a is a schematic view of a monitoring camera according to an embodiment of the present invention. Fig. 1-b is a schematic diagram of a relationship between a monitored image and a depth image according to an embodiment of the present invention. Fig. 1-c are first schematic diagrams of a splicing viewing angle provided by an embodiment of the present invention, and fig. 1-d are second schematic diagrams of a splicing viewing angle provided by an embodiment of the present invention.
As shown in fig. 1-a, in the monitoring camera, a plurality of single-point dTOF cameras, namely single-point dTOF camera 1 through single-point dTOF camera 4, may be arranged around the camera head. By deploying the N TOF cameras around the camera head, the stitched field angle of the N TOF cameras covers the field angle of the camera head, the stitched field angle being obtained by stitching the field angles of the N TOF cameras. When the stitched field angle covers the field angle of the camera head, each of a number of preset regions in the image collected by the camera head has corresponding pixels in the images collected by the N TOF cameras, so that an obstruction determined from the depth images and grayscale images collected by the N TOF cameras can be indicated accurately, improving the accuracy of detecting an obstruction in front of the monitoring camera.
Specifically, taking the case where the N TOF cameras are single-point dTOF cameras, the correspondence is described with reference to fig. 1-b. In fig. 1-b, image 1 is the image collected by the camera head, i.e. the monitoring image; region 2 is a region of preset size in image 1; and pixel point 3 is a pixel in the depth image collected by one of the dTOF cameras. As shown in fig. 1-b, region 2 in image 1 is represented by pixel point 3 in the depth image collected by that dTOF camera, i.e. region 2 corresponds to pixel point 3. Therefore, according to the depth value of pixel point 3, the monitoring camera can accurately determine whether an obstruction exists in the part of the collected scene corresponding to region 2, improving the accuracy of obstruction detection.
For ease of understanding, the above-described splicing viewing angles will be described with reference to fig. 1-c and 1-d. In fig. 1-c, the field angle a is the field angle of the monitoring camera and the diameter D is the diameter of the monitoring camera. And the TOF camera 1 and the TOF camera 2 are deployed beside the monitoring camera respectively. The field angle b is the field angle of the TOF camera 1, and the field angle c is the field angle of the TOF camera 2. The spliced angle of view corresponding to the angle of view b and the angle of view c covers the angle of view a.
As shown in fig. 1-d, when the field angle 101 is the field angle of one TOF camera and the field angle 102 is the field angle of another TOF camera, the field angle 103 may be the field angle of the joint of the field angles of the two TOF cameras.
In the embodiment of the present invention, owing to constraints on the deployment positions of the camera head and the individual TOF cameras, the stitched field angle need not cover the field angle of the camera head completely; that is, it suffices for the stitched field angle to cover the field angle within an allowable error range, for example within a preset error range.
The following examples illustrate the present invention.
As shown in fig. 2, fig. 2 is a first schematic flow chart of the obstruction detection method according to the embodiment of the invention. The method is applied to the monitoring camera and specifically comprises the following steps.
Step S201, obtaining a depth image and a grayscale image collected by N TOF cameras at the current time, where pixel points in the depth image and the grayscale image correspond to each other one to one, and a pixel value of each pixel point in the grayscale image represents a confidence of a depth value of a corresponding pixel point in the depth image.
In this step, for each TOF camera in the monitoring camera, the TOF camera may perform image acquisition at the current time to obtain a set of images, i.e., a depth image and a grayscale image. The main control unit in the monitoring camera can acquire the depth image and the gray image acquired at the current moment from each TOF camera.
In an optional embodiment, when the depth images and the grayscale images acquired by the N TOF cameras are acquired, the main control unit in the monitoring camera may actively acquire the depth images and the grayscale images from each TOF camera.
In another optional embodiment, when the depth images and the grayscale images acquired by the N TOF cameras are acquired, each TOF camera may actively transmit the acquired depth images and grayscale images to the main control unit. And the main control unit receives the depth image and the gray image sent by each TOF camera.
In an optional embodiment, the N TOF cameras may include one or more of a single-point dTOF camera, a linear array dTOF camera, and a surface array dTOF camera.
For ease of understanding, take a single-point dTOF camera as an example. The single-point dTOF camera emits a light pulse and obtains distance directly by measuring the time interval between the emitted light pulse and the reflected light pulse, thereby producing the depth image. The value of each pixel in the depth image represents the distance from an object in the collected scene to the TOF camera, and is recorded as a depth value.
The single-point dTOF camera generates the grayscale image from the intensity of the received reflected light pulse.
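The round-trip timing admits a short worked example (the constant and function name are illustrative assumptions): because the pulse travels to the object and back, the one-way distance is c·Δt/2.

```python
C_MM_PER_NS = 299.792458  # speed of light in millimetres per nanosecond

def dtof_depth_mm(round_trip_ns):
    """Convert a measured round-trip pulse time (ns) into a one-way
    distance (mm): the pulse covers the camera-object gap twice."""
    return C_MM_PER_NS * round_trip_ns / 2.0
```

A round trip of roughly 13.34 ns thus corresponds to about 2000 mm, the upper end of the single-point dTOF measuring range quoted above.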
The pixels in the depth image and the grayscale image collected by each TOF camera correspond one-to-one. Taking the depth image and grayscale image collected by one TOF camera as an example, suppose the size of the depth image is 60×60; the size of the grayscale image collected by that TOF camera is then also 60×60. The value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image. For example, the value of the pixel in row 1, column 5 of the grayscale image represents the confidence of the depth value of the pixel in row 1, column 5 of the depth image.
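Because the two images are the same size and correspond pixel-by-pixel, a depth value and its confidence live at identical coordinates, so low-confidence depths can be masked out with one element-wise operation. The sketch below is illustrative; the function name and threshold are assumptions, not the patent's code.

```python
import numpy as np

def reliable_depths(depth_img, gray_img, conf_threshold=50):
    """Return a copy of the depth image in which any pixel whose co-located
    confidence (grayscale value) does not exceed the threshold is set to NaN."""
    assert depth_img.shape == gray_img.shape  # the images correspond one-to-one
    out = depth_img.astype(float)             # astype makes a copy
    out[gray_img <= conf_threshold] = np.nan
    return out
```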
In an alternative embodiment, the confidence may be used to indicate the reliability of the depth value, and is proportional to the intensity of the received reflected light pulse: the greater the intensity of the received reflected light pulse, the greater the confidence value and the more reliable the depth value; the smaller the intensity, the smaller the confidence value and the less reliable the depth value.
In the embodiment of the invention, the transmitted light pulse and the reflected light pulse are not influenced by ambient light, so that the N TOF cameras can normally work indoors and even outdoors under strong light, and the deployment flexibility of the monitoring camera is improved.
In another alternative embodiment, the confidence level may also indicate the size of the object in the captured scene. In the depth image/grayscale image, the confidence level corresponding to the same depth value is proportional to the size of the object in the region corresponding to the depth value in the captured scene.
Because the light pulse is not affected by ambient light, when each TOF camera generates a grayscale image from the received reflected light pulses, the pixel value of each pixel point in the grayscale image, that is, the confidence, is mainly affected by factors such as the intensity of the reflected light, the distance from the object in the captured scene to the TOF camera, and the material of the object. Therefore, the confidence corresponding to the same depth value in the grayscale image can, to some extent, reflect the size of the object in the captured scene.
For ease of understanding, the description refers to fig. 1-b above as an example. Assume that a large sheet of paper (denoted as sheet A) is captured in a certain region (denoted as region A) of image 1 shown in fig. 1-b, where region A corresponds to pixel point A in the grayscale image, and a small sheet of paper (denoted as sheet B) is captured in another region (denoted as region B) of image 1, where region B corresponds to pixel point B in the grayscale image. Suppose the depth values of the pixel points corresponding to pixel points A and B in the depth image collected by the TOF camera are the same, or differ only within an allowable error range. Since the area of sheet A is obviously larger than that of sheet B, the reflected light pulses returned by sheet A are obviously more numerous than the reflected light pulses returned by sheet B, so the pixel value of pixel point A is larger than that of pixel point B. That is, the confidence of the depth value of the pixel corresponding to pixel point A in the depth image is greater than the confidence of the depth value of the pixel corresponding to pixel point B.
Step S202, aiming at the depth image acquired by each TOF camera, if a pixel point with a depth value within a depth range to be detected exists in the depth image acquired by the TOF camera and the confidence coefficient of the depth value of the pixel point is greater than a preset confidence coefficient threshold value, determining that a shielding object exists in front of a monitoring camera; wherein the depth range to be detected is determined based on the stitching field angle.
In this step, after the master control unit of the monitoring camera acquires the depth image and the grayscale image acquired by each TOF camera, it may determine whether a pixel point exists in the depth image, where the depth value is within the to-be-detected depth range and the confidence of the depth value is greater than the preset confidence threshold, according to the depth value of each pixel point in the depth image and the confidence of the depth value of each pixel point, that is, the confidence of the corresponding pixel point of each pixel point in the grayscale image. And if so, determining that a shelter exists in front of the monitoring camera.
In an optional embodiment, when the N TOF cameras are one TOF camera, the main control unit may acquire only one group of acquired images acquired by the TOF camera, that is, a depth image and a grayscale image. At this time, when there is a pixel point whose depth value is within the depth range to be detected and whose confidence of the depth value is greater than the preset confidence threshold in the depth image, the main control unit may determine that there is a blocking object in front of the monitoring camera.
In another optional embodiment, when the N TOF cameras are multiple TOF cameras, the main control unit may acquire a group of acquired images acquired by each TOF camera, that is, the main control unit acquires multiple groups of acquired images. At this time, when a pixel point exists in any depth image, the depth value of which is within the range of the depth to be detected, and the confidence degree of which is greater than the preset confidence degree threshold value, the main control unit can determine that a blocking object exists in front of the monitoring camera.
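The decision in steps S201-S202 can be sketched as follows. This is a minimal illustration: the depth range, confidence threshold, and array shapes are invented for the example, not values from the patent.

```python
import numpy as np

def occlusion_present(depth_maps, gray_maps, depth_range=(2, 100), conf_thresh=50):
    """Return True if any TOF camera's depth image contains a pixel whose
    depth value lies within the depth range to be detected AND whose
    confidence (the paired grayscale value) exceeds the threshold."""
    lo, hi = depth_range
    for depth, gray in zip(depth_maps, gray_maps):
        mask = (depth >= lo) & (depth <= hi) & (gray > conf_thresh)
        if mask.any():      # any qualifying pixel in any camera suffices
            return True
    return False
```

With multiple TOF cameras, `depth_maps`/`gray_maps` simply hold one pair per camera; a single qualifying pixel in any pair reports an obstruction, matching the "any depth image" wording above.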
In the embodiment of the present invention, the depth image is calculated from the grayscale image. The depth value of each pixel in the depth image is a 16-bit number, of which only the lower 12 bits are effective, giving a value range of 0-4095 in millimeters, i.e., a nominal measurement range of 0-4095 mm. However, due to limitations on the intensity and pulse frequency of the emitted light pulses, the actual measurement range of the TOF camera may be 0-2000 mm, i.e., the measurement range described above. For ease of understanding, take an object in a scene as an example: the farther the object is from the TOF camera, the larger its depth value in the depth image and the smaller the corresponding gray value; the closer the object, the smaller its depth value and the larger the corresponding gray value. The acquisition of the depth image will not be described in detail here.
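Extracting the effective 12-bit depth from the 16-bit raw value described above can be sketched with a simple mask (the function name is ours, not the patent's):

```python
def effective_depth_mm(raw16):
    """Per the text, only the low 12 bits of the 16-bit raw depth value
    are effective, giving a depth of 0-4095 mm."""
    return raw16 & 0x0FFF  # keep the lower 12 bits, drop the upper 4
```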
In the embodiment of the invention, the depth range to be detected may be determined according to the stitching field angle of the N TOF cameras, or set according to the installation position of the monitoring camera, user requirements, and the like. Taking the depth range to be detected as a range of distances from an object in the captured scene to the TOF camera as an example, suppose a monitoring camera is installed beside a road. If there is no object around the monitoring camera that could cause occlusion, such as trees or billboards, the depth range to be detected may be set to a larger range, such as 10 mm-1000 mm. If there is an object around the monitoring camera that could cause occlusion, for example a nearby tree whose branches and leaves may block the view, the depth range to be detected may be set to a smaller range, such as 2 mm-100 mm. The depth range to be detected is not specifically limited here.
In the embodiment of the present invention, when there is a pixel point whose depth value is within the depth range to be detected in the depth image, the main control unit of the monitoring camera may determine that a blocking object exists in front of the monitoring camera. However, the detected blocking object may be one of relatively small volume, such as dust, so the influence of such small blocking objects on the detection result can be eliminated by setting the preset confidence threshold. Specifically, the confidence of the depth value of each pixel point in the depth image reflects, to a certain extent, the size of the corresponding object in the captured scene: the greater the confidence of the depth value of a pixel point at a given depth, the larger the blocking object in the captured scene corresponding to that pixel point is likely to be; the smaller the confidence, the smaller the blocking object is likely to be. Therefore, a larger preset confidence threshold can accurately exclude blocking objects of relatively small volume, thereby improving the accuracy of blocking object detection.
In the embodiment of the present invention, the depth range to be detected and the preset confidence threshold may be set according to a user requirement, and are not specifically limited herein.
By the method shown in fig. 2, a depth image and a grayscale image can be acquired by the N TOF cameras included in the monitoring camera, and when the depth image contains a pixel point whose depth value is within the depth range to be detected and whose confidence is greater than the preset confidence threshold, it is determined that an obstruction exists in front of the monitoring camera. Compared with the related art, since the stitching field angle obtained by stitching the field angles of the N TOF cameras covers the field angle of the monitoring camera, whether an obstruction exists within the depth range to be detected, determined based on the stitching field angle, can be detected accurately and directly from the depth value of each pixel point and the confidence of that depth value; that is, whether an obstruction exists in front of the monitoring camera is detected accurately. This avoids the process of building an RGB background model in the related art, effectively shortens the detection time required by the obstruction detection process, saves the memory of the monitoring camera, and reduces the computational time complexity and space complexity of the obstruction detection process. Moreover, because the TOF camera has a large measurement dynamic range and high measurement precision, the accuracy of the obstruction detection result is effectively improved.
In the embodiment of the present invention, when the field angles of the N TOF cameras are stitched, in order to make the stitching field angle cover the field angle of the monitoring camera as completely as possible, the stitching field angle may be larger than the field angle of the monitoring camera. As a result, the TOF cameras may capture locations that are not captured by the monitoring camera. Therefore, after the main control unit determines that an obstruction exists in front of the monitoring camera, in order to further improve the accuracy of obstruction detection, it can further determine whether the obstruction lies within the field angle of the monitoring camera.
In an optional embodiment, the main control unit may stitch the images collected by each TOF camera according to the field angle of the monitoring camera, so as to obtain a stitched image corresponding to the field angle of the monitoring camera. If a pixel point whose depth value is within the depth range to be detected is detected in the stitched image, and the confidence corresponding to that pixel point is greater than the preset confidence threshold, it is determined that an obstruction exists in front of the monitoring camera. For the specific determination method, reference may be made to the detection method for obstructions in front of the monitoring camera described above, which is not repeated here.
In another optional embodiment, after determining that a blocking object exists in front of the monitoring camera, the main control unit may determine whether the blocking object is within the field angle range of the monitoring camera according to the field angle of the monitoring camera. If yes, determining that a shielding object exists in front of the monitoring camera. If not, determining that no shielding object exists in front of the monitoring camera.
In the embodiment of the invention, because the coverage range of the spliced field angle may be larger than that of the field angle of the monitoring camera, when the main control unit detects the shielding object according to the acquired depth image and gray image collected by each TOF camera, the main control unit can ignore the parts of the depth image and the gray image which exceed the coverage range of the field angle of the monitoring camera, so that the accuracy of the detection result of the shielding object is improved.
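The idea of ignoring the parts of the images beyond the monitoring camera's field angle can be sketched as a crop applied before thresholding. The crop bounds and threshold values below are illustrative assumptions, not values from the patent:

```python
import numpy as np

def occlusion_in_fov(depth, gray, fov_rows, fov_cols,
                     depth_range=(2, 100), conf_thresh=50):
    """Check for an obstruction only inside the sub-region of the image
    that overlaps the monitoring camera's field angle."""
    r0, r1 = fov_rows
    c0, c1 = fov_cols
    d = depth[r0:r1, c0:c1]   # discard pixels outside the monitoring FOV
    g = gray[r0:r1, c0:c1]
    lo, hi = depth_range
    return bool(((d >= lo) & (d <= hi) & (g > conf_thresh)).any())
```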
In an alternative embodiment, according to the method shown in fig. 2, an embodiment of the present invention further provides a method for detecting an obstruction. As shown in fig. 3, fig. 3 is a second schematic flow chart of the obstruction detection method according to the embodiment of the invention. The method comprises the following steps.
Step S301, obtaining a depth image and a gray image collected by N TOF cameras at the current moment, wherein pixel points in the depth image and the gray image correspond to each other one by one, and the pixel value of each pixel point in the gray image represents the confidence of the depth value of the corresponding pixel point in the depth image.
Step S302, aiming at the depth image acquired by each TOF camera, if a pixel point with a depth value within a depth range to be detected exists in the depth image acquired by the TOF camera and the confidence coefficient of the depth value of the pixel point is greater than a preset confidence coefficient threshold value, determining that a blocking object exists in front of a monitoring camera; wherein the depth range to be detected is determined based on the stitching field angle.
The above steps S301 to S302 are the same as the above steps S201 to S202.
Step S303, if no pixel point with the depth value within the depth range to be detected exists in the depth image acquired by each TOF camera, determining that no shielding object exists in front of the monitoring camera.
In this step, when there is no pixel point whose depth value is within the depth range to be detected in the depth image, the main control unit of the monitoring camera may determine that there is no blocking object in the depth range to be detected in front of the monitoring camera at the current moment.
In an optional embodiment, when the N TOF cameras are multiple TOF cameras, the main control unit acquires multiple sets of acquired images. At this time, if there is no pixel point with a depth value within the depth range to be detected in each depth image, the main control unit may determine that there is no blocking object in front of the monitoring camera.
In the embodiment of the invention, when the pixel point with the depth value within the depth range to be detected does not exist in the depth image, the fact that the shielding object does not exist in front of the monitoring camera is directly determined, so that the time for detecting the shielding object can be effectively shortened, the efficiency for detecting the shielding object is improved, and the complexity of the calculation time and the space in the process of detecting the shielding object is reduced.
In an alternative embodiment, according to the method shown in fig. 2, an embodiment of the invention further provides a method for detecting an obstruction. As shown in fig. 4, fig. 4 is a third schematic flow chart of the obstruction detection method according to the embodiment of the invention. The method comprises the following steps.
Step S401, synchronizing the image acquisition time of the monitoring camera and the image acquisition time of the N TOF cameras.
In an optional embodiment, synchronization of the acquisition time may be implemented by the main control unit sending a time synchronization signal to each TOF camera and to the monitoring camera through a GPIO.
In the embodiment of the present invention, since the data collected by the monitoring camera may be video data, when the image collection time is synchronized, an image collection period of the monitoring camera may be synchronized with an image collection period of each TOF camera. And will not be described in detail herein.
In the embodiment of the invention, by synchronizing the image acquisition time of the monitoring camera and the image acquisition time of each TOF camera, the monitoring camera and each TOF camera can acquire images at the same time, so that the identification of the shielding object in the monitoring image acquired by the monitoring camera at the later stage is facilitated.
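The one-to-one frame correspondence that this synchronization enables could, on the software side, be sketched as timestamp matching. This is purely illustrative: the patent itself synchronizes acquisition via a GPIO signal, and the `(timestamp_ms, payload)` tuple format here is a made-up convention.

```python
def pair_by_timestamp(monitor_frames, tof_frames, tol_ms=5):
    """Pair each monitoring frame with the TOF capture closest in time,
    keeping only pairs whose timestamps agree within a tolerance."""
    pairs = []
    for t_m, frame in monitor_frames:
        # nearest TOF capture by timestamp difference
        t_t, capture = min(tof_frames, key=lambda tf: abs(tf[0] - t_m))
        if abs(t_t - t_m) <= tol_ms:
            pairs.append((frame, capture))
    return pairs
```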
Step S402, obtaining a depth image and a gray image collected by N TOF cameras at the current moment, wherein pixel points in the depth image and the gray image correspond to each other one by one, and the pixel value of each pixel point in the gray image represents the confidence of the depth value of the corresponding pixel point in the depth image.
Step S403, for the depth image acquired by each TOF camera, if a pixel point with a depth value within a to-be-detected depth range exists in the depth image acquired by the TOF camera and the confidence of the depth value of the pixel point is greater than a preset confidence threshold, determining that a blocking object exists in front of the monitoring camera; wherein the depth range to be detected is determined based on the stitching field angle.
The above-described steps S402 to S403 are the same as the above-described steps S201 to S202.
In an alternative embodiment, according to the method shown in fig. 2, an embodiment of the present invention further provides a method for detecting an obstruction. As shown in fig. 5, fig. 5 is a fourth schematic flow chart of the obstruction detection method according to the embodiment of the invention. The method comprises the following steps.
Step S501, obtaining a depth image and a gray image collected by N TOF cameras at the current moment, wherein pixel points in the depth image and the gray image correspond to each other one by one, and the pixel value of each pixel point in the gray image represents the confidence of the depth value of the corresponding pixel point in the depth image.
Step S502, aiming at the depth image acquired by each TOF camera, if a pixel point with a depth value within a depth range to be detected exists in the depth image acquired by the TOF camera and the confidence coefficient of the depth value of the pixel point is greater than a preset confidence coefficient threshold value, determining that a shielding object exists in front of a monitoring camera; wherein the depth range to be detected is determined based on the stitching field angle.
The above steps S501 to S502 are the same as the above steps S201 to S202.
Step S503, acquiring a monitoring image acquired by the monitoring camera at the current moment.
When each TOF camera acquires the depth image and the gray image at the current moment, the monitoring camera in the monitoring camera can acquire the monitoring image at the current moment. The main control unit in the monitoring camera acquires the monitoring image from the monitoring camera. The acquisition of the monitoring image may refer to the above-mentioned acquisition modes of the depth image and the grayscale image, and will not be specifically described here.
The step S503 is executed simultaneously with the step S501.
And step S504, when the fact that the shielding object exists in front of the monitoring camera is determined, the monitoring images collected at the same time are discarded.
In this step, when it is determined that a blocking object exists in front of the monitoring camera, that is, when the blocking object is detected in the depth image, the blocking object inevitably exists in the monitoring image acquired at the same time as the depth image, and at this time, the main control unit in the monitoring camera can discard the monitoring image.
In the embodiment of the present invention, in addition to discarding the monitoring image, the main control unit may also flag the monitoring image with an error. For example, suppose the main control unit identifies and counts persons included in the monitoring images. If the main control unit detects that the monitoring camera is blocked at a certain moment, it can mark the monitoring image acquired at that moment with an error flag, so that person identification and person counting are not performed on that image.
In an optional embodiment, the discarding the monitoring images acquired at the same time in step S504 may specifically be represented as:
and if the depth values in the depth images acquired by the N TOF cameras are in the depth range to be detected and the number of pixel points of which the confidence degrees corresponding to the depth values are greater than the preset confidence degree threshold value is greater than the preset number threshold value, discarding the monitoring images acquired at the same moment.
In the embodiment of the present invention, when an obstruction is detected in the depth image, the main control unit of the monitoring camera can judge the size of the obstruction from the number of pixel points it occupies in the depth image, that is, the number of pixel points whose depth values are within the depth range to be detected and whose confidences are greater than the preset confidence threshold. Therefore, when that number is greater than the preset number threshold, the main control unit can determine that the obstruction is relatively large; in that case, the area it occupies in the monitoring image is also relatively large, and the main control unit can discard the monitoring image acquired at the same moment, thereby improving the validity of the retained monitoring images.
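The pixel-count criterion above can be sketched as follows (all threshold values are invented for illustration):

```python
import numpy as np

def should_discard_frame(depth, gray, depth_range=(2, 100),
                         conf_thresh=50, count_thresh=30):
    """Discard the synchronized monitoring image only when enough pixels
    qualify as occluded, i.e. the obstruction is large enough to matter."""
    lo, hi = depth_range
    qualifying = (depth >= lo) & (depth <= hi) & (gray > conf_thresh)
    return int(qualifying.sum()) > count_thresh
```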
In an alternative embodiment, when it is determined that a blocking object exists in front of the monitoring camera, the main control unit may identify the blocking object in the monitoring image acquired at the same moment, that is, determine what the blocking object actually is, such as the above-mentioned bees, leaves, and the like.
In another optional embodiment, the main control unit may further determine whether the monitoring camera is shielded for a long time according to the plurality of depth images and the grayscale images acquired within the preset time period, and if a shielding object exists in each depth image within the preset time period and the shielding objects are the same, determine that the monitoring camera is shielded for a long time, at this time, the main control unit may control the monitoring camera to temporarily stop working based on the GPIO, and resume the working of the monitoring camera until the shielding object is not detected.
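The long-term occlusion check over a preset time period can be sketched with a sliding window of per-frame results. The class and its window length are our illustration, not the patent's interface, and it simplifies the check to "occluded in every recent frame" without comparing whether the obstruction is the same object:

```python
from collections import deque

class LongTermOcclusionMonitor:
    """Track per-frame occlusion results over a sliding window; report
    long-term occlusion once every frame in a full window was occluded."""

    def __init__(self, window=10):
        self.history = deque(maxlen=window)  # oldest results fall off

    def update(self, occluded):
        self.history.append(bool(occluded))
        full = len(self.history) == self.history.maxlen
        return full and all(self.history)
```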
In an alternative embodiment, according to the method shown in fig. 2, an embodiment of the present invention further provides a method for detecting an obstruction. As shown in fig. 6, fig. 6 is a fifth flowchart of the obstruction detection method according to the embodiment of the invention. The method comprises the following steps.
Step S601, obtaining a depth image and a gray image collected by N TOF cameras at the current moment, wherein pixel points in the depth image and the gray image correspond to each other one by one, and the pixel value of each pixel point in the gray image represents the confidence of the depth value of the corresponding pixel point in the depth image.
Step S602, aiming at the depth image acquired by each TOF camera, if a pixel point with a depth value within a depth range to be detected exists in the depth image acquired by the TOF camera and the confidence coefficient of the depth value of the pixel point is greater than a preset confidence coefficient threshold value, determining that a shielding object exists in front of a monitoring camera; wherein the depth range to be detected is determined based on the stitching field angle.
The above steps S601 to S602 are the same as the above steps S201 to S202.
Step S603, when it is determined that an obstruction exists in front of the monitoring camera, generating a prompt message for the obstruction.
The main control unit of the monitoring camera can send the prompt message to monitoring personnel, for example by sending it to a back-end server, which displays the prompt message on a display interface, so that the monitoring personnel can determine that an obstruction exists in front of the monitoring camera.
For ease of understanding, the above-described blocking object detection method will be described below with reference to FIGS. 7-a and 7-b. Fig. 7-a is a schematic view of a first structure of a monitoring camera according to an embodiment of the present invention. Fig. 7-b is a signaling diagram of an obstruction detection process based on the surveillance camera shown in fig. 7-a.
In fig. 7-a, the monitoring camera and dTOF camera 1 to dTOF camera n are each in communication connection with the main control unit through GPIO, and the monitoring camera and each dTOF camera also communicate through GPIO. The stitching field angle of dTOF camera 1 to dTOF camera n covers the field angle of the monitoring camera.
In step S701, the main control unit sends time synchronization signals to the monitoring camera and the TOF camera, respectively.
The TOF cameras are all TOF cameras included in the monitoring camera, namely a dTOF camera 1-dTOF camera n shown in figure 7-a.
The master control unit respectively sends time synchronization signals to the monitoring camera and the dTOF camera 1-dTOF camera n through the GPIO, and synchronization of image acquisition time between the monitoring camera and the dTOF camera 1-dTOF camera n is achieved.
Step S702, the monitoring camera acquires a monitoring image at the current moment.
Step S703, the main control unit obtains the monitoring image collected by the monitoring camera.
The main control unit can acquire the monitoring image from the monitoring camera through the GPIO and store the acquired monitoring image.
Step 704, the TOF camera acquires a depth image and a grayscale image at the current time.
Pixel points in the depth image and the grayscale image collected by each TOF camera correspond one to one, and the pixel value of each pixel point in the grayscale image represents the confidence of the depth value of the corresponding pixel point in the depth image.
Step 705, the main control unit acquires a depth image and a gray image acquired by the TOF camera.
The main control unit can obtain a depth image and a grayscale image from each dTOF camera, namely dTOF camera 1-dTOF camera n through GPIO.
The above step S702 is executed simultaneously with step S704, and step S703 may be executed simultaneously with step S705. After performing the above step S705, the main control unit may perform step S706 or step S707.
Step S706, when there is no pixel point whose depth value is within the depth range to be detected in the obtained depth image, the main control unit determines that there is no blocking object in front of the monitoring camera.
Step S707, when a pixel point exists in the acquired depth image, where the depth value is within the to-be-detected depth range and the confidence of the depth value is greater than a preset confidence threshold, the main control unit determines that an occlusion exists in front of the monitoring camera.
After performing the above step S707, the main control unit may perform step S708.
Step S708, when the main control unit determines that there is a blocking object in front of the monitoring camera, the monitoring image acquired at the same time is discarded.
In addition, when the main control unit determines that an obstruction exists in front of the monitoring camera, it can also generate the above-mentioned prompt message, or turn off the monitoring camera and turn it back on once no obstruction is detected in front of it.
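The overall flow of steps S701-S708 can be sketched as a single detection cycle. The camera objects and the `detector` callback are hypothetical stand-ins for the hardware interfaces, not the patent's API:

```python
def detection_cycle(monitor_cam, tof_cams, detector):
    """One pass of the S701-S708 flow. Returns the monitoring frame,
    or None if it was discarded because an obstruction was detected."""
    frame = monitor_cam.capture()                    # S702/S703: monitoring image
    captures = [cam.capture() for cam in tof_cams]   # S704/S705: (depth, gray) pairs
    for depth, gray in captures:
        if detector(depth, gray):                    # S707: obstruction found
            return None                              # S708: discard the frame
    return frame                                     # S706: no obstruction
```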
Based on the same inventive concept, according to the method for detecting the shielding object provided by the embodiment of the invention, the embodiment of the invention also provides a device for detecting the shielding object. As shown in fig. 8, fig. 8 is a schematic structural diagram of a blocking object detecting device according to an embodiment of the present invention. The device is applied to a monitoring camera, the monitoring camera can comprise a monitoring camera and N TOF cameras, the splicing field angles of the N TOF cameras cover the field angle of the monitoring camera, the splicing field angle is obtained by splicing the field angles of the N TOF cameras, and the device comprises the following modules.
The first obtaining module 801 is configured to obtain a depth image and a grayscale image acquired by N TOF cameras at a current time, where pixel points in the depth image and the grayscale image correspond to each other one to one, and a pixel value of each pixel point in the grayscale image represents a confidence of a depth value of a corresponding pixel point in the depth image;
a first determining module 802, configured to determine, for a depth image acquired by each TOF camera, that a blocking object exists in front of a monitoring camera if a pixel point with a depth value within a to-be-detected depth range exists in the depth image acquired by the TOF camera and a confidence of the depth value of the pixel point is greater than a preset confidence threshold; wherein the depth range to be detected is determined based on the stitching field angle.
Optionally, the blocking object detecting device may further include:
and the second determining module is used for determining that no shielding object exists in front of the monitoring camera if pixel points with depth values within the depth range to be detected do not exist in the depth image acquired by each TOF camera.
Optionally, the blocking object detecting device may further include:
and the synchronization module is used for synchronizing the image acquisition time of the monitoring camera and the image acquisition time of the N TOF cameras before acquiring the depth image and the gray image acquired by the N TOF cameras at the current moment.
Optionally, the blocking object detecting device may further include:
the second acquisition module is used for acquiring a monitoring image acquired by the monitoring camera at the current moment;
and the discarding module is used for discarding the monitoring image acquired at the same moment when the shielding object exists in front of the monitoring camera.
Optionally, the discarding module may be specifically configured to discard the monitoring image acquired at the same time if the depth value in the depth image acquired by the N TOF cameras is within the depth range to be detected, and the number of the pixel points whose confidence degrees corresponding to the depth values are greater than the preset confidence degree threshold is greater than the preset number threshold.
Optionally, in the depth image/grayscale image, the size of the confidence corresponding to the same depth value is directly proportional to the size of the object in the region corresponding to the depth value in the acquired scene.
Optionally, the obstruction detection device may further include:
a generating module, configured to generate a prompt message for the obstruction when it is determined that an obstruction exists in front of the monitoring camera.
Optionally, the N TOF cameras include one or more of a single-point dTOF camera, a linear-array dTOF camera, and an area-array dTOF camera.
With the device provided by the embodiment of the present invention, depth images and grayscale images can be acquired by the N TOF cameras included in the monitoring camera, and an obstruction is determined to exist in front of the monitoring camera when a depth image contains a pixel whose depth value falls within the to-be-detected depth range and whose confidence exceeds the preset confidence threshold. Because the stitched field angle obtained by stitching the field angles of the N TOF cameras covers the field angle of the monitoring camera, whether an obstruction exists within the to-be-detected depth range determined based on the stitched field angle can be detected accurately and directly from the depth value and the confidence of each pixel in the depth images; that is, whether an obstruction exists in front of the monitoring camera is detected accurately. This avoids the process of building an RGB background model used in the related art, which effectively shortens the time required for obstruction detection, saves memory in the monitoring camera, and reduces the computational time and space complexity of the detection process. Moreover, because TOF cameras have a large measurement dynamic range and high measurement precision, the accuracy of the obstruction detection result is effectively improved.
Based on the same inventive concept as the obstruction detection method provided by the embodiment of the present invention, an embodiment of the present invention further provides a monitoring camera. As shown in fig. 9, fig. 9 is a second structural schematic diagram of the monitoring camera according to an embodiment of the present invention. The monitoring camera may include a monitoring camera module 901, a main control unit 902, and N TOF cameras 903; the stitched field angle of the N TOF cameras 903 covers the field angle of the monitoring camera module 901, the stitched field angle being obtained by stitching the field angles of the N TOF cameras 903;
the monitoring camera module 901 is configured to capture a monitoring image at the current time;
the N TOF cameras 903 are configured to acquire depth images and grayscale images at the current time, where the pixels of each depth image correspond one-to-one to the pixels of the matching grayscale image, and the pixel value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image;
the main control unit 902 is configured to obtain the monitoring image, the depth images, and the grayscale images, and, for the depth image acquired by each TOF camera, determine that an obstruction exists in front of the monitoring camera if the depth image contains a pixel whose depth value falls within a to-be-detected depth range and whose confidence is greater than a preset confidence threshold; wherein the to-be-detected depth range is determined based on the stitched field angle.
The monitoring camera module 901 may be an RGB camera, an IR camera, or a thermal imaging camera.
The main control unit 902 may synchronize the image acquisition times of the monitoring camera module 901 and the N TOF cameras 903 through a general-purpose input/output (GPIO) port.
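The patent synchronizes the cameras with a hardware GPIO trigger. Purely as an illustration of the idea, a software analogue can release one capture thread per camera at the same instant using a barrier; the `cameras` handles and the `capture_fn` interface below are assumptions for the example, not part of the patented design:

```python
import threading

def synchronized_capture(cameras, capture_fn):
    """Release one capture thread per camera simultaneously,
    mimicking a shared trigger pulse. `cameras` is a list of
    camera handles and `capture_fn(camera)` grabs one frame;
    both are assumed interfaces for this sketch."""
    barrier = threading.Barrier(len(cameras))
    results = [None] * len(cameras)

    def worker(i, cam):
        barrier.wait()            # all threads start capture together
        results[i] = capture_fn(cam)

    threads = [threading.Thread(target=worker, args=(i, c))
               for i, c in enumerate(cameras)]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return results
```

A hardware trigger bounds the skew between exposures far more tightly than thread scheduling can, which is why the patent drives synchronization from a GPIO line rather than from software.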
With the monitoring camera provided by the embodiment of the present invention, depth images and grayscale images can be acquired by the N TOF cameras included in the monitoring camera, and an obstruction is determined to exist in front of the monitoring camera when a depth image contains a pixel whose depth value falls within the to-be-detected depth range and whose confidence exceeds the preset confidence threshold. Because the stitched field angle obtained by stitching the field angles of the N TOF cameras covers the field angle of the monitoring camera, whether an obstruction exists within the to-be-detected depth range determined based on the stitched field angle can be detected accurately and directly from the depth value and the confidence of each pixel in the depth images; that is, whether an obstruction exists in front of the monitoring camera is detected accurately. This avoids the process of building an RGB background model used in the related art, which effectively shortens the time required for obstruction detection, saves memory in the monitoring camera, and reduces the computational time and space complexity of the detection process. Moreover, because TOF cameras have a large measurement dynamic range and high measurement precision, the accuracy of the obstruction detection result is effectively improved.
The memory may include random access memory (RAM) or non-volatile memory (NVM), for example at least one disk storage device. Optionally, the memory may also be at least one storage device located remotely from the processor.
The processor may be a general-purpose processor, such as a central processing unit (CPU), an ARM (Advanced RISC Machine) processor, a field-programmable gate array (FPGA), an application-specific integrated circuit (ASIC), a system on chip (SoC), or the like.
It is noted that, herein, relational terms such as "first" and "second" are used solely to distinguish one entity or action from another, and do not necessarily require or imply any actual relationship or order between such entities or actions. Moreover, the terms "comprises," "comprising," and any variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, an element defined by the phrase "comprising a/an …" does not exclude the presence of other identical elements in the process, method, article, or apparatus that comprises the element.
All the embodiments in this specification are described in a related manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. In particular, the device and monitoring camera embodiments are described relatively briefly because they are substantially similar to the method embodiments; for relevant details, reference may be made to the description of the method embodiments.
The above description is only for the preferred embodiment of the present invention, and is not intended to limit the scope of the present invention. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention shall fall within the protection scope of the present invention.

Claims (12)

1. An obstruction detection method, applied to a monitoring camera, wherein the monitoring camera comprises a monitoring camera module and N time-of-flight (TOF) cameras, a stitched field angle of the N TOF cameras covers a field angle of the monitoring camera module, and the stitched field angle is obtained by stitching the field angles of the N TOF cameras, the method comprising:
acquiring depth images and grayscale images captured by the N TOF cameras at the current time, wherein the pixels of each depth image correspond one-to-one to the pixels of the matching grayscale image, and the pixel value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image;
for the depth image acquired by each TOF camera, determining that an obstruction exists in front of the monitoring camera if the depth image acquired by the TOF camera contains a pixel whose depth value falls within a to-be-detected depth range and whose confidence is greater than a preset confidence threshold; wherein the to-be-detected depth range is determined based on the stitched field angle.
2. The method of claim 1, further comprising:
determining that no obstruction exists in front of the monitoring camera if no pixel whose depth value falls within the to-be-detected depth range exists in the depth image acquired by any of the TOF cameras.
3. The method according to claim 1, further comprising, before acquiring the depth images and grayscale images captured by the N TOF cameras at the current time:
synchronizing the image acquisition times of the monitoring camera module and the N TOF cameras.
4. The method of claim 1, further comprising:
acquiring a monitoring image captured by the monitoring camera module at the current time;
and discarding the monitoring image acquired at the same time when it is determined that an obstruction exists in front of the monitoring camera.
5. The method of claim 4, wherein the step of discarding the monitoring image acquired at the same time comprises:
discarding the monitoring image acquired at the same time if, in the depth images acquired by the N TOF cameras, the number of pixels whose depth values fall within the to-be-detected depth range and whose corresponding confidences exceed the preset confidence threshold is greater than a preset number threshold.
6. The method of claim 1, wherein, in the depth image/grayscale image, the confidence corresponding to a given depth value is proportional to the size of the object in the region of the captured scene that corresponds to that depth value.
7. The method of claim 1, further comprising:
generating a prompt message for the obstruction when it is determined that an obstruction exists in front of the monitoring camera.
8. The method of claim 1, wherein the N TOF cameras comprise one or more of a single-point direct time-of-flight (dTOF) camera, a linear-array dTOF camera, and an area-array dTOF camera.
9. An obstruction detection device, applied to a monitoring camera, wherein the monitoring camera comprises a monitoring camera module and N time-of-flight (TOF) cameras, a stitched field angle of the N TOF cameras covers a field angle of the monitoring camera module, and the stitched field angle is obtained by stitching the field angles of the N TOF cameras, the device comprising:
a first acquisition module, configured to acquire depth images and grayscale images captured by the N TOF cameras at the current time, wherein the pixels of each depth image correspond one-to-one to the pixels of the matching grayscale image, and the pixel value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image;
a first determining module, configured to determine, for the depth image acquired by each TOF camera, that an obstruction exists in front of the monitoring camera if the depth image contains a pixel whose depth value falls within a to-be-detected depth range and whose confidence is greater than a preset confidence threshold; wherein the to-be-detected depth range is determined based on the stitched field angle.
10. A monitoring camera, comprising a monitoring camera module, a main control unit, and N time-of-flight (TOF) cameras; wherein a stitched field angle of the N TOF cameras covers a field angle of the monitoring camera module, and the stitched field angle is obtained by stitching the field angles of the N TOF cameras;
the monitoring camera module is configured to capture a monitoring image at the current time;
the N TOF cameras are configured to acquire depth images and grayscale images at the current time, wherein the pixels of each depth image correspond one-to-one to the pixels of the matching grayscale image, and the pixel value of each pixel in the grayscale image represents the confidence of the depth value of the corresponding pixel in the depth image;
the main control unit is configured to obtain the monitoring image, the depth images, and the grayscale images, and, for the depth image acquired by each TOF camera, determine that an obstruction exists in front of the monitoring camera if the depth image contains a pixel whose depth value falls within a to-be-detected depth range and whose confidence is greater than a preset confidence threshold; wherein the to-be-detected depth range is determined based on the stitched field angle.
11. The monitoring camera according to claim 10, wherein the monitoring camera module is a red-green-blue (RGB) camera, an infrared (IR) camera, or a thermal imaging camera.
12. The monitoring camera according to claim 10, wherein the main control unit synchronizes the image acquisition times of the monitoring camera module and the N TOF cameras through a general-purpose input/output (GPIO) port.
CN202011460219.1A 2020-12-11 2020-12-11 Blocking object detection method and device and monitoring camera Pending CN112561874A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011460219.1A CN112561874A (en) 2020-12-11 2020-12-11 Blocking object detection method and device and monitoring camera


Publications (1)

Publication Number Publication Date
CN112561874A true CN112561874A (en) 2021-03-26

Family

ID=75062461

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011460219.1A Pending CN112561874A (en) 2020-12-11 2020-12-11 Blocking object detection method and device and monitoring camera

Country Status (1)

Country Link
CN (1) CN112561874A (en)


Citations (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101321302A (en) * 2008-07-08 2008-12-10 浙江大学 Three-dimensional real-time acquisition system based on camera array
US20110141306A1 (en) * 2009-12-10 2011-06-16 Honda Motor Co., Ltd. Image capturing device, method of searching for occlusion region, and program
US20110242286A1 (en) * 2010-03-31 2011-10-06 Vincent Pace Stereoscopic Camera With Automatic Obstruction Removal
US20120087573A1 (en) * 2010-10-11 2012-04-12 Vinay Sharma Eliminating Clutter in Video Using Depth Information
WO2013035612A1 (en) * 2011-09-09 2013-03-14 日本電気株式会社 Obstacle sensing device, obstacle sensing method, and obstacle sensing program
US20150227784A1 (en) * 2014-02-07 2015-08-13 Tata Consultancy Services Limited Object detection system and method
US20160295193A1 (en) * 2013-12-24 2016-10-06 Softkinetic Sensors Nv Time-of-flight camera system
CN107509059A (en) * 2017-09-21 2017-12-22 江苏跃鑫科技有限公司 Camera lens occlusion detection method
WO2018086050A1 (en) * 2016-11-11 2018-05-17 深圳市大疆创新科技有限公司 Depth map generation method and unmanned aerial vehicle based on this method
CN109639896A (en) * 2018-12-19 2019-04-16 Oppo广东移动通信有限公司 Block object detecting method, device, storage medium and mobile terminal
CN109635723A (en) * 2018-12-11 2019-04-16 讯飞智元信息科技有限公司 A kind of occlusion detection method and device
CN109767467A (en) * 2019-01-22 2019-05-17 Oppo广东移动通信有限公司 Image processing method, device, electronic equipment and computer readable storage medium
CN110852312A (en) * 2020-01-14 2020-02-28 深圳飞科机器人有限公司 Cliff detection method, mobile robot control method, and mobile robot
CN111766606A (en) * 2020-06-19 2020-10-13 Oppo广东移动通信有限公司 Image processing method, device and equipment of TOF depth image and storage medium


Non-Patent Citations (7)

* Cited by examiner, † Cited by third party
Title
"Pedestrian Detection Based on RGB-D Multi-Channel Features", Journal of Fuzhou University (Natural Science Edition), vol. 43, no. 6, pages 746-752 *
Antoine Vanderschueren et al.: "How semantic and geometric information mutually reinforce each other in ToF object localization", arXiv:2008.12002v1 [cs.CV], pages 1-8 *
Fabio Remondino: "Time-of-Flight Range-Imaging Cameras" (Chinese edition), National Defense Industry Press, pages 207-208 *
George Xu et al.: "Sensitivity study for object reconstruction using a network of time-of-flight depth sensors", 2015 IEEE International Conference on Robotics and Automation (ICRA), pages 3335-3340 *
Qiao Xin et al.: "Research on Effective Depth Data Extraction and Correction Algorithms for ToF Cameras", Chinese Journal of Intelligent Science and Technology, vol. 2, no. 1, pages 72-79 *
Sun Zhe et al.: "Confidence-Based Depth Data Fusion of TOF and Binocular Systems", Journal of Beijing University of Aeronautics and Astronautics, vol. 44, no. 8, pages 1764-1771 *
Li Hongbo et al.: "Virtual-Real Occlusion Handling Method Based on Dynamically Changing Background Frames", Computer Engineering and Design, vol. 36, no. 1, pages 227-231 *

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114639040A (en) * 2022-03-14 2022-06-17 哈尔滨博敏科技开发有限公司 Monitoring video analysis system and method based on Internet of things
CN114639040B (en) * 2022-03-14 2023-01-17 广东正艺技术有限公司 Monitoring video analysis system and method based on Internet of things
CN115019157A (en) * 2022-07-06 2022-09-06 武汉市聚芯微电子有限责任公司 Target detection method, device, equipment and computer readable storage medium
CN115019157B (en) * 2022-07-06 2024-03-22 武汉市聚芯微电子有限责任公司 Object detection method, device, equipment and computer readable storage medium

Similar Documents

Publication Publication Date Title
US11694404B2 (en) Estimating a condition of a physical structure
CN112561874A (en) Blocking object detection method and device and monitoring camera
JP4169282B2 (en) Photogrammetry system and photogrammetry method
WO2017054700A1 (en) Fire disaster monitoring method and apparatus
CN105844240A (en) Method and device for detecting human faces in infrared temperature measurement system
KR102230552B1 (en) Device For Computing Position of Detected Object Using Motion Detect and Radar Sensor
CN103471512A (en) Glass plate width detection system based on machine vision
CN110244314A (en) One kind " low slow small " target acquisition identifying system and method
CN108814452A (en) Sweeping robot and its disorder detection method
KR20200018553A (en) Smart phone, vehicle, camera with thermal imaging sensor and display and monitoring method using the same
CN111368615A (en) Violation building early warning method and device and electronic equipment
US20110157360A1 (en) Surveillance system and method
CN112180353A (en) Target object confirmation method and system and storage medium
KR102270858B1 (en) CCTV Camera System for Tracking Object
Carmichael et al. Dataset and Benchmark: Novel Sensors for Autonomous Vehicle Perception
KR100766995B1 (en) 3 dimension camera module device
CN110839131A (en) Synchronization control method, synchronization control device, electronic equipment and computer readable medium
TWI633497B (en) Method for performing cooperative counting with aid of multiple cameras, and associated apparatus
KR20170077623A (en) Object counting system and method
JP6892134B2 (en) Measurement system, measurement method and measurement program
KR20200085418A (en) Targeted automatic identification and traceable artificial intelligence devices and methods
CN115830288A (en) Pinhole camera shooting intelligent terminal detection technology based on ToF imaging
JPS59123371A (en) Angle of view detector of television camera

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination