CN109685062B - Target detection method, device, equipment and storage medium


Info

Publication number
CN109685062B
CN109685062B (granted from application CN201910001609.3A)
Authority
CN
China
Prior art keywords
area, shooting, target, areas, camera
Prior art date
Legal status
Active
Application number
CN201910001609.3A
Other languages
Chinese (zh)
Other versions
CN109685062A (en)
Inventor
郝祁
韩方舟
兰功金
Current Assignee
Southwest University of Science and Technology
Original Assignee
Southwest University of Science and Technology
Priority date
Filing date
Publication date
Application filed by Southwest University of Science and Technology
Priority to CN201910001609.3A
Publication of application CN109685062A
Application granted
Publication of granted patent CN109685062B
Legal status: Active

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/255 Detecting or recognising potential candidate objects based on visual cues, e.g. shapes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00 Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07 Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target detection method, device, equipment and storage medium. The method comprises the following steps: determining, from the pictures of two or more shooting areas, whether each corresponding shooting area is a target area or a non-target area, a target area being an area where the object to be detected may appear; if a shooting area is a target area, performing target detection alternately with the camera corresponding to that shooting area and a full-area camera, the resolution of the camera corresponding to the shooting area being higher than that of the full-area camera; and if a shooting area is a non-target area, performing target detection with the full-area camera. Through this technical scheme, the object to be detected is flexibly detected and tracked using a multi-camera array.

Description

Target detection method, device, equipment and storage medium
Technical Field
The embodiment of the invention relates to the technical field of computer vision, in particular to a target detection method, a target detection device, target detection equipment and a storage medium.
Background
Multi-target visual detection and tracking is one of the research hotspots in the current computer vision field and has been widely applied in automatic driving, pedestrian detection, face recognition, security monitoring and other fields. Various camera devices are currently used for multi-target visual detection and tracking. Among them, a multi-camera array has many advantages: because multiple cameras participate in image acquisition, a larger amount of information can be provided to improve the accuracy of target detection and tracking. However, it also brings problems such as complex operation, slow decision-making and low efficiency.
In a multi-camera array, stitching the images photographed by the individual cameras is complex, and when a target disappears from the picture of a certain camera unit, it is necessary to accurately predict in which camera unit's picture the target will appear and to switch camera units accordingly. When a camera with lower resolution is used for target recognition and detection, the requirements on picture resolution and target detection precision may not be met; when a camera with higher resolution is used for monitoring, a large number of shot pictures need to be processed and the computational cost is higher. In existing target detection methods it is difficult for multiple camera units to cooperate efficiently, and objects to be detected cannot be detected and tracked flexibly.
Disclosure of Invention
The invention provides a target detection method, a device, equipment and a storage medium, which are used for flexibly detecting and tracking an object to be detected by utilizing a multi-camera array.
In a first aspect, an embodiment of the present invention provides a target detection method, including:
determining the corresponding shooting areas as target areas or non-target areas according to pictures of two or more shooting areas, wherein the target areas are areas where an object to be detected possibly appears;
If the shooting area is a target area, alternately detecting targets by using a camera corresponding to the shooting area and a full-area camera, wherein the resolution of the camera corresponding to the shooting area is higher than that of the full-area camera;
and if the shooting area is a non-target area, performing target detection through the full-area camera.
Further, the determining that the corresponding shooting area is the target area or the non-target area according to the pictures of the two or more shooting areas includes:
taking a central shooting area as the current area, inputting the picture of the central shooting area, as the picture of the current area, into a target recognition algorithm, and determining whether the current area is a target area or a non-target area, wherein the central shooting area is the shooting area located at the central position among the two or more shooting areas;
if the current area is a target area, inputting the picture of the shooting area adjacent to the current area, as the picture of a new current area, into the target recognition algorithm;
and if the current area is a non-target area, inputting the picture of the shooting area on the diagonal of the current area, as the picture of a new current area, into the target recognition algorithm, until all shooting areas are determined to be target areas or non-target areas.
Further, the alternately performing object detection by the camera corresponding to the shooting area and the camera of the whole area includes:
fusing the difference between the detection confidences of the camera corresponding to the shooting area and of the full-area camera with the Bhattacharyya coefficient to obtain a fusion value;
if the fusion value is smaller than or equal to a preset threshold value, performing target detection on the shooting area through the full-area camera;
and if the fusion value is greater than a preset threshold value, performing target detection on the shooting area through a camera corresponding to the shooting area.
Further, the alternately performing object detection by the camera corresponding to the shooting area and the camera of the whole area further includes:
acquiring shooting resolution of the full-area camera on the object to be detected;
when the shooting resolution is greater than or equal to a preset resolution, performing target detection on the shooting area through the full-area camera;
and when the shooting resolution is smaller than the preset resolution, detecting the target of the shooting area through a camera corresponding to the shooting area.
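The resolution-based alternation above reduces to a simple comparison. The sketch below is illustrative only; the function name, the use of the object's pixel size as the measure of "shooting resolution", and the threshold are assumptions, not taken from the patent text.

```python
def pick_camera(object_pixel_size, preset_resolution, region_camera, full_area_camera):
    """Resolution-based alternation: if the full-area (wide-angle) camera already
    images the object at or above the preset resolution, keep using it; otherwise
    switch to the high-resolution tele camera of that shooting area."""
    if object_pixel_size >= preset_resolution:
        return full_area_camera   # the low-cost global view is sharp enough
    return region_camera          # need the tele camera's extra detail
```

Here `object_pixel_size` stands in for the shooting resolution of the full-area camera on the object to be detected, e.g. the height of the object's bounding box in pixels.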
Further, after the target detection by the whole-area camera if the shooting area is a non-target area, the method further includes:
Monitoring a shooting picture of the full-area camera by using a background difference algorithm;
if the object to be detected is detected in the shooting picture, determining the corresponding shooting area as a target area or a non-target area according to the pictures of the two or more shooting areas.
Further, the method further comprises:
when an object to be detected enters a viewing-angle overlap area of the shooting areas, the camera corresponding to the shooting area that originally performed target detection on the object provides a probability distribution of the position of the object to be detected;
sorting, according to the probability distribution, the shooting areas that the object to be detected may enter from high probability to low;
and sequentially inputting the possibly entered shooting areas into a target recognition algorithm according to the sorting result until the shooting areas where the object to be detected enters are determined.
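The handover step above can be sketched as a priority search over the candidate areas. The dictionary representation of the probability distribution and the `detect_in_region` callback are hypothetical stand-ins for the tracker output and the target recognition algorithm:

```python
def find_entry_region(candidate_probs, detect_in_region):
    """Given {region: probability} from the camera that originally tracked the
    object, test candidate shooting areas from most to least likely until the
    recognition algorithm confirms which area the object has entered."""
    for region, _p in sorted(candidate_probs.items(), key=lambda kv: -kv[1]):
        if detect_in_region(region):   # run target recognition on this region's frame
            return region
    return None   # object not confirmed in any candidate region
```

Sorting before detecting means the most probable area is checked first, so in the common case only one recognition pass is needed.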
Further, the method further comprises:
and after the interval preset period, determining the corresponding shooting area as a target area or a non-target area according to the pictures of the two or more shooting areas.
In a second aspect, an embodiment of the present invention provides an object detection apparatus, including:
the first target area determining module is used for determining the corresponding shooting areas as target areas or non-target areas according to pictures of two or more shooting areas, wherein the target areas are areas where an object to be detected possibly appears;
The alternate detection module is used for alternately detecting targets through the cameras corresponding to the shooting areas and the cameras of the whole area if the shooting areas are target areas, and the resolution of the cameras corresponding to the shooting areas is higher than that of the cameras of the whole area;
and the full-area shooting module is used for carrying out target detection through the full-area camera if the shooting area is a non-target area.
In a third aspect, an embodiment of the present invention provides an apparatus, including:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target detection method as described in the first aspect.
In a fourth aspect, an embodiment of the present invention further provides a computer readable storage medium, on which a computer program is stored, which when executed by a processor implements the object detection method according to the first aspect.
The embodiment of the invention provides a target detection method, device, equipment and storage medium. The method comprises the following steps: determining, from the pictures of two or more shooting areas, whether each corresponding shooting area is a target area or a non-target area, a target area being an area where the object to be detected may appear; if a shooting area is a target area, performing target detection alternately with the camera corresponding to that shooting area and a full-area camera, the resolution of the camera corresponding to the shooting area being higher than that of the full-area camera; and if a shooting area is a non-target area, performing target detection with the full-area camera. Through this technical scheme, the object to be detected is flexibly detected and tracked using a multi-camera array.
Drawings
FIG. 1 is a flowchart of a target detection method according to a first embodiment of the present invention;
FIG. 2 is a schematic diagram of a multi-camera array according to a first embodiment of the present invention;
fig. 3 is a schematic diagram of a multi-camera array shooting area according to a first embodiment of the present invention;
fig. 4 is a flowchart of a target detection method according to a second embodiment of the present invention;
FIG. 5A is a schematic diagram of a current region according to a second embodiment of the present invention;
fig. 5B is a schematic diagram of a current region adjacent shooting region according to a second embodiment of the present invention;
fig. 5C is a schematic diagram of a shooting area on a diagonal line of a current area according to a second embodiment of the present invention;
fig. 6 is a schematic diagram of implementation of a target detection method according to a second embodiment of the present invention;
fig. 7 is a schematic structural diagram of a target detection device according to a third embodiment of the present invention;
fig. 8 is a schematic hardware structure of a device according to a fourth embodiment of the present invention.
Detailed Description
The invention is described in further detail below with reference to the drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the structures related to the present invention are shown in the drawings.
Example 1
Fig. 1 is a flowchart of a target detection method according to a first embodiment of the present invention, where the present embodiment is applicable to a case of performing target detection on an object to be detected through a multi-camera array, and in particular, the present invention is applicable to various fields of military tracking, civil target tracking/monitoring, automatic driving target recognition, sports broadcasting, etc. that are shot through the multi-camera array and need to detect a target from a shot picture. The object detection method may be performed by an object detection device, which may be implemented in software and/or hardware and integrated in the apparatus. Further, the apparatus includes, but is not limited to: industrial personal computers, desktop computers, and integrated intelligent electronic devices.
Fig. 2 is a schematic diagram of a multi-camera array according to an embodiment of the invention. The multi-camera array in this embodiment refers to a multi-focal-length camera array, that is, a multi-camera array composed of a wide-angle camera and a plurality of tele (long-focal-length) cameras in a fixed arrangement. The wide-angle camera is a camera with a shorter focal length and a larger viewing angle; it covers a wider shooting area, but scenes in its shot picture are relatively small and of lower resolution. A tele camera is a camera with a longer focal length and a smaller viewing angle; it covers a smaller shooting area and shoots scenes in a small range at higher resolution. As shown in fig. 2, the multi-camera array includes a wide-angle camera 100, i.e., the full-area camera, whose position is designated C(0, 0), and tele cameras 101 (only one is labeled in fig. 2) distributed in three rows and six columns on both sides of the wide-angle camera 100, i.e., the cameras corresponding to the shooting areas, whose position coordinates are designated C(1, 1) to C(3, 6).
Fig. 3 is a schematic diagram of a multi-camera array shooting area according to an embodiment of the invention. As shown in fig. 3, the wide-angle camera 100 is used for full-area shooting, the shooting range of which covers the shooting area of all the tele cameras (as shown by a bold black frame in fig. 3), and a plurality of shooting areas are arranged in the shooting range, wherein the shooting areas are distributed in three rows and six columns, each shooting area corresponds to one tele camera, the shooting areas of the tele cameras arranged adjacently are partially overlapped (as shown by the overlapped square blocks in fig. 3), and the area of the overlapped part can be determined in the multi-camera array deployment and calibration process according to actual requirements.
It should be noted that this embodiment describes in detail the determination of target areas, the alternate target detection of target areas, and the target detection of non-target areas, but does not limit the (multi-)target recognition and tracking algorithm; the multi-camera array is calibrated in viewing angle, focal length and the like when deployed.
Referring to fig. 1, the target detection method specifically includes the following steps:
s110, determining the corresponding shooting area as a target area or a non-target area according to the pictures of two or more shooting areas, wherein the target area is an area where an object to be detected possibly appears.
Specifically, each shooting area is first determined to be a target area or a non-target area. The pictures of all areas covered by the multi-camera array are divided into two types: a target area is an area where an object to be detected is likely to appear (such as a road or a square), and a non-target area is an area where the object to be detected is unlikely to appear (such as a forest or the sky). High-precision real-time monitoring is usually required for a target area, while a low-cost algorithm (such as a background difference algorithm) can be used to monitor a non-target area. The object to be detected may be a pedestrian, a vehicle, a traffic sign, etc. The pictures of the shooting areas (such as the areas represented by the overlapping squares in three rows and six columns shown in fig. 3) are input in turn to the target recognition algorithm for image detection and classification, so as to determine whether each is a target area where the object to be detected may appear.
S120, if the shooting area is the target area, executing a step S130; if not, step S140 is performed.
Specifically, step S110 has determined whether all the shooting areas are target areas in sequence, and for different areas, different cameras or algorithms are adopted to perform target detection, and if the shooting areas are target areas, step S130 is executed; if not, step S140 is performed.
And S130, alternately detecting targets by using the cameras corresponding to the shooting areas and the cameras of the whole area, wherein the resolution of the cameras corresponding to the shooting areas is higher than that of the cameras of the whole area.
Specifically, for a target area, when the object to be detected appears it needs to be detected and tracked accurately and rapidly, and the detailed information of the shot picture must be captured. This embodiment exemplarily sets the default for a target area to target detection with its corresponding camera, so as to provide a clearer shot picture; for non-target areas, target detection is performed by the full-area camera by default, so as to reduce the detection cost. As the object to be detected moves, the shooting area to which it will transfer can be predicted from the direction in which it enters or leaves the current target area, so the target areas are predicted and updated and the object can be tracked in time. On this basis, target detection on a target area alternates between the camera corresponding to the shooting area and the full-area camera. For example, when a shooting area that was a non-target area becomes a target area, detection is switched from the full-area camera to the camera corresponding to that target area. Conversely, when high shooting resolution is not needed for a target area, detection is switched to the full-area camera: if the outline of the object to be detected is clear and the background is simple, for instance a vehicle monitored on a mountain road, the target area where the vehicle is located can be handed to the full-area camera. When detailed information about the vehicle is needed, such as checking the license plate, distinguishing the vehicle model or identifying the owner, detection is switched to the camera corresponding to the target area where the vehicle is located, providing higher resolution and detection accuracy.
And S140, performing target detection through the full-area camera.
Specifically, if the shooting area is a non-target area, target detection is performed by the full-area camera so as to provide an integral shot picture over a wider shooting range and avoid stitching and calibrating the images of multiple shooting areas. Meanwhile, an algorithm with lower computational cost (a background difference algorithm) can be used to monitor whether an object to be detected appears in a non-target area; if it does, the target areas are updated, and target detection on the affected area again alternates, according to actual requirements, between the camera corresponding to the shooting area and the full-area camera.
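The patent names a background difference algorithm as the low-cost monitor for non-target areas. A minimal sketch, assuming grayscale frames as NumPy arrays; the threshold and minimum-area values are illustrative, not from the patent:

```python
import numpy as np

def background_difference(frame, background, threshold=25, min_area=50):
    """Minimal background-difference detector: pixels whose absolute difference
    from the reference background exceeds `threshold` count as foreground; the
    area is flagged when enough foreground pixels appear, which would trigger
    re-classification of the shooting area as a possible target area."""
    diff = np.abs(frame.astype(np.int16) - background.astype(np.int16))
    foreground = diff > threshold
    return int(foreground.sum()) >= min_area
```

In practice the reference background would be updated slowly (e.g. a running average) to absorb lighting changes; that refinement is omitted here for brevity.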
There are a plurality of shooting areas, each corresponding to one camera (a tele camera), and the resolution of the camera corresponding to a shooting area is higher than that of the full-area camera. The shooting range of the full-area camera covers the shooting areas of all the corresponding cameras; global information can be obtained without calibrating or stitching images during shooting, a large number of shot pictures need not be image-processed, and the computational load is reduced. The camera corresponding to a shooting area can provide more picture detail when necessary and perform detailed processing on more image pixels, improving the accuracy of detecting and tracking the object to be detected. Therefore, by using different cameras for target detection on target and non-target areas through the multi-camera array, the target detection method of this embodiment balances detection precision and detection cost.
According to the target detection method provided by the embodiment of the invention, the corresponding shooting areas are determined to be target areas or non-target areas according to the pictures of two or more shooting areas; if a shooting area is a target area, target detection alternates between the camera corresponding to the shooting area and the full-area camera; and if the shooting area is a non-target area, target detection is performed by the full-area camera. In this technical scheme, two different cameras alternately perform target detection on a target area according to actual demand, while a full-area camera with simple image processing and low computational load, together with a target detection algorithm, monitors the non-target areas. Detection cost and detection precision are considered comprehensively, achieving flexible detection and tracking of the object to be detected with the multi-camera array.
Example two
Fig. 4 is a flowchart of a target detection method according to a second embodiment of the present invention, where specific optimization is performed based on the foregoing embodiments, and determination and updating of a target area and a non-target area, and alternate target detection of a camera corresponding to a shooting area and a full-area camera are described, and technical details not described in detail in the present embodiment can be found in any of the foregoing embodiments.
Specifically, referring to fig. 4, the method specifically includes the following steps:
s210, taking a central shooting area as a current area, and taking a picture of the central shooting area as a picture of the current area, wherein the central shooting area is a shooting area positioned at a central position in the two or more shooting areas, and inputting a target recognition algorithm.
Specifically, in the initial start-up stage of the multi-camera array, taking every shooting area as a target area and inputting all shot pictures to the target detection algorithm would impose a great computational burden on image processing and target detection, greatly reducing detection efficiency; on the other hand, inputting only the shot picture of the full-area camera may fail to detect the object to be detected clearly because of the limited resolution. Therefore, in the start-up stage, the cameras corresponding to the shooting areas are activated in turn according to a certain rule, and the pictures they monitor are input to the target detection algorithm in a certain order to judge whether each shooting area is a target area. In this way the target areas can be determined rapidly and detection efficiency improved.
Specifically, it is first determined whether the central shooting area, the shooting area located at the central position among the plurality of shooting areas, is a target area. Fig. 5A is a schematic diagram of a current region of a shooting region according to the second embodiment of the present invention. Referring to fig. 5A, this embodiment exemplarily employs 1 full-area camera and 18 corresponding cameras; the shooting range of the full-area camera covers the shooting areas of all corresponding cameras, and the 18 shooting areas are divided into three rows and six columns, each shooting area corresponding to one camera. The 18 shooting areas are geometrically bisected into left and right halves; the shooting area at the centre of each half, i.e., the areas corresponding to C(2, 2) and C(2, 5) (shown hatched in fig. 5A), is taken as the current area, and the picture of the current area is input to the target recognition algorithm to determine whether it is a target area.
S220, executing step S230 if the current area is the target area; if not, step S240 is performed.
S230, taking the picture of the shooting area adjacent to the current area as the picture of the new current area to input a target recognition algorithm.
Specifically, if the current region is the target region, the picture of the shooting region adjacent to the current region is continuously input as a picture of a new current region into the target recognition algorithm to determine whether the adjacent shooting region is the target region.
Fig. 5B is a schematic diagram of a current region adjacent shooting region according to a second embodiment of the present invention. The direction in which the object to be detected enters and leaves the current area (target area) can be obtained according to the target tracking algorithm, so that the pictures of the adjacent shooting areas (shown by hatching in fig. 5B) in this direction are input to the target recognition algorithm. It should be noted that, in this embodiment, the adjacent shooting areas are exemplarily limited to the shooting areas in the horizontal or vertical direction of the current area, and since the current area is the target area, the probability that the objects to be detected appear in the adjacent shooting areas in the horizontal or vertical direction is greater, which is beneficial to quickly positioning the objects to be detected.
S240, taking the picture of the shooting area on the diagonal line of the current area as the picture of the new current area to input a target recognition algorithm.
Specifically, if the current region is a non-target region, the picture of the photographed region on the diagonal of the current region is input to a target recognition algorithm to determine whether the photographed region on the diagonal is a target region.
Fig. 5C is a schematic diagram of a shooting area on a diagonal line of a current area according to a second embodiment of the present invention. If the current region is a non-target region where no object to be detected appears, a screen of a photographing region (shown by hatching in fig. 5C) in the diagonal direction thereof is input to the object recognition algorithm. It should be noted that, the probability that the object to be detected appears in the shooting area adjacent to the current area in the horizontal or vertical direction is relatively small, and at this time, the shooting area on the diagonal line is firstly judged, so that the searching radius can be enlarged, and the possible target area of the object to be detected can be positioned quickly.
S250, judging whether all shooting areas have been determined to be target areas or non-target areas; if so, executing step S260; if not, executing step S220.
Further, after the adjacent shooting area or the shooting area on the diagonal line is used as a new current area and whether the new current area is a target area is judged, the search range is further enlarged, and the new shooting area is continuously selected for judgment by the same method until all the shooting areas are determined to be the target area or the non-target area. If all the shooting areas are determined to be the target areas or the non-target areas, step S260 is executed to perform target detection on the target areas and the non-target areas respectively, and if all the shooting areas are not determined yet, steps S220-S250 are repeatedly executed to determine a new current area.
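Steps S210 to S250 can be sketched as a walk over the camera grid: expand to horizontal/vertical neighbours from a target area, jump to diagonal neighbours from a non-target area. The grid size, coordinate scheme, and `is_target_region` classifier callback are illustrative assumptions, and a fallback sweep is added here so the sketch always terminates with every area labeled:

```python
from collections import deque

def classify_regions(grid_rows, grid_cols, is_target_region, center):
    """Scan the camera grid starting at the center region. A target region
    enqueues its horizontal/vertical neighbours; a non-target region enqueues
    its diagonal neighbours, widening the search radius. Returns a dict
    mapping (row, col) -> True (target area) / False (non-target area)."""
    labels = {}
    queue = deque([center])
    while queue:
        r, c = queue.popleft()
        if (r, c) in labels:
            continue
        labels[(r, c)] = is_target_region(r, c)   # run recognition on this frame
        steps = [(-1, 0), (1, 0), (0, -1), (0, 1)] if labels[(r, c)] \
            else [(-1, -1), (-1, 1), (1, -1), (1, 1)]
        for dr, dc in steps:
            nr, nc = r + dr, c + dc
            if 0 <= nr < grid_rows and 0 <= nc < grid_cols and (nr, nc) not in labels:
                queue.append((nr, nc))
    # fall back to a direct sweep for any region the walk never reached
    for r in range(grid_rows):
        for c in range(grid_cols):
            labels.setdefault((r, c), is_target_region(r, c))
    return labels
```

For the 3-row, 6-column array of this embodiment, the scan would be run twice, once from each half's central area, C(2, 2) and C(2, 5).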
And S260, alternately detecting targets in the target area through the camera corresponding to the shooting area and the full-area camera, and detecting targets in the non-target area through the full-area camera.
Specifically, if all the shooting areas are determined to be target areas or non-target areas, the target areas are alternately detected by the cameras corresponding to the shooting areas and the full-area cameras, and the non-target areas are detected by the full-area cameras.
Further, the alternately performing object detection on the object area by using the camera corresponding to the shooting area and the full-area camera includes:
fusing the difference between the detection confidences of the camera corresponding to the shooting area and of the full-area camera with the Bhattacharyya coefficient to obtain a fusion value;
if the fusion value is smaller than or equal to a preset threshold value, performing target detection on the shooting area through the full-area camera;
and if the fusion value is greater than a preset threshold value, performing target detection on the shooting area through a camera corresponding to the shooting area.
Specifically, target detection for target and non-target areas involves two kinds of switching between cameras: when the picture the full-area camera feeds to the target recognition algorithm is still sufficient to ensure the precision of target detection and tracking, or when a target area becomes a non-target area, shooting of that area is switched to the full-area camera; when the shooting purpose changes and picture details need to be analysed or enlarged, requiring higher shooting resolution, shooting of the area is switched back to the camera corresponding to that shooting area for target detection.
Specifically, before switching to the full-area camera for target detection, it is necessary to evaluate whether the target detection algorithm can provide sufficient detection and tracking accuracy when processing the low-resolution picture captured by the full-area camera. If so, shooting of the target area is handed over to the full-area camera, and the full-area camera's picture of the target area is input into the target recognition algorithm for detection and tracking. For example, when detecting and tracking vehicles on a mountain road, the outline of the object to be detected is clear and the background is simple, so the low-resolution picture captured by the full-area camera is sufficient to detect and track the vehicle positions accurately; the target area where the vehicles are located can therefore be switched to target detection by the full-area camera, without using the camera corresponding to the shooting area to obtain high-resolution pictures and without stitching and calibrating the pictures of all shooting areas. The specific evaluation method combines the difference in target detection confidence between the two kinds of camera pictures with the color difference described by the Bhattacharyya coefficient.
Specifically, let p denote the histogram of the picture captured by the camera corresponding to the shooting area and q denote the histogram of the picture captured by the full-area camera, both assumed to have m levels. The Bhattacharyya coefficient D(p, q), describing the color difference between the two pictures, is

D(p, q) = √(1 − Σ_{i=1}^{m} √(p_i · q_i)).

The difference between the confidence of target detection on the full-area camera's picture and the confidence of target detection on the picture of the camera corresponding to the shooting area is denoted D_confidence. The evaluation method is expressed as follows:
If the condition α·D(p, q) + β·D_confidence ≤ T_handover is satisfied, shooting of the target area is handed over to the full-area camera; otherwise, i.e. if α·D(p, q) + β·D_confidence > T_handover, shooting of the target area is handed over to the camera corresponding to the shooting area. Here the confidence refers to the confidence with which the target recognition algorithm detects the object to be detected in the captured picture, α and β are empirically set weights, α·D(p, q) + β·D_confidence is the fusion value of the confidence difference and the Bhattacharyya coefficient, and T_handover is an empirically set preset threshold; α, β and T_handover can all be set according to actual requirements.
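A minimal sketch of this evaluation follows. The distance form √(1 − Σ√(p_i·q_i)) and the use of an absolute confidence difference are assumptions reconstructed from the surrounding text, and the function names are hypothetical.

```python
from math import sqrt

def bhattacharyya_distance(p, q):
    # Color difference between two m-level histograms; histograms are
    # normalized before comparison. The distance form
    # sqrt(1 - sum(sqrt(p_i * q_i))) is assumed here.
    sp, sq = sum(p), sum(q)
    bc = sum(sqrt((a / sp) * (b / sq)) for a, b in zip(p, q))
    return sqrt(max(0.0, 1.0 - bc))

def choose_camera(p, q, conf_region, conf_full, alpha, beta, t_handover):
    """Hand shooting over to the full-area camera when the fusion value
    alpha*D(p, q) + beta*D_confidence stays within the preset threshold;
    otherwise keep the higher-resolution regional camera."""
    d_confidence = abs(conf_full - conf_region)
    fusion = alpha * bhattacharyya_distance(p, q) + beta * d_confidence
    return "full_area" if fusion <= t_handover else "region"
```

When the two pictures look alike and the detection confidences are close, the fusion value is small and the cheaper full-area camera suffices; a large color or confidence gap keeps the regional camera in charge.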
Further, the alternately performing object detection by the camera corresponding to the shooting area and the camera of the whole area further includes:
acquiring shooting resolution of the full-area camera on the object to be detected;
When the shooting resolution is greater than or equal to a preset resolution, performing target detection on the shooting area through the full-area camera;
and when the shooting resolution is smaller than the preset resolution, detecting the target of the shooting area through a camera corresponding to the shooting area.
Specifically, when a non-target area is to be switched from the full-area camera to the camera corresponding to the shooting area, a shooting task with higher resolution requirements is usually involved; for example, not only must a vehicle's license plate be detected, but face detection must also be performed on pedestrians. After the full-area camera detects the target area where the object to be detected is located, the system switches to the camera corresponding to that target area to perform target detection.
For example, the object to be detected may be photographed by the full-area camera by default to obtain the full-area camera's shooting resolution, or the shooting resolution may be calculated from the full-area camera's performance parameters; it is then compared with a preset resolution, which can be set according to the actual requirements of different tasks. For example, if the full-area camera's shooting resolution for the object to be detected is 960P and the preset resolution for a vehicle tracking task is 960P, the full-area camera meets the requirement and vehicle tracking can be performed on the target area through the full-area camera; for a license plate detection task with a preset resolution of 1080P, however, the system needs to switch to the camera corresponding to the shooting area for target detection. It should be noted that the shooting resolution and the preset resolution may also be expressed in hierarchical form, such as ultra-high definition, high definition, and the like.
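The resolution comparison can be sketched as follows. The "960P"-style parsing and the mapping of hierarchical names to vertical line counts are illustrative assumptions, since the text only says such levels may be used.

```python
# Mapping of hierarchical resolution names to vertical line counts
# (assumed for illustration).
LEVELS = {"hd": 720, "fhd": 1080, "uhd": 2160}

def lines(res):
    # Accept either a numeric form like "960P" or a level name.
    res = res.strip().lower()
    if res.endswith("p") and res[:-1].isdigit():
        return int(res[:-1])
    return LEVELS[res]

def select_camera(shoot_res, preset_res):
    """The full-area camera suffices when its shooting resolution meets
    the task's preset resolution; otherwise switch to the camera
    corresponding to the shooting area."""
    return "full_area" if lines(shoot_res) >= lines(preset_res) else "region"
```

With a 960P full-area camera, a 960P vehicle-tracking preset keeps the full-area camera, while a 1080P license-plate preset forces the switch, matching the example above.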
Further, after performing target detection through the full-area camera when the shooting area is a non-target area, the method further includes:
monitoring a shooting picture of the full-area camera by using a background difference algorithm;
if the object to be detected is detected in the shooting picture, determining the corresponding shooting area as a target area or a non-target area according to the pictures of the two or more shooting areas.
Specifically, for a non-target area, the full-area camera is used for target detection, and a background difference algorithm with low computational cost is used to monitor the picture of the full-area camera. The background difference algorithm is a background difference algorithm that introduces a normal distribution, and is specifically expressed as follows:
A pixel point is classified at time t as

foreground, if |I_t − μ_t| > k·σ_t; background, otherwise,

where μ_t = ρ·I_t + (1 − ρ)·μ_{t−1}, σ_t² = ρ·d² + (1 − ρ)·σ_{t−1}², d = |I_t − μ_t|, and k is an empirical value that can be set according to actual requirements (k = 2.5 in this embodiment). The initial values μ_0 and σ_0 at t = 0 are empirically set, I_t is the intensity of the pixel point at time t, and μ_t, σ_t are the parameters of the normal distribution. According to the above formula, each pixel point can be judged to be foreground or background at time t, thereby detecting whether the object to be detected appears in the non-target area.
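A sketch of this per-pixel running normal-distribution model follows. The update rate ρ is not fixed numerically in the excerpt, so its default is an assumption, and whether classification uses the pre- or post-update parameters is ambiguous in the text; the pre-update parameters are used here.

```python
class BackgroundModel:
    """Running-Gaussian background model for one pixel (sketch)."""

    def __init__(self, mu0, sigma0, rho=0.01, k=2.5):
        # mu0, sigma0: empirically set initial parameters;
        # rho: update rate (assumed value); k: empirical threshold factor.
        self.mu, self.sigma, self.rho, self.k = mu0, sigma0, rho, k

    def update(self, intensity):
        """Classify an intensity I_t as foreground (True) or background
        (False), then update mu_t and sigma_t as in the formula above."""
        is_foreground = abs(intensity - self.mu) > self.k * self.sigma
        self.mu = self.rho * intensity + (1 - self.rho) * self.mu
        d = abs(intensity - self.mu)
        var = self.rho * d * d + (1 - self.rho) * self.sigma ** 2
        self.sigma = var ** 0.5
        return is_foreground
```

A pixel whose intensity stays within k·σ of the running mean is absorbed into the background; a sudden jump (an object entering the non-target area) is flagged as foreground.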
Further, after all shooting areas have been determined to be target areas or non-target areas, the target and non-target areas are continuously updated to ensure the accuracy of the target detection algorithm. For a picture taken at a certain moment, if a pixel point p(x, y) with intensity I(x, y) satisfies

∃ p′(x′, y′) ∈ P_target : √((x − x′)² + (y − y′)²) < r_neighbor,

the pixel point p(x, y) is judged to belong to the target area; otherwise, it is judged to belong to the non-target area. Here r_neighbor is an empirical value representing the neighborhood distance within which pixels can be regarded as belonging to the same object to be detected, and it is positively correlated with the size of the object. For example, if the object to be detected is a vehicle, r_neighbor is 20, meaning any distance r smaller than 20 can serve as the discrimination radius for judging whether a pixel point belongs to the target area of the object to be detected; if the object to be detected is a person, r_neighbor is 10, and any r smaller than 10 serves as the discrimination radius. P_target denotes the set of pixel points of the target area of the object to be detected.
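The neighborhood test above reduces to checking whether any known target pixel lies within the discrimination radius; a minimal sketch (function name hypothetical, Euclidean distance assumed):

```python
from math import hypot

def in_target_area(p, target_pixels, r_neighbor):
    """A pixel p = (x, y) belongs to the target area if some pixel of
    P_target lies within the discrimination radius r_neighbor."""
    x, y = p
    return any(hypot(x - tx, y - ty) < r_neighbor
               for tx, ty in target_pixels)
```

For a vehicle (r_neighbor = 20) a pixel about 7 units from a known target pixel is inside the target area, while one 42 units away is not.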
Further, when the object to be detected enters a viewing-angle overlap region of the shooting areas, the camera corresponding to the shooting area that originally performed target detection on the object provides a probability distribution of the object's position;
sequencing shooting areas possibly entered by the object to be detected from high to low according to the probability distribution;
and sequentially inputting the possibly entered shooting areas into a target recognition algorithm according to the sorting result until the shooting areas where the object to be detected enters are determined.
Specifically, when the object to be detected enters the overlap region, the camera corresponding to the shooting area that originally monitored it (or the full-area camera) provides a probability distribution of the object's position, so as to predict which shooting area the object is about to enter. The candidate shooting areas are sorted from high to low probability, and their pictures are input into the target recognition algorithm in that order until the shooting area the object has entered is determined, thereby achieving real-time tracking of the object. Alternatively, the target area the object is about to enter may be predicted according to a Gaussian distribution; after the shooting area the object enters is determined, the camera corresponding to that area and the full-area camera are used alternately to detect the object.
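The probability-ordered probing can be sketched as follows; `recognize` stands in for feeding an area's picture to the target recognition algorithm, and the names are hypothetical.

```python
def locate_entering_area(candidates, recognize):
    """candidates: {area_id: probability of the object entering it};
    recognize(area_id) -> bool. Areas are probed from most to least
    likely until the object is found, minimizing recognition calls."""
    for area in sorted(candidates, key=candidates.get, reverse=True):
        if recognize(area):
            return area
    return None  # object not found in any candidate area
```

Because the most likely area is probed first, the object is usually located after one recognition call instead of scanning every overlapping area.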
S270: after an interval of the preset period, determine the corresponding shooting areas as target areas or non-target areas again according to the pictures of the two or more shooting areas.
Specifically, each time the object to be detected is detected in a non-target area by the background difference algorithm, or after each interval of the preset period T, all shooting areas are re-determined as target areas or non-target areas. This prevents the object from going undetected in time due to its movement or to errors of the background difference algorithm in non-target areas; the re-determination method is the same as in steps S210-S250. The preset period T can be set according to actual requirements and may be a fixed value (e.g. 10 minutes) or a variable value. In this embodiment, the preset period T is preferably changed or reset iteratively according to the reason the period ended. Specifically, if an object to be monitored appears in a non-target area, the preset period T is reset to its initial value T_0 (e.g. T_0 = 10 minutes); otherwise, the preset period T increases with the number of times all shooting areas have been re-determined as target or non-target areas: the areas are re-determined after a first interval T_0, then after an interval (1 + a)·T_0, then after an interval (1 + 2a)·T_0, and so on. The preset period T is specifically expressed as:

T = (1 + n·a)·T_0,
where a is the growth rate of the preset period, T_0 and a can be set according to experience or actual requirements, and n denotes the number of consecutive cycles, since the last reset to the initial value T_0, in which no object to be detected has appeared in the non-target areas.
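The iterative period update can be sketched as a small state transition (function name hypothetical):

```python
def next_period(t0, a, n, object_detected):
    """Return the next preset period T = (1 + n*a) * T0 and the updated
    cycle counter n. A detection in a non-target area resets the period
    to the initial value T0; otherwise the period grows linearly."""
    if object_detected:
        return t0, 0  # reset to the initial value T0
    n += 1
    return (1 + n * a) * t0, n
```

With T_0 = 10 minutes and a = 0.5, quiet cycles stretch the period to 15, then 20 minutes, and a detection snaps it back to 10.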
It should be noted that, in this embodiment, the multi-camera array is used to dynamically perform real-time shooting and target detection on all shooting areas, so that the position movement of the object to be detected and the updating of the target area and the non-target area are fully considered, the data volume of the shooting picture input to the target detection algorithm is reduced, the detection precision is ensured, the detection cost is reduced, and the detection efficiency is improved.
On the basis of the above embodiment, fig. 6 is a schematic diagram illustrating the implementation of the target detection method according to the second embodiment of the present invention. As shown in fig. 6, the target detection method of this embodiment is mainly divided into five parts: determining the target areas, updating the target areas, monitoring the non-target areas with the background difference algorithm, alternately performing target detection on the target areas with different cameras, and iterating and resetting the preset period; these are performed cyclically to realize real-time, continuous shooting and detection. Updating the target areas includes updating according to the probability distribution, updating when an object to be detected is detected in a non-target area, and updating at intervals of the preset period; these updates may occur at any time during the target detection method, and the execution order of the other parts is not limited in this embodiment.
The target detection method provided by the second embodiment of the present invention is optimized on the basis of the foregoing embodiment. The cameras corresponding to the shooting areas are activated sequentially according to a certain rule, and the pictures they monitor are input into the target detection algorithm in turn, so that the object to be detected is located quickly and the target areas are determined. Suitable cameras are selected for target detection in the target areas and non-target areas respectively, balancing detection accuracy and detection cost. The detection and tracking accuracy of the target detection algorithm on the low-resolution picture captured by the full-area camera is evaluated, and shooting of a target area is switched from the camera corresponding to that shooting area to the full-area camera, guaranteeing detection accuracy while reducing the computation of image processing and the detection cost. According to whether the shooting resolution exceeds the preset resolution, shooting is switched from the full-area camera to the corresponding camera, improving detection accuracy and realizing efficient cooperation of the multi-camera array. Dynamic updating of the target and non-target areas is achieved by updating the target areas according to the probability distribution, and by re-determining the corresponding shooting areas as target or non-target areas after an object to be detected is found in a non-target area or after an interval of the preset period.
Example III
Fig. 7 is a block diagram of a target detection apparatus according to a third embodiment of the present invention. The object detection device provided in this embodiment includes:
a first target area determining module 310, configured to determine, according to pictures of two or more shooting areas, that a corresponding shooting area is a target area or a non-target area, where the target area is an area where an object to be detected may appear;
the alternation detection module 320 is configured to alternately perform target detection by using a camera corresponding to the shooting area and a full-area camera if the shooting area is a target area, where the resolution of the camera corresponding to the shooting area is higher than the resolution of the full-area camera;
the full-area detection module 330 is configured to perform object detection by using the full-area camera if the shooting area is a non-object area.
According to the target detection device provided by the third embodiment of the present invention, the first target area determining module determines the corresponding shooting areas as target areas or non-target areas according to the pictures of two or more shooting areas; if a shooting area is a target area, target detection is performed alternately by the camera corresponding to that shooting area and the full-area camera through the alternate detection module; and if a shooting area is a non-target area, target detection is performed by the full-area camera through the full-area detection module. Through this technical scheme, detection cost and detection accuracy are comprehensively considered, and the multi-camera array is utilized for comprehensive target detection.
On the basis of the above embodiment, the first target area determining module 310 includes:
a current region determining unit, configured to input a target recognition algorithm with a central shooting region as a current region and a picture of the central shooting region as a picture of the current region, and determine the current region as a target region or a non-target region, where the central shooting region is a shooting region located at a central position in the two or more shooting regions;
an adjacent shooting area activating unit, configured to input, if the current area is a target area, a picture of the current area adjacent to the shooting area as a picture of a new current area into a target recognition algorithm;
and the diagonal shooting area activating unit is used for inputting a picture of the shooting area on the diagonal of the current area as a picture of a new current area into a target recognition algorithm until all the shooting areas are determined to be target areas or non-target areas if the current area is the non-target area.
On the basis of the above embodiment, the alternation detection module 320 further includes:
the fusion unit is used for fusing the monitoring confidence coefficient difference value of the camera corresponding to the shooting area and the full-area camera with the Bhattacharyya coefficient to obtain a fusion value;
The full-area camera activation unit is used for detecting targets of the shooting areas through the full-area camera if the fusion value is smaller than or equal to a preset threshold value;
and the first area camera activation unit is used for detecting targets of the shooting areas through the cameras corresponding to the shooting areas if the fusion value is larger than a preset threshold value.
Further, the alternation detection module 320 further includes:
a shooting resolution obtaining unit, configured to obtain shooting resolution of the full-area camera on the object to be detected;
a second area camera activation unit, configured to detect a target in the shooting area by the full area camera when the shooting resolution is greater than or equal to a preset resolution;
and a third area camera activation unit, configured to perform target detection on the shooting area through the camera corresponding to the shooting area when the shooting resolution is smaller than the preset resolution.
Further, the device further comprises:
the background difference monitoring module is used for monitoring the shooting picture of the full-area camera by using a background difference algorithm;
and the second target area determining module is used for determining the corresponding shooting area as a target area or a non-target area according to the pictures of two or more shooting areas again if the object to be detected is detected in the shooting pictures.
Further, the device further comprises:
a probability distribution calculation module, configured to, when the object to be detected enters a viewing-angle overlap region of the shooting areas, have the camera corresponding to the shooting area that originally performed target detection on the object provide a probability distribution of the object's position;
the sequencing module is used for sequencing the shooting areas possibly entered by the object to be detected from high to low according to the probability distribution;
and the shooting area determining module is used for sequentially inputting the shooting areas possibly entering into a target recognition algorithm according to the sequencing result until the shooting area where the object to be detected enters is determined.
Further, the second target area determining module is further configured to:
and after the interval preset period, determining the corresponding shooting area as a target area or a non-target area according to the pictures of the two or more shooting areas.
The target detection device provided in the third embodiment of the present invention may be used to execute the target detection method provided in any of the foregoing embodiments, and has corresponding functions and beneficial effects.
Example IV
Fig. 8 is a schematic diagram of the hardware structure of a device according to a fourth embodiment of the present invention. As shown in fig. 8, the device provided in this embodiment includes a processor 410 and a storage device 420. There may be one or more processors in the device; one processor 410 is taken as an example in fig. 8. The processor 410 and the storage device 420 in the device may be connected by a bus or otherwise; connection by a bus is taken as an example in fig. 8.
The one or more programs are executed by the one or more processors 410 to cause the one or more processors to implement the target detection method as described in any of the above embodiments.
The storage 420 in the apparatus is used as a computer readable storage medium, and may be used to store one or more programs, such as a software program, a computer executable program, and modules, such as program instructions/modules corresponding to the target detection method in the embodiment of the present invention (for example, the modules in the target detection apparatus shown in fig. 7 include the first target area determining module 310, the alternate detection module 320, and the full area detection module 330). The processor 410 executes various functional applications of the device and data processing by running software programs, instructions and modules stored in the storage 420, i.e. implements the object detection method in the above-described method embodiments.
The storage device 420 mainly includes a storage program area and a storage data area, wherein the storage program area can store an operating system and at least one application program required by functions; the storage data area may store data or the like created according to the use of the device (such as a picture of a photographing area, an object detection algorithm-related code, or the like in the above-described embodiment). In addition, the storage 420 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the storage 420 may further include memory remotely located with respect to the processor 410, which may be connected to the device via a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
And, when one or more programs included in the above-described apparatus are executed by the one or more processors 410, the programs perform the following operations:
determining the corresponding shooting areas as target areas or non-target areas according to pictures of two or more shooting areas, wherein the target areas are areas where an object to be detected possibly appears;
if the shooting area is a target area, alternately detecting targets by using a camera corresponding to the shooting area and a full-area camera, wherein the resolution of the camera corresponding to the shooting area is higher than that of the full-area camera;
and if the shooting area is a non-target area, performing target detection through the full-area camera.
The apparatus according to the present embodiment belongs to the same inventive concept as the target detection method according to the above embodiment, and technical details not described in detail in the present embodiment can be seen in any of the above embodiments, and the present embodiment has the same advantages as those of executing the target detection method.
On the basis of the above-described embodiments, the present embodiment further provides a computer-readable storage medium having stored thereon a computer program which, when executed by an object detection apparatus, implements the object detection method in any of the above-described embodiments of the present invention, the method comprising:
Determining the corresponding shooting areas as target areas or non-target areas according to pictures of two or more shooting areas, wherein the target areas are areas where an object to be detected possibly appears;
if the shooting area is a target area, alternately detecting targets by using a camera corresponding to the shooting area and a full-area camera, wherein the resolution of the camera corresponding to the shooting area is higher than that of the full-area camera;
and if the shooting area is a non-target area, performing target detection through the full-area camera.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present invention is not limited to the operations of the target detection method described above, but may also perform the related operations in the target detection method provided in any embodiment of the present invention, and has corresponding functions and beneficial effects.
From the above description of embodiments, it will be clear to a person skilled in the art that the present invention may be implemented by means of software and necessary general purpose hardware, but of course also by means of hardware, although in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present invention may be embodied essentially or in a part contributing to the prior art in the form of a software product, which may be stored in a computer readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a random access Memory (Random Access Memory, RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, etc., and include several instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the object detection method according to the embodiments of the present invention.
Note that the above is only a preferred embodiment of the present invention and the technical principle applied. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, while the invention has been described in connection with the above embodiments, the invention is not limited to the embodiments, but may be embodied in many other equivalent forms without departing from the spirit or scope of the invention, which is set forth in the following claims.

Claims (8)

1. A method of detecting an object, comprising:
determining the corresponding shooting areas as target areas or non-target areas according to pictures of two or more shooting areas, wherein the target areas are areas where an object to be detected possibly appears;
if the shooting area is a target area, alternately detecting targets by using a camera corresponding to the shooting area and a full-area camera, wherein the resolution of the camera corresponding to the shooting area is higher than that of the full-area camera;
if the shooting area is a non-target area, performing target detection through the whole-area camera;
when an object to be detected enters a viewing-angle overlap region of the shooting areas, the camera corresponding to the shooting area that originally performed target detection on the object provides a probability distribution of the object's position;
sequencing shooting areas possibly entered by the object to be detected from high to low according to the probability distribution;
sequentially inputting the possibly entered shooting areas into a target recognition algorithm according to the sorting result until the shooting areas where the object to be detected enters are determined;
after performing target detection through the full-area camera when the shooting area is a non-target area, the method further comprises:
monitoring a shooting picture of the full-area camera by using a background difference algorithm;
if the object to be detected is detected in the shooting picture, determining the corresponding shooting area as a target area or a non-target area according to the pictures of two or more shooting areas;
the background differential algorithm is a background differential algorithm introducing normal distribution, and specifically expressed as follows:
a pixel point is classified at time t as foreground if |I_t − μ_t| > k·σ_t, and as background otherwise, wherein μ_t = ρ·I_t + (1 − ρ)·μ_{t−1}, σ_t² = ρ·d² + (1 − ρ)·σ_{t−1}², d = |I_t − μ_t|, k is an empirical value that can be set according to actual requirements (k = 2.5 in this embodiment), the initial values at t = 0 are μ_0 = I_0 and an empirically set σ_0², I_t is the intensity of the pixel point at time t, and μ_t, σ_t are parameters of the normal distribution; according to the above formula, whether the pixel point is foreground or background at time t can be judged, so as to detect whether the object to be detected appears in the non-target area.
2. The method according to claim 1, wherein the determining that the corresponding photographing region is the target region or the non-target region according to the pictures of the two or more photographing regions includes:
taking a central shooting area as a current area, taking a picture of the central shooting area as a picture of the current area, inputting a target recognition algorithm, and determining the current area as a target area or a non-target area, wherein the central shooting area is a shooting area positioned at a central position in the two or more shooting areas;
if the current area is a target area, taking the picture of the shooting area adjacent to the current area as the picture of a new current area and inputting the picture into a target recognition algorithm;
and if the current area is a non-target area, taking the picture of the shooting area on the diagonal line of the current area as the picture of the new current area to input a target recognition algorithm until all the shooting areas are determined to be target areas or non-target areas.
3. The method of claim 1, wherein the alternately performing object detection by the camera corresponding to the photographing region and the full-area camera comprises:
fusing the difference between the monitoring confidence of the camera corresponding to the shooting area and that of the full-area camera with the Bhattacharyya coefficient to obtain a fusion value;
if the fusion value is smaller than or equal to a preset threshold value, performing target detection on the shooting area through the full-area camera;
and if the fusion value is greater than a preset threshold value, performing target detection on the shooting area through a camera corresponding to the shooting area.
4. The method of claim 3, wherein the alternately performing object detection by the camera corresponding to the photographing region and the full-area camera, further comprises:
acquiring shooting resolution of the full-area camera on the object to be detected;
when the shooting resolution is greater than or equal to a preset resolution, performing target detection on the shooting area through the full-area camera;
and when the shooting resolution is smaller than the preset resolution, detecting the target of the shooting area through a camera corresponding to the shooting area.
5. The method as recited in claim 1, further comprising:
and after the interval preset period, determining the corresponding shooting area as a target area or a non-target area according to the pictures of the two or more shooting areas.
6. An object detection apparatus, comprising:
the first target area determining module is used for determining the corresponding shooting areas as target areas or non-target areas according to pictures of two or more shooting areas, wherein a target area is an area where the object to be detected may appear;
the alternating detection module is used for, if a shooting area is a target area, alternately performing target detection through the camera corresponding to the shooting area and the full-area camera, wherein the resolution of the camera corresponding to the shooting area is higher than that of the full-area camera;
the full-area shooting module is used for performing target detection through the full-area camera if the shooting area is a non-target area;
the probability distribution calculation module is used for, when the object to be detected enters a viewing-angle overlap area of the shooting areas, providing a probability distribution of the position of the object to be detected through the camera that originally performed target detection on the object to be detected;
the sorting module is used for sorting the shooting areas that the object to be detected may enter, from high to low, according to the probability distribution;
the shooting area determining module is used for sequentially inputting the pictures of the shooting areas that the object to be detected may enter into the target recognition algorithm according to the sorting result, until the shooting area that the object to be detected has entered is determined;
the background difference monitoring module is used for monitoring the shooting picture of the full-area camera by using a background difference algorithm;
the second target area determining module is used for determining the corresponding shooting area as a target area or a non-target area according to the pictures of two or more shooting areas again if the object to be detected is detected in the shooting pictures;
the background differential algorithm is a background differential algorithm introducing a normal distribution, specifically expressed as follows: a pixel point is judged to be foreground at time t if |I_t − μ_t| > kσ_t, and background otherwise;
wherein μ_t = ρI_t + (1 − ρ)μ_{t−1}, σ_t² = d²ρ + (1 − ρ)σ_{t−1}², and d = |I_t − μ_t|; k is an empirical value which can be set according to actual requirements, and k = 2.5 is set in this embodiment; the initial values at t = 0 are μ₀ = I₀ and σ₀², with σ₀ set empirically; I_t is the intensity of the pixel point at time t, and μ_t, σ_t are the parameters of the normal distribution; according to the above formula, whether a pixel point is foreground or background at time t can be judged, so as to detect whether the object to be detected appears in the non-target area.
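The per-pixel normal-distribution model of claim 6 can be sketched in NumPy as one update step per frame. The k = 2.5 value follows the embodiment; the learning rate ρ = 0.01 and the update order (d computed from the already-updated mean, as the claim's d = |I_t − μ_t| suggests) are assumptions.

```python
import numpy as np

def update_background(I_t, mu, var, rho=0.01, k=2.5):
    """One step of the normal-distribution background model.
    I_t, mu, var: float arrays of equal shape (pixel intensity, running
    mean, running variance). Returns (foreground_mask, mu, var)."""
    # a pixel is foreground when it deviates from the model by more than k sigma
    foreground = np.abs(I_t - mu) > k * np.sqrt(var)
    # mu_t = rho * I_t + (1 - rho) * mu_{t-1}
    mu = rho * I_t + (1 - rho) * mu
    # d = |I_t - mu_t|, then sigma_t^2 = rho * d^2 + (1 - rho) * sigma_{t-1}^2
    d = np.abs(I_t - mu)
    var = rho * d ** 2 + (1 - rho) * var
    return foreground, mu, var
```

Initialization follows the claim: μ₀ is the first frame I₀ and σ₀² is an empirically chosen constant; the model then adapts as frames arrive, so only pixels far from their running mean are flagged as the object to be detected.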
7. An apparatus, comprising:
one or more processors;
a memory for storing one or more programs;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the target detection method of any of claims 1-5.
8. A computer-readable storage medium, on which a computer program is stored, characterized in that the program, when being executed by a processor, implements the object detection method according to any one of claims 1-5.
CN201910001609.3A 2019-01-02 2019-01-02 Target detection method, device, equipment and storage medium Active CN109685062B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910001609.3A CN109685062B (en) 2019-01-02 2019-01-02 Target detection method, device, equipment and storage medium


Publications (2)

Publication Number Publication Date
CN109685062A CN109685062A (en) 2019-04-26
CN109685062B true CN109685062B (en) 2023-07-25

Family

ID=66191539

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910001609.3A Active CN109685062B (en) 2019-01-02 2019-01-02 Target detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN109685062B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110177256B (en) * 2019-06-17 2021-12-14 北京影谱科技股份有限公司 Tracking video data acquisition method and device
CN110458198B (en) * 2019-07-10 2022-03-29 哈尔滨工业大学(深圳) Multi-resolution target identification method and device
CN111770266B (en) * 2020-06-15 2021-04-06 北京世纪瑞尔技术股份有限公司 Intelligent visual perception system
CN112766210A (en) * 2021-01-29 2021-05-07 苏州思萃融合基建技术研究所有限公司 Safety monitoring method and device for building construction and storage medium

Family Cites Families (6)

Publication number Priority date Publication date Assignee Title
US7929016B2 (en) * 2005-06-07 2011-04-19 Panasonic Corporation Monitoring system, monitoring method and camera terminal
KR100834465B1 (en) * 2006-12-21 2008-06-05 주식회사 다누시스 System and method for security using motion detection
TWI492188B (en) * 2008-12-25 2015-07-11 Univ Nat Chiao Tung Method for automatic detection and tracking of multiple targets with multiple cameras and system therefor
US9147260B2 (en) * 2010-12-20 2015-09-29 International Business Machines Corporation Detection and tracking of moving objects
CN106327517B (en) * 2015-06-30 2019-05-28 芋头科技(杭州)有限公司 A kind of target tracker and method for tracking target
CN106249267A (en) * 2016-09-30 2016-12-21 南方科技大学 A kind of target location tracking method and device

Non-Patent Citations (2)

Title
Moving object detection based on the difference method and probability estimation; Yuan Bozhou; Journal of Tianjin University of Technology; Vol. 28, No. 1; 63-67 *
Pedestrian detection in high-point surveillance based on background modeling and inter-frame difference; Hu Yazhou; Research and Exploration in Laboratory; Vol. 37, No. 9; 12-16 *

Also Published As

Publication number Publication date
CN109685062A (en) 2019-04-26

Similar Documents

Publication Publication Date Title
CN109685062B (en) Target detection method, device, equipment and storage medium
US10878584B2 (en) System for tracking object, and camera assembly therefor
CN110866480B (en) Object tracking method and device, storage medium and electronic device
CN109151375B (en) Target object snapshot method and device and video monitoring equipment
CN108921823B (en) Image processing method, image processing device, computer-readable storage medium and electronic equipment
US9185402B2 (en) Traffic camera calibration update utilizing scene analysis
JP5267596B2 (en) Moving body detection device
EP3641298B1 (en) Method and device for capturing target object and video monitoring device
CN110610150B (en) Tracking method, device, computing equipment and medium of target moving object
JP7230507B2 (en) Deposit detection device
JP6809613B2 (en) Image foreground detection device, detection method and electronic equipment
KR102196086B1 (en) Method for autonomous balancing PTZ of camera and system for providing traffic information therewith
CN115760912A (en) Moving object tracking method, device, equipment and computer readable storage medium
KR101236223B1 (en) Method for detecting traffic lane
Goudar et al. Online traffic density estimation and vehicle classification management system
EP3044734B1 (en) Isotropic feature matching
JP5127692B2 (en) Imaging apparatus and tracking method thereof
CN112884805A (en) Cross-scale self-adaptive mapping light field imaging method
CN116342642A (en) Target tracking method, device, electronic equipment and readable storage medium
US8433139B2 (en) Image processing apparatus, image processing method and program for segmentation based on a degree of dispersion of pixels with a same characteristic quality
CN114882003A (en) Method, medium and computing device for detecting shooting pose change of camera
CN115412668A (en) Tracking shooting method and device and computer readable storage medium
CN113810633A (en) Image processing device
JP7250433B2 (en) IMAGING DEVICE, CONTROL METHOD AND PROGRAM
JP2017117038A (en) Road surface estimation device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant