CN117333805A - Camera blind spot area analysis method, terminal equipment and storage medium - Google Patents

Camera blind spot area analysis method, terminal equipment and storage medium

Info

Publication number
CN117333805A
CN117333805A (application CN202311003758.6A)
Authority
CN
China
Prior art keywords
area
blind spot
target
camera
potential
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311003758.6A
Other languages
Chinese (zh)
Inventor
洪诗山
俞文勇
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Ropt Technology Group Co ltd
Original Assignee
Ropt Technology Group Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ropt Technology Group Co ltd
Priority to CN202311003758.6A
Publication of CN117333805A
Legal status: Pending


Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects


Abstract

The invention relates to a camera blind spot area analysis method, a terminal device, and a storage medium, wherein the method comprises the following steps: collecting the images captured by each camera in a blind spot analysis area, and determining the effective detection area of each camera; extracting each target in each image; associating each target with its corresponding shooting position and shooting time to obtain the position information of each target over a time sequence; performing trajectory analysis on the position information of each target over the time sequence in combination with road information to obtain the movement path of each target over the time sequence; and, after segmenting each movement path, judging whether each segmented sub-path lies within an effective detection area, and setting the road area corresponding to any sub-path that is not within an effective detection area as a potential blind spot area. The method and the device can locate potential blind spot areas more reasonably and accurately, thereby improving the coverage and monitoring capability of the surveillance system and effectively reducing the safety hazards caused by blind spots.

Description

Camera blind spot area analysis method, terminal equipment and storage medium
Technical Field
The present invention relates to the field of deployment of monitoring cameras, and in particular, to a method for analyzing blind spot areas of a camera, a terminal device, and a storage medium.
Background
With the development of cities and increasing safety requirements, the deployment of surveillance cameras has become an important means of safeguarding public safety. However, in a large-scale surveillance camera network, the actual position of a camera often deviates to some degree from its recorded longitude and latitude for various reasons arising during installation and maintenance. Such deviations can lead to monitoring blind spots, i.e. places where the cameras do not completely cover the target area, thereby reducing the effectiveness of monitoring. Existing blind spot detection methods suffer from problems such as dependence on the accuracy of GPS positioning and the inability to handle the dynamic changes of pedestrian and vehicle activity.
Disclosure of Invention
To solve these problems, the invention provides a camera blind spot area analysis method, a terminal device, and a storage medium.
The specific scheme is as follows:
A method for analyzing blind spot areas of a camera comprises the following steps:
S101: collecting the images captured by each camera in the blind spot analysis area, and determining the effective detection area of each camera;
S102: extracting each target in each image;
S103: associating each target with its corresponding shooting position and shooting time to obtain the position information of each target over a time sequence;
S104: performing trajectory analysis on the position information of each target over the time sequence in combination with road information to obtain the movement path of each target over the time sequence;
S105: after segmenting each movement path, judging whether each segmented sub-path lies within an effective detection area, and setting the road area corresponding to a sub-path that is not within an effective detection area as a potential blind spot area.
Further, the method further comprises: generating a blind spot distribution recommendation result on a map in a visual manner according to the potential blind spot areas.
Further, the method further comprises: calculating the weight of each potential blind spot area by combining the magnitudes of the weight judgment indices corresponding to that area with their weight factors, and evaluating the likelihood that a blind spot exists in each potential blind spot area according to its weight.
Further, the weight judgment indices include one or more of the number of historical illegal behaviors, the number of WiFi hotspots, and the number of vehicle violation snapshots.
Further, the method further comprises: generating a blind spot distribution recommendation result on the map in a visual manner according to the evaluated likelihood that a blind spot exists in each potential blind spot area.
A method for analyzing blind spot areas of a camera comprises the following steps:
S201: collecting the images captured by each camera in the blind spot analysis area;
S202: extracting each target in each image;
S203: associating each target with its corresponding shooting position and shooting time to obtain the position information of each target over a time sequence;
S204: performing the following potential blind spot area judgments in combination with the public areas and residential areas in the map:
if the target appears in a residential area at the next moment after a public area, the residential exit area is judged to be a potential blind spot area;
if the target appears in a public area at the next moment after a residential area without appearing in the residential area, the residential entrance area is judged to be a potential blind spot area;
if the target appears in a public area only a single time, the public area exit or entrance area is judged to be a potential blind spot area;
if the target appears in a residential area only a single time, the residential exit or entrance area is judged to be a potential blind spot area.
A camera blind spot area analysis terminal device comprises a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to the above embodiments of the invention when executing the computer program.
A computer-readable storage medium stores a computer program which, when executed by a processor, implements the steps of the method according to the above embodiments of the invention.
By adopting the above technical scheme, the positions of potential blind spot areas can be analyzed more reasonably and accurately, the coverage and monitoring capability of the surveillance system can be further improved, and the safety hazards caused by blind spots can be effectively reduced.
Drawings
Fig. 1 is a flowchart of a first embodiment of the present invention.
Fig. 2 is a flowchart of a second embodiment of the present invention.
Detailed Description
To further illustrate the various embodiments, the invention is described with reference to the accompanying drawings. The drawings, which are incorporated in and constitute a part of this disclosure, illustrate the embodiments and, together with the description, serve to explain their principles. With reference to these materials, a person of ordinary skill in the art will understand other possible embodiments and advantages of the present invention.
The invention will now be further described with reference to the drawings and the detailed description.
Embodiment One:
The embodiment of the invention provides a method for analyzing blind spot areas of a camera, as shown in fig. 1, comprising the following steps:
S101: acquire the images captured by each camera in the blind spot analysis area, and determine the effective detection area of each camera.
The cameras in this embodiment are cameras in a surveillance camera network. The blind spot analysis area is an area for which blind spot analysis is required. The images captured by the cameras should come from the same time period, so that target trajectories can be analyzed across images captured by different cameras.
The effective detection area of a camera may be determined from its mounting position and field of view.
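As a minimal illustrative sketch (not part of the embodiment's specification), the effective detection area could be approximated as a circular sector built from the mounting position, heading, field of view and an assumed effective range; the planar coordinate system and the use of the shapely library are assumptions of this example:

```python
# Sketch: approximate a camera's effective detection area as a circular
# sector (heading +/- half the field of view, out to an effective range).
# Planar x/y coordinates (e.g. a local projection of lon/lat) are assumed.
import math
from shapely.geometry import Point, Polygon

def effective_detection_area(x, y, heading_deg, fov_deg, range_m, steps=32):
    """Return a shapely Polygon approximating the camera's coverage."""
    half = fov_deg / 2.0
    pts = [(x, y)]  # sector apex at the mounting position
    for i in range(steps + 1):
        ang = math.radians(heading_deg - half + i * fov_deg / steps)
        pts.append((x + range_m * math.sin(ang), y + range_m * math.cos(ang)))
    return Polygon(pts)

# Example: a camera at (0, 0) facing north with a 90-degree FOV and 50 m range.
area = effective_detection_area(0.0, 0.0, heading_deg=0.0, fov_deg=90.0, range_m=50.0)
print(area.contains(Point(0, 30)))   # True: in front of the camera
print(area.contains(Point(0, -10)))  # False: behind the camera
```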
S102: each object in each image is extracted.
A target in an image may be a face target or a vehicle target; this embodiment takes face targets as an example for the analysis. Targets can be extracted using existing computer vision technology, identifying each target in an image by its features. Face features include facial key points, facial expression, age and gender, etc.; vehicle features include the license plate number, vehicle type, color, etc.
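The embodiment only requires existing computer vision technology for this step. As one hedged example, the sketch below uses OpenCV's bundled Haar cascade face detector as a stand-in; the choice of detector, and the assumption that faces are the targets, are illustrative rather than prescribed:

```python
# Sketch: extract face targets from a collected image with OpenCV.
# Any detector (face, vehicle, license plate) could be substituted here.
import cv2

face_cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

def extract_targets(image_path):
    """Return a list of bounding boxes (x, y, w, h) for detected targets."""
    img = cv2.imread(image_path)
    if img is None:
        return []
    gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
    # scaleFactor and minNeighbors are typical defaults, not tuned values.
    return [tuple(box) for box in face_cascade.detectMultiScale(gray, 1.1, 5)]

# In practice each detection would also be matched to an identity (e.g. by
# face features or a license plate) so the same target can be followed
# across cameras; that re-identification step is outside this sketch.
```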
S103: associate each target with its corresponding shooting position and shooting time to obtain the position information of each target over a time sequence.
The shooting position corresponding to a target is the mounting position of the camera that captured the image containing that target, and the shooting time is the capture time of that image. After sorting by time in ascending order, the position information (i.e. the shooting positions) of each target over the time sequence is obtained.
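A minimal sketch of this association step, assuming detection records of the form (target identity, shooting time, camera mounting position) — the record layout is an assumption made for illustration; grouping by identity and sorting by time yields each target's position sequence:

```python
# Sketch: build each target's time-ordered position sequence from
# (target_id, shooting_time, camera_position) detection records.
from collections import defaultdict

detections = [
    # (target_id, shooting time as unix seconds, (x, y) camera mounting position)
    ("person_A", 1700000300, (120.0, 35.0)),
    ("person_A", 1700000060, (100.0, 20.0)),
    ("person_B", 1700000120, (300.0, 80.0)),
]

def positions_over_time(records):
    """Map target_id -> list of (time, position), sorted by time ascending."""
    by_target = defaultdict(list)
    for target_id, t, pos in records:
        by_target[target_id].append((t, pos))
    for seq in by_target.values():
        seq.sort()  # ascending shooting time
    return dict(by_target)

print(positions_over_time(detections)["person_A"])
# [(1700000060, (100.0, 20.0)), (1700000300, (120.0, 35.0))]
```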
S104: perform trajectory analysis on the position information of each target over the time sequence in combination with road information to obtain the movement path of each target over the time sequence.
Because a target must move along the roads, combining the road information yields the movement trajectory of the target between two adjacent position observations.
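One possible realisation (an assumption of this sketch, not the patent's prescribed algorithm) is to treat the road information as a graph and take the road-network path between two adjacent observations, e.g. with networkx; the toy graph below stands in for real road data:

```python
# Sketch: infer the road-constrained path between two adjacent observations
# by shortest path over a road graph (nodes = intersections, edges = roads).
import networkx as nx

road_graph = nx.Graph()
road_graph.add_edge("cam_1_corner", "junction_A", length=120.0)
road_graph.add_edge("junction_A", "junction_B", length=80.0)
road_graph.add_edge("junction_B", "cam_2_corner", length=60.0)

def path_between(observed_from, observed_to):
    """Road-network path assumed to be taken between two camera positions."""
    return nx.shortest_path(road_graph, observed_from, observed_to, weight="length")

print(path_between("cam_1_corner", "cam_2_corner"))
# ['cam_1_corner', 'junction_A', 'junction_B', 'cam_2_corner']
```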
S105: after segmenting each movement path, judge whether each segmented sub-path lies within an effective detection area, and set the road area corresponding to a sub-path that is not within an effective detection area as a potential blind spot area.
The segment length can be set by a person skilled in the art; the shorter the segment length, the higher the accuracy of the subsequent judgment.
A sub-path that lies within an effective detection area can be captured by some camera and therefore does not belong to a blind spot area. If a sub-path does not lie within any effective detection area, it may not be captured by any camera and is therefore set as a potential (possible) blind spot area.
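Putting S105 into a minimal sketch: the movement path is cut into fixed-length segments and any segment not covered by the union of the cameras' effective detection areas is flagged; the segment length and the shapely containment test are illustrative assumptions:

```python
# Sketch: segment a movement path and mark sub-paths that no effective
# detection area covers as potential blind spot areas.
from shapely.geometry import LineString, Polygon
from shapely.ops import unary_union

def potential_blind_segments(path_points, detection_areas, segment_len=20.0):
    """Return sub-path LineStrings not covered by any detection area."""
    path = LineString(path_points)
    covered = unary_union(detection_areas)  # union of all cameras' areas
    blind = []
    d = 0.0
    while d < path.length:
        end = min(d + segment_len, path.length)
        seg = LineString([path.interpolate(d), path.interpolate(end)])
        if not covered.covers(seg):          # shorter segments -> finer judgment
            blind.append(seg)
        d = end
    return blind

areas = [Polygon([(0, 0), (60, 0), (60, 60), (0, 60)])]   # one camera's coverage
segments = potential_blind_segments([(10, 10), (10, 150)], areas)
print(len(segments), "potential blind spot sub-path(s)")
```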
Further, to make the blind spot analysis results better fit user requirements, this embodiment also calculates the weight of each potential blind spot area by combining the magnitudes of the weight judgment indices corresponding to that area with their weight factors, and evaluates the likelihood that a blind spot exists in each potential blind spot area according to its weight.
The weight judgment indices include the number of historical illegal behaviors, the number of WiFi hotspots, the number of vehicle violation snapshots, and the like. The more historical illegal behaviors and the more vehicle violation snapshots an area has, the more it needs focused monitoring, so its weight factor is larger; the more WiFi hotspots, the denser the flow of people in the area, so its weight factor is likewise larger. In other embodiments, a person skilled in the art may add or remove weight judgment indices according to their own needs, which is not limited here. A weighted average of the weight factors and index magnitudes yields the weight value (i.e. importance) of each potential blind spot area; this weight value represents the likelihood that the potential blind spot area contains an actual blind spot, and the larger the weight value, the greater that likelihood.
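A minimal sketch of this weighted evaluation; the particular index names, weight factors and normalisation below are illustrative assumptions rather than values given by the embodiment:

```python
# Sketch: score each potential blind spot area by a weighted combination of
# its indices; a higher score means a higher likelihood that a real blind
# spot exists there.
WEIGHT_FACTORS = {            # assumed relative importance of each index
    "historical_violations": 0.5,
    "wifi_hotspots": 0.2,     # proxy for crowd density
    "vehicle_violation_snapshots": 0.3,
}

def blind_spot_weight(indices):
    """Weighted average of the area's indices (each pre-normalised to [0, 1])."""
    total = sum(WEIGHT_FACTORS.values())
    return sum(WEIGHT_FACTORS[k] * indices.get(k, 0.0) for k in WEIGHT_FACTORS) / total

regions = {
    "area_1": {"historical_violations": 0.9, "wifi_hotspots": 0.4, "vehicle_violation_snapshots": 0.7},
    "area_2": {"historical_violations": 0.1, "wifi_hotspots": 0.2, "vehicle_violation_snapshots": 0.0},
}
ranked = sorted(regions, key=lambda r: blind_spot_weight(regions[r]), reverse=True)
print(ranked)  # areas ordered from most to least likely to contain a blind spot
```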
To make it easier for the user to view the potential blind spot areas, this embodiment further comprises: generating a blind spot distribution recommendation result on a map in a visual manner according to the potential blind spot areas, i.e. marking the potential blind spot areas on the map. Since this embodiment also evaluates the likelihood that a blind spot exists in each potential blind spot area, all potential blind spot areas can be ranked by that evaluation when they are marked (for example, the greater the likelihood, the darker the color; the smaller the likelihood, the lighter the color), so that the user can prioritize the areas with higher risk. Through this visual presentation, the user can intuitively understand the blind spot distribution and the recommended areas, and can adjust and optimize camera mounting positions according to the recommendation result, thereby improving monitoring coverage and security.
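One possible rendering of the recommendation map, sketched with the folium mapping library (an assumed choice); the colour thresholds and the example coordinates are made up for illustration:

```python
# Sketch: draw potential blind spot areas on an interactive map, with darker
# colours for areas whose evaluated blind spot likelihood is higher.
import folium

def render_blind_spot_map(areas, center, out_file="blind_spots.html"):
    """areas: list of (polygon lat/lon vertex list, likelihood in [0, 1])."""
    m = folium.Map(location=center, zoom_start=16)
    for vertices, likelihood in areas:
        colour = "#8b0000" if likelihood > 0.66 else "#ff4500" if likelihood > 0.33 else "#ffd700"
        folium.Polygon(locations=vertices, color=colour, fill=True,
                       fill_opacity=0.3 + 0.4 * likelihood,
                       tooltip=f"blind spot likelihood: {likelihood:.2f}").add_to(m)
    m.save(out_file)

# Example with made-up coordinates.
render_blind_spot_map(
    [([(24.4805, 118.0801), (24.4805, 118.0809), (24.4812, 118.0809), (24.4812, 118.0801)], 0.8)],
    center=(24.4808, 118.0805))
```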
The embodiment of the invention has the following beneficial effects:
(1) Improved coverage of the monitoring system: by accurately identifying the blind spots present in the monitoring system, the invention can recommend areas where camera positions need to be added or adjusted. Adding cameras at blind spot positions expands the coverage of the monitoring system and improves the monitoring capability for key areas.
(2) Improved safety and reliability of the monitoring system: by addressing problems in surveillance camera deployment, such as position discrepancies caused by manual input errors or camera displacement, the invention can ensure that the recorded camera positions match the actual positions. This reduces monitoring blind spots, improves the safety and reliability of the monitoring system, and lowers the risk of security loopholes.
(3) Intelligent blind spot analysis and recommendation: through algorithmic models and data mining techniques, potential blind spot positions can be accurately identified and targeted camera deployment suggestions given, improving the efficiency of the monitoring system.
(4) Optimized resource utilization and cost effectiveness: reasonable deployment of surveillance cameras avoids duplicated coverage and wasted resources, optimizes resource utilization, and improves the cost effectiveness of the monitoring system. The recommended blind spot locations take a comprehensive set of factors into account, aiming to provide the most complete and efficient monitoring coverage while conserving resources and reducing deployment costs.
Embodiment Two:
The embodiment of the invention provides a method for analyzing blind spot areas of a camera that judges potential blind spot areas by means of target-loss judgment, as shown in fig. 2, comprising the following steps:
S201: acquire the images captured by each camera in the blind spot analysis area.
S202: extract each target in each image.
S203: associate each target with its corresponding shooting position and shooting time to obtain the position information of each target over a time sequence.
S204: perform the following potential blind spot area judgments in combination with the public areas and residential areas in the map (a minimal rule sketch follows this list):
if the target appears in a residential area at the next moment after a public area, the residential exit area is judged to be a potential blind spot area;
if the target appears in a public area at the next moment after a residential area without appearing in the residential area, the residential entrance area is judged to be a potential blind spot area;
if the target appears in a public area only a single time, the public area exit or entrance area is judged to be a potential blind spot area;
if the target appears in a residential area only a single time, the residential exit or entrance area is judged to be a potential blind spot area.
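A minimal sketch of the target-loss rules above, operating on the time-ordered area labels of one target; the labelling of each observation as "public" or "residential" is taken as a given input, and the extra condition of the second rule is simplified here:

```python
# Sketch: rule-based potential blind spot judgment from the time-ordered
# area labels ("public" / "residential") of a single target's observations.
def judge_potential_blind_spots(observations):
    blind = set()
    if len(observations) == 1:                       # rules 3 and 4: single appearance
        blind.add(f"{observations[0]} exit/entrance area")
        return blind
    for prev, curr in zip(observations, observations[1:]):
        if prev == "public" and curr == "residential":
            blind.add("residential exit area")       # rule 1 of the list above
        elif prev == "residential" and curr == "public":
            # rule 2 (simplified: the "without appearing in the residential
            # area" condition of the original rule is omitted in this sketch)
            blind.add("residential entrance area")
    return blind

print(judge_potential_blind_spots(["public", "residential"]))   # {'residential exit area'}
print(judge_potential_blind_spots(["public"]))                  # {'public exit/entrance area'}
```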
For determining potential blind spot areas of a specific area (residential or public), this embodiment may be used alone or in combination with the method of Embodiment One.
Embodiment Three:
The invention also provides a camera blind spot area analysis terminal device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method of Embodiment One when executing the computer program.
Further, as an executable scheme, the camera blind spot area analysis terminal device may be a computing device such as a desktop computer, a notebook computer, a palmtop computer, or a cloud server. The camera blind spot area analysis terminal device may include, but is not limited to, a processor and a memory. Those skilled in the art will appreciate that the composition described above is merely an example of the camera blind spot area analysis terminal device and does not constitute a limitation of it; the device may include more or fewer components than those listed, may combine certain components, or may use different components. For example, the camera blind spot area analysis terminal device may further include input/output devices, network access devices, a bus, and the like, which the embodiment of the invention does not limit.
Further, as an executable scheme, the processor may be a central processing unit (Central Processing Unit, CPU), another general-purpose processor, a digital signal processor (Digital Signal Processor, DSP), an application specific integrated circuit (Application Specific Integrated Circuit, ASIC), a field-programmable gate array (Field-Programmable Gate Array, FPGA), or another programmable logic device, etc. The general-purpose processor may be a microprocessor, or the processor may be any conventional processor. The processor is the control center of the camera blind spot area analysis terminal device and uses various interfaces and lines to connect the parts of the whole device.
The memory may be used to store the computer program and/or modules, and the processor implements the various functions of the camera blind spot area analysis terminal device by running or executing the computer program and/or modules stored in the memory and invoking the data stored in the memory. The memory may mainly include a program storage area and a data storage area, wherein the program storage area may store an operating system and at least one application program required for a function, and the data storage area may store data created according to the use of the terminal device, etc. In addition, the memory may include high-speed random access memory, and may also include non-volatile memory, such as a hard disk, memory, plug-in hard disk, Smart Media Card (SMC), Secure Digital (SD) card, Flash Card, at least one magnetic disk storage device, flash memory device, or other solid-state storage device.
The present invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps of the above-described method of the embodiments of the present invention.
If the modules/units integrated in the camera blind spot area analysis terminal device are implemented in the form of software functional units and sold or used as independent products, they may be stored in a computer-readable storage medium. On this understanding, the present invention may implement all or part of the flow of the methods of the above embodiments by instructing the relevant hardware through a computer program, which may be stored in a computer-readable storage medium; when executed by a processor, the computer program implements the steps of each of the method embodiments described above. The computer program comprises computer program code, which may be in source code form, object code form, an executable file, or some intermediate form, etc. The computer-readable medium may include: any entity or device capable of carrying the computer program code, a recording medium, a USB flash drive, a removable hard disk, a magnetic disk, an optical disk, a computer memory, a read-only memory (ROM, Read-Only Memory), a random access memory (RAM, Random Access Memory), a software distribution medium, and so forth.
While the invention has been particularly shown and described with reference to a preferred embodiment, it will be understood by those skilled in the art that various changes in form and details may be made therein without departing from the spirit and scope of the invention as defined by the appended claims.

Claims (8)

1. A method for analyzing blind spot areas of a camera, characterized by comprising the following steps:
s101: collecting images shot by each camera in the blind spot analysis area, and determining the effective detection area of each camera;
s102: extracting each target in each image;
s103: after each target is associated with a corresponding shooting position and shooting time, position information of each target on a time sequence is obtained;
s104: track analysis is carried out on the position information of each target on the time sequence by combining the road information, so that a moving path of each target on the time sequence is obtained;
s105: after each moving path is segmented, judging whether the segmented sub-paths are in an effective detection area, and setting a road area corresponding to the sub-paths which are not in the effective detection area as a potential blind spot area.
2. The camera blind spot area analysis method according to claim 1, further comprising: generating a blind spot distribution recommendation result on a map in a visual manner according to the potential blind spot areas.
3. The camera blind spot area analysis method according to claim 1, further comprising: calculating the weight of each potential blind spot area by combining the magnitudes of the weight judgment indices corresponding to that area with their weight factors, and evaluating the likelihood that a blind spot exists in each potential blind spot area according to its weight.
4. The camera blind spot area analysis method according to claim 3, wherein the weight judgment indices include one or more of the number of historical illegal behaviors, the number of WiFi hotspots, and the number of vehicle violation snapshots.
5. The camera blind spot area analysis method according to claim 3, further comprising: generating a blind spot distribution recommendation result on the map in a visual manner according to the evaluated likelihood that a blind spot exists in each potential blind spot area.
6. A method for analyzing blind spot areas of a camera, characterized by comprising the following steps:
s201: collecting images shot by each camera in the blind spot analysis area;
s202: extracting each target in each image;
s203: after each target is associated with a corresponding shooting position and shooting time, position information of each target on a time sequence is obtained;
s204: the following potential blind spot area judgment is performed in combination with a public area or a residential area in the map:
if the target appears in the residential area at the next moment after the public area, judging the residential exit area as a potential blind spot area;
if the target appears in the public area at the next moment after the residential area and does not appear in the residential area, judging the residential entrance area as a potential blind spot area;
if the target appears in the public area for a single time, judging that the public area outlet or inlet area is a potential blind spot area;
if the target is present in the residential area a single time, the residential exit or entrance area is determined to be a potential blind spot area.
7. A camera blind spot area analysis terminal device, characterized by comprising a processor, a memory, and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of the method according to any one of claims 1 to 6 when executing the computer program.
8. A computer-readable storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 6.
CN202311003758.6A 2023-08-10 2023-08-10 Camera blind spot area analysis method, terminal equipment and storage medium Pending CN117333805A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311003758.6A CN117333805A (en) 2023-08-10 2023-08-10 Camera blind spot area analysis method, terminal equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311003758.6A CN117333805A (en) 2023-08-10 2023-08-10 Camera blind spot area analysis method, terminal equipment and storage medium

Publications (1)

Publication Number Publication Date
CN117333805A 2024-01-02

Family

ID=89292172

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311003758.6A Pending CN117333805A (en) 2023-08-10 2023-08-10 Camera blind spot area analysis method, terminal equipment and storage medium

Country Status (1)

Country Link
CN (1) CN117333805A (en)


Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination