CN116582653B - Intelligent video monitoring method and system based on multi-camera data fusion - Google Patents


Info

Publication number
CN116582653B
Authority
CN
China
Prior art keywords
target
information
camera
video
virtual map
Prior art date
Legal status
Active
Application number
CN202310861309.9A
Other languages
Chinese (zh)
Other versions
CN116582653A (en)
Inventor
马学沛
Current Assignee
Guangdong Tianyima Information Industry Co., Ltd.
Original Assignee
Guangdong Tianyima Information Industry Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Guangdong Tianyima Information Industry Co., Ltd.
Priority to CN202310861309.9A
Publication of CN116582653A
Application granted
Publication of CN116582653B


Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00 Television systems
    • H04N7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05 Geographic models
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/50 Context or environment of the image
    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/222 Studio circuitry; Studio devices; Studio equipment
    • H04N5/262 Studio circuits, e.g. for mixing, switching-over, change of character of image, other special effects; Cameras specially adapted for the electronic generation of special effects
    • H04N5/265 Mixing

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Geometry (AREA)
  • Software Systems (AREA)
  • Remote Sensing (AREA)
  • Computer Graphics (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application provides an intelligent video monitoring method and system based on multi-camera data fusion, comprising the following steps: acquiring target area information; determining the position information of a plurality of cameras in the target area based on the target area information; extracting the shooting area of each camera; establishing a virtual map based on the cameras' position information and fusing the shooting areas with the virtual map; and establishing a video window in the shooting area of each camera to play the video monitoring information of the corresponding camera. By building the virtual map from the cameras' position information, fusing the shooting areas into it, and placing a video window in each camera's shooting area to play the corresponding camera's video monitoring information, the application lets users intuitively see the monitoring area and the monitored content of each video.

Description

Intelligent video monitoring method and system based on multi-camera data fusion
Technical Field
The application relates to an intelligent video monitoring method and system, in particular to an intelligent video monitoring method and system based on multi-camera data fusion.
Background
With advances in camera technology, cameras have become widespread, and camera-based surveillance has become one of the important means by which people protect their own safety, rights, and interests. However, places such as large shopping malls contain many cameras with different distributions and different shooting areas, so a user monitoring the information from multiple cameras cannot relate it to the actual area, which makes the cameras' content difficult to understand.
Disclosure of Invention
The application provides an intelligent video monitoring method and system based on multi-camera data fusion, which make the footage shot by multiple cameras easier to understand.
The application provides an intelligent video monitoring method based on multi-camera data fusion, comprising the following steps:
Step S1, acquiring target area information and establishing the origin coordinates O(x0, y0, z0) of the target area;
Step S2, determining, based on the target area information, the coordinates (xi, yi, zi) of each camera Ci in the target area, where i is the camera serial number;
Step S3, extracting the shooting areas of all cameras;
Step S4, calling the position information of the cameras and the background images they have shot, establishing a virtual map, and fusing the shot background images with the virtual map to construct a three-dimensional model map;
Step S5, establishing a video window for the shooting area of each camera, for playing the video monitoring information of the corresponding camera.
Further, step S5 further includes:
Step S51, selecting at least one target Mj from the video of at least one camera Ci;
Step S52, extracting the relative coordinates (Δxj, Δyj, Δzj) of the target Mj with respect to the camera Ci in whose shooting area it appears, fusing them into the virtual map, and determining by a vector algorithm the geodetic coordinates (xj, yj, zj) of the target Mj with respect to the origin O of the area, namely:
(xj, yj, zj) = (xi + Δxj, yi + Δyj, zi + Δzj);
Step S53, based on the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, mobilizing the selected cameras Ck to carry out tracking shooting of the target Mj; including:
Step S531, calling the coordinates (xk, yk, zk) of each camera Ck and calculating, from the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, the focal coordinates Fk of the camera Ck, namely:
Fk = (xj - xk, yj - yk, zj - zk);
Step S532, carrying out focusing and tracking shooting with the camera Ck according to the focal coordinates Fk;
Step S533, calling the background image shot by the camera Ck and using it to filter the background out of the tracked images;
Step S534, according to the user's settings, displaying the images of the target Mj shot by the cameras Ck on a multi-window screen, or on the virtual map after stereoscopic modeling.
Still further, the method further comprises:
collecting image information of a target in a video;
extracting a characteristic value based on image information of the target;
and matching the characteristic values against the video monitoring information shot by the plurality of cameras in the target area, extracting one or more to-be-determined targets that match the characteristic values, and displaying them.
Still further, the method further comprises:
acquiring a target to be determined selected by a user;
and displaying video monitoring information of the target to be determined and corresponding camera position information.
Still further, the method further comprises:
responding to a user's identification instruction, and converting the to-be-determined target corresponding to the instruction into a target;
extracting the relative position and the corresponding time of the target in each piece of video monitoring information;
and fusing the relative position and the corresponding time with the virtual map to form a video information data set of the target.
Still further, the method further comprises:
and fusing all videos in the video information data set according to time to form a target video stream.
Further, if the times corresponding to the videos in the video information data set are discontinuous, the time information between two temporally discontinuous videos is recorded and marked as a missing time period;
extracting the corresponding times and positions of the target at the start and stop moments of the missing time period;
fusing these corresponding times and positions with the virtual map;
and inferring, based on the virtual map, the target's movement track during the missing time period.
The application also discloses an intelligent video monitoring system based on multi-camera data fusion, which comprises:
the information acquisition module is used for acquiring target area information;
the information extraction module is used for determining the position information of the cameras in the target area based on the target area information; extracting shooting areas of all cameras;
the map module is used for establishing a virtual map based on the position information of the cameras and fusing the shooting areas with the virtual map; a video window is established in the shooting area of each camera and used for playing the video monitoring information of the corresponding camera; this includes:
selecting at least one target Mj from the video of at least one camera Ci;
extracting the relative coordinates (Δxj, Δyj, Δzj) of the target Mj with respect to the camera Ci in whose shooting area it appears, fusing them into the virtual map, and determining by a vector algorithm the geodetic coordinates (xj, yj, zj) of the target Mj with respect to the origin O of the area, namely:
(xj, yj, zj) = (xi + Δxj, yi + Δyj, zi + Δzj);
based on the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, mobilizing the selected cameras Ck to carry out tracking shooting of the target Mj, which includes:
calling the coordinates (xk, yk, zk) of each camera Ck and calculating, from the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, the focal coordinates Fk of the camera Ck, namely:
Fk = (xj - xk, yj - yk, zj - zk);
carrying out focusing and tracking shooting with the camera Ck according to the focal coordinates Fk;
calling the background image shot by the camera Ck and using it to filter the background out of the tracked images;
and, according to the user's settings, displaying the images of the target Mj shot by the cameras Ck on a multi-window screen or on the virtual map after stereoscopic modeling.
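The background filtering step (S533) against the camera's pre-captured background image can be sketched with plain frame differencing; the OpenCV pipeline below is an assumed realization, since the patent does not fix a particular filtering technique.

```python
import cv2

def filter_background(frame_bgr, background_bgr, thresh=30):
    """Keep only the pixels of the tracked frame that differ from the
    camera's stored background image; everything else is zeroed out.
    Assumes both images have the same size and type."""
    diff = cv2.absdiff(frame_bgr, background_bgr)            # per-pixel difference
    gray = cv2.cvtColor(diff, cv2.COLOR_BGR2GRAY)
    _, mask = cv2.threshold(gray, thresh, 255, cv2.THRESH_BINARY)
    return cv2.bitwise_and(frame_bgr, frame_bgr, mask=mask)  # masked foreground
```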
Further, the system further comprises: a plurality of camera units.
Further, the system also comprises a database for storing video monitoring information.
Compared with the prior art, the application establishes a virtual map based on the cameras' position information, fuses the shooting areas with the virtual map, and establishes a video window in each camera's shooting area for playing the video monitoring information of the corresponding camera, so that users can intuitively see the monitoring area and the monitored content of each video.
Drawings
FIG. 1 is a flow chart of an embodiment of the present application.
Detailed Description
In order that those skilled in the art will better understand the present application, a technical solution of the embodiments of the present application will be clearly and completely described below, and it is apparent that the described embodiments are only some embodiments of the present application, not all embodiments.
The embodiment of the application discloses an intelligent video monitoring method based on multi-camera data fusion which, as shown in FIG. 1, comprises the following steps:
acquiring target area information;
the target area information is extracted from a target area delimited by the user on a geographic map, and specifically comprises the geographic information of the target area;
determining position information of a plurality of cameras in a target area based on the target area information;
based on the geographic information of the target area, the position information of all cameras located within that geographic area is extracted;
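For illustration, such extraction can be a point-in-polygon test over the registered camera positions; the ray-casting sketch below assumes the target area arrives as a 2-D polygon, which is an assumption rather than the patent's stated representation.

```python
def point_in_polygon(pt, polygon):
    """Ray-casting test: a point is inside if a horizontal ray from it
    crosses the polygon's edges an odd number of times."""
    x, y = pt
    inside = False
    for i in range(len(polygon)):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % len(polygon)]
        if (y1 > y) != (y2 > y):  # edge straddles the horizontal ray
            if x < x1 + (y - y1) * (x2 - x1) / (y2 - y1):
                inside = not inside
    return inside

def cameras_in_area(cameras, area_polygon):
    """Return the cameras whose (x, y) position lies in the target area."""
    return {cid: pos for cid, pos in cameras.items()
            if point_in_polygon(pos[:2], area_polygon)}
```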
extracting shooting areas of all cameras;
the shooting area of each camera is established from the camera's position information combined with its orientation;
establishing a virtual map based on the position information of the camera, and fusing a shooting area with the virtual map;
fusing the shooting areas with the virtual map yields a map that displays each camera's specific shooting area, so the user can readily see each camera's shooting angle and its blind spots;
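One simple way to draw a shooting area on the virtual map, sketched below, is to approximate it as a sector built from the camera's position, heading, field of view, and effective range; the sector model is an assumption, as the patent does not prescribe the geometry.

```python
import math

def shooting_sector(pos_xy, heading_deg, fov_deg, range_m, steps=8):
    """Approximate a camera's shooting area as a flat sector polygon,
    anchored at the camera position, that can be drawn on the map."""
    cx, cy = pos_xy
    start = math.radians(heading_deg - fov_deg / 2.0)
    end = math.radians(heading_deg + fov_deg / 2.0)
    pts = [(cx, cy)]  # sector apex = camera position
    for k in range(steps + 1):
        a = start + (end - start) * k / steps
        pts.append((cx + range_m * math.cos(a), cy + range_m * math.sin(a)))
    return pts
```

Plotting these sector polygons over the map also makes the blind spots visible as the uncovered regions between sectors.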
and establishing video windows in the shooting areas of the cameras, and playing video monitoring information of the corresponding cameras.
The video window is built in each shooting area of the virtual map and displays the video monitoring information, so the user can conveniently see the monitored content.
According to the embodiment of the application, a virtual map is built based on the cameras' position information, the shooting areas are fused with the virtual map, and a video window is built in each camera's shooting area to play the video monitoring information of the corresponding camera, so that users can intuitively see the monitoring area and the monitored content of each video.
Optionally, the method further comprises:
selecting at least one target in the video of at least one camera;
extracting the relative position of a target in a video, and fusing the relative position in a virtual map;
and extracting, based on the target's relative position in the virtual map, the video monitoring information of each camera that shoots the target, and displaying that video monitoring information.
Through this process, the embodiment of the application integrates target tracking and can effectively follow the target's position changes across the cameras.
In particular, it further comprises:
collecting image information of a target in a video;
extracting a characteristic value based on image information of the target;
and matching the characteristic values against the video monitoring information shot by the plurality of cameras in the target area, extracting one or more to-be-determined targets that match the characteristic values, and displaying them.
Through this process, the embodiment of the application can perform feature recognition on the target's image information, collect the target's appearances across different videos based on similar features, and thereby provide a target indexing function.
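As a hedged illustration of the matching step, the sketch below uses a normalized color histogram as the characteristic value and cosine similarity as the match criterion; both the feature and the 0.8 threshold are assumptions, since the patent does not name a specific feature.

```python
import cv2
import numpy as np

def feature_value(target_crop_bgr, bins=16):
    """A simple appearance feature: an L2-normalized 3-D color histogram."""
    hist = cv2.calcHist([target_crop_bgr], [0, 1, 2], None,
                        [bins, bins, bins], [0, 256, 0, 256, 0, 256]).flatten()
    return hist / (np.linalg.norm(hist) + 1e-9)

def match_candidates(query_feature, camera_crops, threshold=0.8):
    """Return (camera_id, crop) pairs similar enough to the query feature
    to count as to-be-determined targets."""
    return [(cam_id, crop) for cam_id, crop in camera_crops
            if float(query_feature @ feature_value(crop)) >= threshold]
```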
In particular, it further comprises:
acquiring a target to be determined selected by a user;
and displaying video monitoring information of the target to be determined and corresponding camera position information.
Through this process, the embodiment of the application displays the position information of the cameras that shot the to-be-determined target together with the corresponding video monitoring information, making it convenient for staff to retrieve the relevant content.
In particular, it further comprises:
responding to a user's identification instruction, and converting the to-be-determined target corresponding to the instruction into a target;
extracting the relative position and the corresponding time of the target in each piece of video monitoring information;
and fusing the relative position and the corresponding time with the virtual map to form a video information data set of the target.
According to the embodiment of the application, converting the to-be-determined target into a target enables the collection of the video information related to the target and the establishment of a video information data set, thereby realizing the extraction of the target's related video information.
In particular, it further comprises:
and fusing all videos in the video information data set according to time to form a target video stream.
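A minimal sketch of the time-based fusion, assuming each entry in the data set carries a start time, an end time, and a clip handle; the gap detection here feeds directly into the missing-time-period handling described next.

```python
def fuse_by_time(dataset, max_gap_s=0.0):
    """Order the target's clips by start time into one video stream and
    report temporal discontinuities as missing time periods."""
    clips = sorted(dataset, key=lambda c: c["start"])
    stream, missing = [], []
    for clip in clips:
        if stream and clip["start"] - stream[-1]["end"] > max_gap_s:
            missing.append((stream[-1]["end"], clip["start"]))
        stream.append(clip)
    return stream, missing
```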
Particularly, if the times corresponding to the videos in the video information data set are discontinuous, the time information between two temporally discontinuous videos is recorded and marked as a missing time period;
the corresponding times and positions of the target at the start and stop moments of the missing time period are extracted;
these corresponding times and positions are fused with the virtual map;
and, based on the virtual map, the target's movement track during the missing time period is inferred.
Based on the above process, the embodiment of the application can reasonably infer the target's movement outside the shooting areas, making it convenient for the user to review and track the target's position.
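The inference over a missing time period can be as simple as linear interpolation on the virtual map between the target's exit and re-entry observations; the baseline below is an assumption, since the patent leaves the estimation method open.

```python
import numpy as np

def infer_missing_track(t_exit, pos_exit, t_reenter, pos_reenter, step_s=1.0):
    """Estimate the target's map positions during a missing time period by
    linearly interpolating between the start and stop observations.
    Assumes t_reenter > t_exit."""
    p0, p1 = np.asarray(pos_exit, float), np.asarray(pos_reenter, float)
    times = np.arange(t_exit, t_reenter + 1e-9, step_s)
    return [(float(t), tuple(p0 + (t - t_exit) / (t_reenter - t_exit) * (p1 - p0)))
            for t in times]
```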
On the other hand, the embodiment of the application also discloses an intelligent video monitoring system based on multi-camera data fusion, which comprises the following components:
the information acquisition module is used for acquiring target area information;
the information extraction module is used for determining the position information of the cameras in the target area based on the target area information; extracting shooting areas of all cameras;
the map module is used for establishing a virtual map based on the position information of the cameras and fusing the shooting areas with the virtual map; a video window is established in the shooting area of each camera and used for playing the video monitoring information of the corresponding camera; this includes:
selecting at least one target Mj from the video of at least one camera Ci;
extracting the relative coordinates (Δxj, Δyj, Δzj) of the target Mj with respect to the camera Ci in whose shooting area it appears, fusing them into the virtual map, and determining by a vector algorithm the geodetic coordinates (xj, yj, zj) of the target Mj with respect to the origin O of the area, namely:
(xj, yj, zj) = (xi + Δxj, yi + Δyj, zi + Δzj);
based on the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, mobilizing the selected cameras Ck to carry out tracking shooting of the target Mj, which includes:
calling the coordinates (xk, yk, zk) of each camera Ck and calculating, from the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, the focal coordinates Fk of the camera Ck, namely:
Fk = (xj - xk, yj - yk, zj - zk);
carrying out focusing and tracking shooting with the camera Ck according to the focal coordinates Fk;
calling the background image shot by the camera Ck and using it to filter the background out of the tracked images;
and, according to the user's settings, displaying the images of the target Mj shot by the cameras Ck on a multi-window screen or on the virtual map after stereoscopic modeling.
The information acquisition module is arranged on the computer equipment.
Optionally, the system further comprises: a plurality of camera units.
The camera unit is in communication connection with the information acquisition module.
Optionally, the system further comprises a database for storing video monitoring information.
The information extraction module, the map module and the database are arranged on the server, and the computer equipment is connected with the server by adopting wireless or wired communication.
The computer device of the embodiment of the application comprises: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the method embodiments described above when it executes the computer program.
The computer device can be a smart phone, a tablet computer, a desktop computer, a cloud server and other computing devices. The computer device may include, but is not limited to, a processor, a memory. It will be appreciated by those skilled in the art that the foregoing is merely an example of a computer device and is not intended to be limiting, and that more or fewer components than shown may be included, or certain components may be combined, or different components may be included, for example, input-output devices, network access devices, etc.
The processor may be a central processing unit (CPU), or another general-purpose processor, a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, or the like. A general-purpose processor may be a microprocessor, or any conventional processor.
The memory may, in some embodiments, be an internal storage unit of the computer device, such as the computer device's hard disk or memory. In other embodiments, the memory may be an external storage device of the computer device, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) card, or a flash card (Flash Card) provided on the computer device. Further, the memory may include both an internal storage unit and an external storage device of the computer device. The memory is used to store the operating system, application programs, the boot loader (BootLoader), data, and other programs, such as the program code of the computer program; it may also be used to temporarily store data that has been output or is to be output.
In addition, the embodiment of the present application further provides a computer readable storage medium, where a computer program is stored, where the computer program is executed by a processor to implement the steps in any of the above-mentioned method embodiments.
Embodiments of the present application provide a computer program product which, when run on a computer device, causes the computer device to perform the steps of the method embodiments described above.
In several embodiments provided by the present application, it will be understood that each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
The functions, if implemented in the form of software functional modules and sold or used as a stand-alone product, may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence or in the part contributing to the prior art, may be embodied in the form of a software product stored in a storage medium, comprising several instructions for causing a computer device to perform all or part of the steps of the methods according to the embodiments of the present application. The aforementioned storage medium includes: a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or various other media capable of storing program code.
In this specification, each embodiment is described in a progressive manner, and each embodiment is mainly described in a different point from other embodiments, so that the same or similar parts between the embodiments are referred to each other. For the device disclosed in the embodiment, since it corresponds to the method disclosed in the embodiment, the description is relatively simple, and the relevant points refer to the description of the method section.
Those of skill would further appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware, computer software, or combinations of both, and that the various illustrative elements and steps are described above generally in terms of functionality in order to clearly illustrate the interchangeability of hardware and software. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the solution. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
The steps of a method or algorithm described in connection with the embodiments disclosed herein may be embodied directly in hardware, in a software module executed by a processor, or in a combination of the two. The software modules may be disposed in random access memory (RAM), memory, read-only memory (ROM), electrically programmable ROM, electrically erasable programmable ROM, registers, a hard disk, a removable disk, a CD-ROM, or any other form of storage medium known in the art.
The intelligent video monitoring method and system based on multi-camera data fusion provided by the application have been described in detail above. The principles and embodiments of the present application have been explained with reference to specific examples, the description of which is intended only to facilitate an understanding of the method of the present application and its core ideas. It should be noted that various modifications and adaptations of the application will be apparent to those skilled in the art and can be made without departing from the principles of the application; these modifications and adaptations are intended to be within the scope of the application as defined in the following claims.
Finally, it should be noted that the above-mentioned embodiments are only for illustrating the technical solution of the present application and not for limiting the same, and although the present application has been described in detail with reference to the above-mentioned embodiments, it should be understood by those skilled in the art that modifications and equivalents may be made to the specific embodiments of the present application after reading the present specification, and these modifications and variations do not depart from the scope of the application as claimed in the pending claims.

Claims (7)

1. An intelligent video monitoring method based on multi-camera data fusion is characterized by comprising the following steps:
Step S1, acquiring target area information and establishing the origin coordinates O(x0, y0, z0) of the target area;
Step S2, determining, based on the target area information, the coordinates (xi, yi, zi) of each camera Ci in the target area, where i is the camera serial number;
Step S3, extracting the shooting areas of all cameras;
Step S4, calling the position information of the cameras and the background images they have shot, establishing a virtual map, and fusing the shot background images with the virtual map to construct a three-dimensional model map;
Step S5, establishing a video window for the shooting area of each camera, the video windows being used for playing the video monitoring information of the corresponding cameras; including:
Step S51, selecting at least one target Mj from the video of at least one camera Ci;
Step S52, extracting the relative coordinates (Δxj, Δyj, Δzj) of the target Mj with respect to the camera Ci in whose shooting area it appears, fusing them into the virtual map, and determining by a vector algorithm the geodetic coordinates (xj, yj, zj) of the target Mj with respect to the origin O of the area, namely:
(xj, yj, zj) = (xi + Δxj, yi + Δyj, zi + Δzj);
Step S53, based on the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, mobilizing the selected cameras Ck to carry out tracking shooting of the target Mj; including:
Step S531, calling the coordinates (xk, yk, zk) of each camera Ck and calculating, from the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, the focal coordinates Fk of the camera Ck, namely:
Fk = (xj - xk, yj - yk, zj - zk);
Step S532, carrying out focusing and tracking shooting with the camera Ck according to the focal coordinates Fk;
Step S533, calling the background image shot by the camera Ck and using it to filter the background out of the tracked images;
Step S534, according to the user's settings, displaying the images of the target Mj shot by the cameras Ck on a multi-window screen or on the virtual map after three-dimensional modeling;
extracting the relative position and the corresponding time of the target in each piece of video monitoring information; fusing the target with the virtual map based on the relative positions and corresponding times to form a video information data set of the target; fusing all videos in the video information data set according to their corresponding times to form a video stream of the target; and, if the times corresponding to the videos in the video information data set are discontinuous, recording the time information between two temporally discontinuous videos and marking it as a missing time period;
extracting the relative positions and the corresponding times of the target at the start and stop moments of the missing time period;
fusing the target with the virtual map based on these relative positions and corresponding times;
and inferring, based on the virtual map, the target's movement track during the missing time period.
2. The intelligent video monitoring method based on multi-camera data fusion according to claim 1, further comprising:
collecting image information of a target in a video;
extracting a characteristic value based on image information of the target;
and matching the characteristic values against the video monitoring information shot by the plurality of cameras in the target area, extracting one or more to-be-determined targets that match the characteristic values, and displaying them.
3. The intelligent video monitoring method based on multi-camera data fusion according to claim 2, further comprising:
acquiring a target to be determined selected by a user;
and displaying video monitoring information of the target to be determined and corresponding camera position information.
4. The intelligent video monitoring method based on multi-camera data fusion according to claim 3, further comprising: responding to a user's identification instruction and converting the to-be-determined target corresponding to the instruction into a target.
5. An intelligent video monitoring system based on multi-camera data fusion, for implementing the intelligent video monitoring method based on multi-camera data fusion as set forth in any one of claims 1 to 4, comprising: the information acquisition module is used for acquiring target area information;
the information extraction module is used for determining the position information of the cameras in the target area based on the target area information; extracting shooting areas of all cameras;
the map module is used for establishing a virtual map based on the position information of the cameras and fusing the shooting areas with the virtual map; a video window is established in the shooting area of each camera and used for playing the video monitoring information of the corresponding camera; this includes:
selecting at least one target Mj from the video of at least one camera Ci;
extracting the relative coordinates (Δxj, Δyj, Δzj) of the target Mj with respect to the camera Ci in whose shooting area it appears, fusing them into the virtual map, and determining by a vector algorithm the geodetic coordinates (xj, yj, zj) of the target Mj with respect to the origin O of the area, namely:
(xj, yj, zj) = (xi + Δxj, yi + Δyj, zi + Δzj);
based on the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, mobilizing the selected cameras Ck to carry out tracking shooting of the target Mj, including:
calling the coordinates (xk, yk, zk) of each camera Ck and calculating, from the geodetic coordinates (xj, yj, zj) of the target Mj in the virtual map, the focal coordinates Fk of the camera Ck, namely:
Fk = (xj - xk, yj - yk, zj - zk);
carrying out focusing and tracking shooting with the camera Ck according to the focal coordinates Fk;
calling the background image shot by the camera Ck and using it to filter the background out of the tracked images;
according to the user's settings, displaying the images of the target Mj shot by the cameras Ck on a multi-window screen or on the virtual map after three-dimensional modeling;
extracting the relative position and the corresponding time of the target in each piece of video monitoring information; fusing the target with the virtual map based on the relative positions and corresponding times to form a video information data set of the target; fusing all videos in the video information data set according to their corresponding times to form a video stream of the target; and, if the times corresponding to the videos in the video information data set are discontinuous, recording the time information between two temporally discontinuous videos and marking it as a missing time period;
extracting the relative positions and the corresponding times of the target at the start and stop moments of the missing time period;
fusing the target with the virtual map based on these relative positions and corresponding times;
and inferring, based on the virtual map, the target's movement track during the missing time period.
6. The intelligent video monitoring system based on multi-camera data fusion of claim 5, further comprising: a plurality of camera units.
7. The intelligent video monitoring system based on multi-camera data fusion of claim 6, further comprising a database for storing video monitoring information.
CN202310861309.9A 2023-07-14 2023-07-14 Intelligent video monitoring method and system based on multi-camera data fusion Active CN116582653B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310861309.9A CN116582653B (en) 2023-07-14 2023-07-14 Intelligent video monitoring method and system based on multi-camera data fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310861309.9A CN116582653B (en) 2023-07-14 2023-07-14 Intelligent video monitoring method and system based on multi-camera data fusion

Publications (2)

Publication Number Publication Date
CN116582653A (en) 2023-08-11
CN116582653B (en) 2023-10-27

Family

ID=87543572

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310861309.9A Active CN116582653B (en) 2023-07-14 2023-07-14 Intelligent video monitoring method and system based on multi-camera data fusion

Country Status (1)

Country Link
CN (1) CN116582653B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116866534B (en) * 2023-09-05 2023-11-28 南京隆精微电子技术有限公司 Processing method and device for digital video monitoring system
CN117596367A (en) * 2024-01-19 2024-02-23 安徽协创物联网技术有限公司 Low-power-consumption video monitoring camera and control method thereof

Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN102595103A (en) * 2012-03-07 2012-07-18 深圳市信义科技有限公司 Method based on geographic information system (GIS) map deduction intelligent video
CN103248867A (en) * 2012-08-20 2013-08-14 苏州大学 Surveillance method of intelligent video surveillance system based on multi-camera data fusion
CN105245850A (en) * 2015-10-27 2016-01-13 太原市公安局 Method, device and system for tracking target across surveillance cameras
CN110072087A (en) * 2019-05-07 2019-07-30 高新兴科技集团股份有限公司 Video camera interlock method, device, equipment and storage medium based on 3D map
CN110278413A (en) * 2019-06-28 2019-09-24 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110930507A (en) * 2019-10-24 2020-03-27 北京智汇云舟科技有限公司 Large-scene cross-border target tracking method and system based on three-dimensional geographic information
CN112383746A (en) * 2020-10-29 2021-02-19 北京软通智慧城市科技有限公司 Video monitoring method and device in three-dimensional scene, electronic equipment and storage medium
CN114821430A (en) * 2022-05-05 2022-07-29 西安未来国际信息股份有限公司 Cross-camera target object tracking method, device, equipment and storage medium


Also Published As

Publication number Publication date
CN116582653A (en) 2023-08-11

Similar Documents

Publication Publication Date Title
CN116582653B (en) Intelligent video monitoring method and system based on multi-camera data fusion
CN108615248B (en) Method, device and equipment for relocating camera attitude tracking process and storage medium
CN106650662B (en) Target object shielding detection method and device
CN108256404B (en) Pedestrian detection method and device
CN106296570B (en) Image processing method and device
CN111259846B (en) Text positioning method and system and text positioning model training method and system
KR101165415B1 (en) Method for recognizing human face and recognizing apparatus
WO2017054442A1 (en) Image information recognition processing method and device, and computer storage medium
WO2021136386A1 (en) Data processing method, terminal, and server
US9384400B2 (en) Method and apparatus for identifying salient events by analyzing salient video segments identified by sensor information
CN112215037B (en) Object tracking method and device, electronic equipment and computer readable storage medium
US20140126819A1 (en) Region of Interest Based Image Registration
CN110675426B (en) Human body tracking method, device, equipment and storage medium
CN111914775A (en) Living body detection method, living body detection device, electronic apparatus, and storage medium
CN113436338A (en) Three-dimensional reconstruction method and device for fire scene, server and readable storage medium
US10255512B2 (en) Method, system and apparatus for processing an image
CN114943773A (en) Camera calibration method, device, equipment and storage medium
CN108229281B (en) Neural network generation method, face detection device and electronic equipment
CN109981989B (en) Method and device for rendering image, electronic equipment and computer readable storage medium
CN107480580B (en) Image recognition method and image recognition device
CN113225451B (en) Image processing method and device and electronic equipment
CN116912517B (en) Method and device for detecting camera view field boundary
CN111753766A (en) Image processing method, device, equipment and medium
US20230048952A1 (en) Image registration method and electronic device
CN111194015A (en) Outdoor positioning method and device based on building and mobile equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant