CN112364950A - Event positioning method and system based on three-dimensional geographic information scene - Google Patents


Info

Publication number
CN112364950A
CN112364950A (Application number CN202011065487.3A)
Authority
CN
China
Prior art keywords
dimensional
dimensional geographic
video
event
coordinate
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011065487.3A
Other languages
Chinese (zh)
Inventor
刘丽娟
刘卫华
陈虹旭
周舟
李晓明
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Smart Yunzhou Technology Co ltd
Original Assignee
Beijing Smart Yunzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Smart Yunzhou Technology Co ltd filed Critical Beijing Smart Yunzhou Technology Co ltd
Priority to CN202011065487.3A priority Critical patent/CN112364950A/en
Publication of CN112364950A publication Critical patent/CN112364950A/en
Pending legal-status Critical Current

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06KGRAPHICAL DATA READING; PRESENTATION OF DATA; RECORD CARRIERS; HANDLING RECORD CARRIERS
    • G06K17/00Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations
    • G06K17/0022Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device
    • G06K17/0029Methods or arrangements for effecting co-operative working between equipments covered by two or more of main groups G06K1/00 - G06K15/00, e.g. automatic card files incorporating conveying and reading operations arrangements or provisions for transferring data to distant stations, e.g. from a sensing device the arrangement being specially adapted for wireless interrogation of grouped or bundled articles tagged with wireless record carriers
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/05Geographic models

Abstract

The embodiments of the application disclose an event positioning method and system based on a three-dimensional geographic information scene. The method comprises the following steps: a video intelligent analysis module analyses the pixel-picture characteristics of the monitoring footage and, if an abnormal event occurs in the current video segment, sends the abnormal event and its corresponding three-dimensional geographic position coordinate to an event three-dimensional visualization module; the event three-dimensional visualization module obtains the real-time dynamic coordinates of a mobile coordinate conversion module according to the three-dimensional geographic region corresponding to the abnormal event and determines the person corresponding to the three-dimensional geographic position; and it obtains the three-dimensional video fusion scene of a video splicing and fusion module according to the three-dimensional geographic position corresponding to the abnormal event. By fusing personnel-based mobile positioning data with the video intelligent analysis data, visual display of the three-dimensional video fusion scene is realized and emergencies in a specific place can be brought under control.

Description

Event positioning method and system based on three-dimensional geographic information scene
Technical Field
The embodiment of the application relates to the technical field of virtual reality, in particular to an event positioning method and system based on a three-dimensional geographic information scene.
Background
A public security supervision place refers to a facility administered by a public security organ, such as a detention house, custody house, compulsory isolation drug rehabilitation center, compulsory medical treatment center, custody and education center, or supervision hospital. These facilities are important law-enforcement departments of the public security organ: they supervise, educate, treat and correct supervised persons according to law, ensure that criminal litigation and administrative law-enforcement activities proceed smoothly, and safeguard the legitimate rights and interests of the supervised persons.
How to monitor persons of concern and quickly locate their whereabouts is an urgent problem to be solved.
Disclosure of Invention
Therefore, the embodiments of the application provide an event positioning method and system based on a three-dimensional geographic information scene. By fusing personnel-based mobile positioning data with video intelligent analysis data, visual display of a three-dimensional video fusion scene is realized and emergencies in a specific place can be brought under control.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
according to a first aspect of embodiments of the present application, there is provided an event positioning method based on a three-dimensional geographic information scenario, the method including:
the video intelligent analysis module analyzes the characteristics of a monitored pixel picture, and if an abnormal event occurs in a current video segment, the abnormal event and a three-dimensional geographic position coordinate corresponding to the abnormal event are sent to an event three-dimensional visualization module, wherein the three-dimensional geographic information corresponding to the abnormal event is obtained by matching and converting the related information of the abnormal event and the three-dimensional geographic coordinate;
the event three-dimensional visualization module obtains the real-time dynamic coordinates of the mobile coordinate conversion module according to the three-dimensional geographic region corresponding to the abnormal event and determines the person corresponding to the three-dimensional geographic position; and obtains the three-dimensional video fusion scene of the video splicing and fusion module according to the three-dimensional geographic position corresponding to the abnormal event.
Optionally, the real-time dynamic coordinates of the mobile coordinate conversion module are determined according to the following steps:
acquiring real-time positioning information based on UWB and/or RFID technology transmitted by a target electronic wristband system, wherein the positioning information comprises the relative coordinates of a real-time dynamic area;
and determining a real-time dynamic coordinate based on the three-dimensional geographic information position coordinate according to the real-time positioning information and the three-dimensional geographic coordinate.
Optionally, the three-dimensional video fusion scene of the video stitching fusion module is determined according to the following steps:
collecting video pictures of a plurality of bullet cameras and performing correction and splicing processing;
and matching and converting the spliced video picture and the three-dimensional geographic information coordinate to obtain a three-dimensional video fusion scene based on the three-dimensional geographic position coordinate.
Optionally, the related information of the abnormal event includes an event type and a pixel position.
According to a second aspect of the embodiments of the present application, there is provided an event positioning system based on a three-dimensional geographic information scene, the system including:
the video intelligent analysis module is used for analyzing the pixel picture characteristics in monitoring, and if an abnormal event occurs in the current video segment, the abnormal event and the three-dimensional geographic position coordinate corresponding to the abnormal event are sent to the event three-dimensional visualization module, and the three-dimensional geographic information corresponding to the abnormal event is obtained by matching and converting the related information of the abnormal event and the three-dimensional geographic coordinate;
the event three-dimensional visualization module is used for obtaining the real-time dynamic coordinates of the mobile coordinate conversion module according to the three-dimensional geographic position corresponding to the abnormal event and determining the person corresponding to the three-dimensional geographic region; and for obtaining the three-dimensional video fusion scene of the video splicing and fusion module according to the three-dimensional geographic position corresponding to the abnormal event.
Optionally, the mobile coordinate conversion module is specifically configured to:
acquiring real-time positioning information based on UWB and/or RFID technology transmitted by a target electronic wristband system, wherein the positioning information comprises the relative coordinates of a real-time dynamic area;
and determining a real-time dynamic coordinate based on the three-dimensional geographic information position coordinate according to the real-time positioning information and the three-dimensional geographic coordinate.
Optionally, the video stitching fusion module is specifically configured to:
collecting video pictures of a plurality of bullet cameras and performing correction and splicing processing;
and matching and converting the spliced video picture and the three-dimensional geographic information coordinate to obtain a three-dimensional video fusion scene based on the three-dimensional geographic position coordinate.
Optionally, the related information of the abnormal event includes an event type and a pixel position.
According to a third aspect of embodiments herein, there is provided an apparatus comprising: the device comprises a data acquisition device, a processor and a memory; the data acquisition device is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method of any of the first aspect.
According to a fourth aspect of embodiments herein, there is provided a computer-readable storage medium having one or more program instructions embodied therein for performing the method of any of the first aspects.
In summary, the embodiments of the present application provide an event positioning method and system based on a three-dimensional geographic information scene. The video intelligent analysis module analyses the pixel-picture characteristics of the monitoring footage and, if an abnormal event occurs in the current video segment, sends the abnormal event and its corresponding three-dimensional geographic position coordinate to the event three-dimensional visualization module, where the three-dimensional geographic information corresponding to the abnormal event is obtained by matching and converting the related information of the abnormal event with the three-dimensional geographic coordinates. The event three-dimensional visualization module obtains the real-time dynamic coordinates of the mobile coordinate conversion module according to the three-dimensional geographic region corresponding to the abnormal event and determines the person corresponding to the three-dimensional geographic position; and it obtains the three-dimensional video fusion scene of the video splicing and fusion module according to the three-dimensional geographic position corresponding to the abnormal event. By fusing personnel-based mobile positioning data with the video intelligent analysis data, visual display of the three-dimensional video fusion scene is realized and emergencies in a specific place can be brought under control.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly introduced below. It should be apparent that the drawings in the following description are merely exemplary, and that other drawings can be derived from them by those of ordinary skill in the art without inventive effort.
The structures, proportions and sizes shown in this specification are only intended to match the content disclosed in the specification, so that those skilled in the art can understand and read it; they do not limit the conditions under which the invention can be implemented and thus carry no technical significance in themselves. Any structural modification, change of proportional relationship or adjustment of size that does not affect the functions and purposes of the invention still falls within the scope of the invention.
Fig. 1 is a schematic flowchart of an event positioning method based on a three-dimensional geographic information scene according to an embodiment of the present application;
fig. 2 is a block diagram of an event positioning system based on a three-dimensional geographic information scene according to an embodiment of the present application.
Detailed Description
The present invention is described below in terms of particular embodiments, and other advantages and features of the invention will become apparent to those skilled in the art from the following disclosure. It should be understood that the described embodiments are merely some, not all, embodiments of the invention and are not intended to limit the invention to the particular embodiments disclosed. All other embodiments obtained by a person of ordinary skill in the art from the embodiments given herein without creative effort fall within the protection scope of the invention.
Aiming at the weak links in monitoring persons of concern during custody work, the embodiments adopt a three-dimensional video fusion technology in a targeted manner for three-dimensional scene visualization and three-dimensional management and control. This achieves visual display of violations and alarm events involving persons of concern, focused control of key personnel, and deployment of police force to follow key persons of concern, making the place safe and controllable.
The embodiments of the application provide an event positioning method based on a three-dimensional geographic information scene. On the basis of the three-dimensional geographic information scene, the mobile positioning data, video monitoring pictures and video intelligent analysis data of specially monitored persons in a specific place are aggregated into a unified space-time framework; the video monitoring pictures are organically organized by splicing and fusion to form a unified reconstruction of the real and virtual worlds; and the personnel-based mobile positioning data are fused with the video intelligent analysis data to realize visual display of the three-dimensional video fusion scene and comprehensive control of emergencies in the specific place. This improves the discovery and identification of abnormal behavior of persons of concern from a global perspective, improves handling capability, and ultimately provides decision and action support for the management of specially monitored persons in a specific place.
As shown in fig. 1, the method comprises the steps of:
step 101: the video intelligent analysis module analyzes the characteristics of the monitored pixel pictures, and if an abnormal event occurs in the current video segment, the abnormal event and the three-dimensional geographic position coordinate corresponding to the abnormal event are sent to the event three-dimensional visualization module, and the three-dimensional geographic information corresponding to the abnormal event is obtained by matching and converting the related information of the abnormal event and the three-dimensional geographic coordinate.
Step 102: the event three-dimensional visualization module acquires real-time dynamic coordinates of the mobile coordinate conversion module according to the three-dimensional geographic position region corresponding to the abnormal event, and determines a candidate corresponding to the three-dimensional geographic position; and acquiring a three-dimensional video fusion scene of the video splicing fusion module according to the three-dimensional geographic position corresponding to the abnormal event.
In one possible implementation, in step 102, the real-time dynamic coordinates of the mobile coordinate conversion module are determined according to the following steps:
acquiring real-time positioning information based on UWB and/or RFID technology transmitted by the target electronic wristband system, wherein the positioning information comprises the relative coordinates of the real-time dynamic area; and determining real-time dynamic coordinates based on the three-dimensional geographic information position coordinates according to the real-time positioning information and the three-dimensional geographic coordinates.
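As an illustration of this conversion step, the relative coordinates reported within a local UWB/RFID positioning zone can be mapped into the three-dimensional geographic frame by a planar rigid transform anchored at a surveyed origin. The function name, the single-anchor origin and the yaw-only rotation model below are assumptions for this sketch, not the patented implementation:

```python
import math

def relative_to_geo(rel_xyz, origin_geo, yaw_deg=0.0):
    """Convert a relative (x, y, z) coordinate from a local UWB/RFID
    positioning zone into the 3D geographic frame.

    origin_geo is the geographic coordinate of the zone's anchor point
    and yaw_deg the rotation of the local frame about the vertical
    axis; both would come from a one-time site survey (assumed here).
    """
    x, y, z = rel_xyz
    ox, oy, oz = origin_geo
    a = math.radians(yaw_deg)
    # Rotate the local horizontal coordinates, then translate by the anchor.
    gx = ox + x * math.cos(a) - y * math.sin(a)
    gy = oy + x * math.sin(a) + y * math.cos(a)
    gz = oz + z  # heights are assumed to share the same vertical datum
    return (gx, gy, gz)
```

A wristband reading of (1, 0, 0) in a zone anchored at (10, 20, 0) and rotated 90° would thus land at roughly (10, 21, 0) in the geographic frame.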
In one possible implementation, in step 102, the three-dimensional video fusion scene of the video splicing and fusion module is determined according to the following steps:
collecting video pictures of a plurality of bullet cameras and performing correction and splicing processing; and matching and converting the spliced video picture with the three-dimensional geographic information coordinates to obtain a three-dimensional video fusion scene based on the three-dimensional geographic position coordinates.
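The matching conversion between the spliced picture and the geographic coordinates can be sketched as a planar homography: control points in the stitched panorama are matched offline to surveyed ground coordinates, and pixels are then projected through the resulting 3×3 matrix. The function below is a minimal illustration under that assumption; the homography itself would come from the calibration step, not from this code:

```python
def apply_homography(H, px, py):
    """Map a pixel (px, py) of the stitched panorama to a ground-plane
    geographic coordinate using a 3x3 homography H (row-major nested
    lists), obtained offline by matching control points in the stitched
    picture to surveyed geographic coordinates (assumed available)."""
    # Homogeneous projection: [gx*w, gy*w, w] = H @ [px, py, 1]
    w = H[2][0] * px + H[2][1] * py + H[2][2]
    gx = (H[0][0] * px + H[0][1] * py + H[0][2]) / w
    gy = (H[1][0] * px + H[1][1] * py + H[1][2]) / w
    return (gx, gy)
```

With the identity matrix the mapping is a no-op, which makes the projection easy to sanity-check before substituting a calibrated matrix.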
In one possible embodiment, the information related to the abnormal event includes an event type and a pixel position.
Now, the space-time situation awareness system of three-dimensional geographic information applicable to the method provided by the embodiment of the present application is further described, and includes a mobile coordinate conversion module, a video stitching fusion module, a video intelligent analysis module, and an event three-dimensional visualization module.
The mobile coordinate conversion module converts the relative coordinates of the real-time dynamic area, generated by RFID/UWB tags matched to the wearer's information, into three-dimensional geographic coordinates, producing real-time dynamic coordinates based on the three-dimensional geographic information position coordinates.
The video splicing and fusion module integrally splices the video pictures of multiple bullet cameras, matches and converts the spliced video pictures with the three-dimensional geographic information coordinates, and fuses them into the three-dimensional scene, thereby converting video picture coordinates into three-dimensional geographic position coordinates and realizing a three-dimensional video fusion scene based on the three-dimensional geographic position coordinates.
The video intelligent analysis module interfaces with the video intelligent analysis system and matches and converts the analysed event information into three-dimensional geographic information coordinates. Analysis results from the video intelligent analysis system, such as the event type and pixel position, are calibrated to three-dimensional geographic position coordinates and unified into three-dimensional geographic information coordinates, realizing coordinate conversion of the video intelligent analysis information into the three-dimensional geographic information scene.
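This calibration step can be illustrated as attaching a geographic coordinate to each analysed event via a per-camera pixel-to-geographic mapping (for example, the homography described above). The field names and the callback interface below are assumptions made for the sketch:

```python
def calibrate_event(event, pixel_to_geo):
    """Attach a 3D geographic coordinate to an analysed event.

    `event` is assumed to carry the event type and pixel position
    reported by the video analysis system; `pixel_to_geo` is a
    per-camera calibration callable (e.g. a homography projection)
    returning a geographic coordinate for a pixel.
    """
    x, y = event["pixel_pos"]
    event["geo_pos"] = pixel_to_geo(x, y)  # unify into the 3D geographic frame
    return event
```

For instance, an event at pixel (100, 200) run through a toy 0.1 m-per-pixel calibration would be stamped with the geographic position (10.0, 20.0, 0.0).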
The event three-dimensional visualization module displays the three-dimensional geographic scene based on comprehensive information such as events and personnel, and within the unified three-dimensional video fusion scene realizes matching association between the mobile coordinates generated by RFID/UWB tags carrying wearer information and the video intelligent analysis information. The RFID/UWB positioning information matched to the wearer's information and the video intelligent analysis information are unified under the three-dimensional geographic information framework and displayed in the three-dimensional geographic video fusion scene.
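Once both data streams share the geographic frame, the matching association reduces to a spatial query: find the wristband wearers whose real-time coordinates fall inside the event's region. The function name, the spherical region model and the 5 m default radius below are illustrative assumptions:

```python
def match_person_to_event(event_geo, wearers, radius=5.0):
    """Associate an abnormal event with the wristband wearer(s) whose
    real-time geographic coordinates fall inside the event's region.

    `wearers` maps a person identifier to an (x, y, z) geographic
    coordinate; `radius` (metres, assumed) defines the event region.
    Returns identifiers sorted nearest-first.
    """
    ex, ey, ez = event_geo
    hits = []
    for pid, (x, y, z) in wearers.items():
        d = ((x - ex) ** 2 + (y - ey) ** 2 + (z - ez) ** 2) ** 0.5
        if d <= radius:
            hits.append((d, pid))
    return [pid for _, pid in sorted(hits)]
```

In a real deployment the region would more likely be the camera's calibrated footprint than a fixed sphere, but the nearest-first association logic is the same.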
On the basis of the three-dimensional geographic information scene, the mobile positioning data, video monitoring pictures and video intelligent analysis data of specially monitored persons in a specific place are aggregated into a unified space-time framework; the video monitoring pictures are organically organized by splicing and fusion to form a unified reconstruction of the real and virtual worlds; and the personnel-based mobile positioning data are fused with the video intelligent analysis data to realize visual display of the three-dimensional video fusion scene and comprehensive control of emergencies in the specific place. This improves the discovery and identification of abnormal behavior of persons of concern from a global perspective, improves handling capability, and ultimately provides decision and action support for the management of specially monitored persons in a specific place.
In summary, the embodiments of the present application provide an event positioning method based on a three-dimensional geographic information scene. The video intelligent analysis module analyses the pixel-picture characteristics of the monitoring footage and, if an abnormal event occurs in the current video segment, sends the abnormal event and its corresponding three-dimensional geographic position coordinate to the event three-dimensional visualization module, where the three-dimensional geographic information corresponding to the abnormal event is obtained by matching and converting the related information of the abnormal event with the three-dimensional geographic coordinates. The event three-dimensional visualization module obtains the real-time dynamic coordinates of the mobile coordinate conversion module according to the three-dimensional geographic region corresponding to the abnormal event and determines the person corresponding to the three-dimensional geographic position; and it obtains the three-dimensional video fusion scene of the video splicing and fusion module according to the three-dimensional geographic position corresponding to the abnormal event. By fusing personnel-based mobile positioning data with the video intelligent analysis data, visual display of the three-dimensional video fusion scene is realized and emergencies in a specific place can be brought under control.
Based on the same technical concept, an embodiment of the present application further provides an event positioning system based on a three-dimensional geographic information scene, as shown in fig. 2, the system includes:
the video intelligent analysis module 201 is configured to analyze pixel picture characteristics in monitoring, and if an abnormal event occurs in a current video segment, send the abnormal event and a three-dimensional geographic position coordinate corresponding to the abnormal event to an event three-dimensional visualization module, where three-dimensional geographic information corresponding to the abnormal event is obtained by performing matching conversion on related information of the abnormal event and the three-dimensional geographic coordinate.
The event three-dimensional visualization module 202 is configured to obtain the real-time dynamic coordinates of the mobile coordinate conversion module 203 according to the three-dimensional geographic position corresponding to the abnormal event and determine the person corresponding to the three-dimensional geographic region; and to obtain the three-dimensional video fusion scene of the video splicing and fusion module 204 according to the three-dimensional geographic position corresponding to the abnormal event.
In a possible implementation manner, the mobile coordinate conversion module 203 is specifically configured to: acquire real-time positioning information based on UWB and/or RFID technology transmitted by the target electronic wristband system, where the positioning information comprises the relative coordinates of the real-time dynamic area; and determine real-time dynamic coordinates based on the three-dimensional geographic information position coordinates according to the real-time positioning information and the three-dimensional geographic coordinates.
In a possible implementation manner, the video splicing and fusion module 204 is specifically configured to: collect video pictures of a plurality of bullet cameras and perform correction and splicing processing; and match and convert the spliced video picture with the three-dimensional geographic information coordinates to obtain a three-dimensional video fusion scene based on the three-dimensional geographic position coordinates.
In one possible embodiment, the information related to the abnormal event includes an event type and a pixel position.
Based on the same technical concept, an embodiment of the present application further provides an apparatus, including: the device comprises a data acquisition device, a processor and a memory; the data acquisition device is used for acquiring data; the memory is to store one or more program instructions; the processor is configured to execute one or more program instructions to perform the method.
Based on the same technical concept, the embodiment of the present application also provides a computer-readable storage medium, wherein the computer-readable storage medium contains one or more program instructions, and the one or more program instructions are used for executing the method.
In the present specification, the method embodiments are described in a progressive manner; the same and similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. For relevant details of the system embodiments, reference is made to the description of the method embodiments.
It is noted that while the operations of the methods of the present invention are depicted in the drawings in a particular order, this is not a requirement or suggestion that the operations must be performed in this particular order or that all of the illustrated operations must be performed to achieve desirable results. Additionally or alternatively, certain steps may be omitted, multiple steps combined into one step execution, and/or one step broken down into multiple step executions.
Although the present application provides method steps as in embodiments or flowcharts, additional or fewer steps may be included based on conventional or non-inventive approaches. The order of steps recited in the embodiments is merely one manner of performing the steps in a multitude of orders and does not represent the only order of execution. When an apparatus or client product in practice executes, it may execute sequentially or in parallel (e.g., in a parallel processor or multithreaded processing environment, or even in a distributed data processing environment) according to the embodiments or methods shown in the figures. The terms "comprises," "comprising," or any other variation thereof, are intended to cover a non-exclusive inclusion, such that a process, method, article, or apparatus that comprises a list of elements does not include only those elements but may include other elements not expressly listed or inherent to such process, method, article, or apparatus. Without further limitation, the presence of additional identical or equivalent elements in a process, method, article, or apparatus that comprises the recited elements is not excluded.
The units, devices, modules, etc. set forth in the above embodiments may be implemented by a computer chip or an entity, or by a product with certain functions. For convenience of description, the above devices are described as being divided into various modules by functions, and are described separately. Of course, in implementing the present application, the functions of each module may be implemented in one or more software and/or hardware, or a module implementing the same function may be implemented by a combination of a plurality of sub-modules or sub-units, and the like. The above-described embodiments of the apparatus are merely illustrative, and for example, the division of the units is only one logical division, and other divisions may be realized in practice, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not executed. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
Those skilled in the art will also appreciate that, in addition to implementing the controller as pure computer readable program code, the same functionality can be implemented by logically programming method steps such that the controller is in the form of logic gates, switches, application specific integrated circuits, programmable logic controllers, embedded microcontrollers and the like. Such a controller may therefore be considered as a hardware component, and the means included therein for performing the various functions may also be considered as a structure within the hardware component. Or even means for performing the functions may be regarded as being both a software module for performing the method and a structure within a hardware component.
The application may be described in the general context of computer-executable instructions, such as program modules, being executed by a computer. Generally, program modules include routines, programs, objects, components, data structures, classes, etc. that perform particular tasks or implement particular abstract data types. The application may also be practiced in distributed computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed computing environment, program modules may be located in both local and remote computer storage media including memory storage devices.
From the above description of the embodiments, it is clear to those skilled in the art that the present application can be implemented by software plus a necessary general-purpose hardware platform. Based on such understanding, the technical solutions of the present application may be embodied in the form of a software product stored in a storage medium such as a ROM/RAM, a magnetic disk or an optical disk, including several instructions that enable a computer device (which may be a personal computer, a mobile terminal, a server or a network device) to execute the methods described in the embodiments, or in parts of the embodiments, of the present application.
The embodiments in the present specification are described in a progressive manner; identical or similar parts among the embodiments may be referred to one another, and each embodiment focuses on its differences from the others. The application is operational with numerous general-purpose or special-purpose computing system environments or configurations, for example: personal computers, server computers, hand-held or portable devices, tablet devices, multiprocessor systems, microprocessor-based systems, set-top boxes, programmable electronic devices, network PCs, minicomputers, mainframe computers, and distributed computing environments that include any of the above systems or devices.
The above embodiments further illustrate the present invention in detail. It should be understood that they are merely illustrative and are not intended to limit the scope of the invention; any modifications, equivalent substitutions, improvements and the like made within the spirit and principles of the present invention shall fall within its scope of protection.

Claims (10)

1. An event positioning method based on a three-dimensional geographic information scene is characterized by comprising the following steps:
the video intelligent analysis module analyzes pixel picture features in monitoring, and if an abnormal event occurs in the current video segment, sends the abnormal event and the three-dimensional geographic position coordinate corresponding to the abnormal event to an event three-dimensional visualization module, wherein the three-dimensional geographic position corresponding to the abnormal event is obtained by matching and converting the related information of the abnormal event with the three-dimensional geographic coordinates;
the event three-dimensional visualization module acquires the real-time dynamic coordinates of the mobile coordinate conversion module according to the three-dimensional geographic position region corresponding to the abnormal event, and determines the candidate person corresponding to the three-dimensional geographic position; and acquires the three-dimensional video fusion scene of the video splicing fusion module according to the three-dimensional geographic position corresponding to the abnormal event.
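The event flow recited in claim 1 can be sketched as follows. This is a minimal illustration only: the function and field names are hypothetical, since the claim specifies behavior rather than an API, and the pixel-to-scene mapping is passed in as an already-calibrated function.

```python
# Hypothetical sketch of the claim-1 flow: detect abnormal events in the
# video picture, attach the matching 3D geographic coordinate, and emit
# them for the event three-dimensional visualization module.

def analyze_frame(detections, pixel_to_geo):
    """For each detected abnormal event (event type + pixel location),
    match and convert its pixel location into a 3D scene coordinate."""
    events = []
    for det in detections:  # e.g. {"type": "intrusion", "pixel": (u, v)}
        u, v = det["pixel"]
        events.append({
            "type": det["type"],
            "pixel": (u, v),
            "geo": pixel_to_geo(u, v),  # calibrated pixel -> 3D-scene mapping
        })
    return events
```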
2. The method of claim 1, wherein the real-time dynamic coordinates of the mobile coordinate conversion module are determined according to the steps of:
acquiring real-time positioning information transmitted by a target electronic wristband system and based on UWB and/or RFID technology, wherein the positioning information comprises relative coordinates within a real-time dynamic area;
and determining real-time dynamic coordinates based on the three-dimensional geographic information position coordinates according to the real-time positioning information and the three-dimensional geographic coordinates.
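The relative-to-geographic conversion in claim 2 can be illustrated as a rigid transform from the wristband anchor frame into the three-dimensional scene frame. This is a hedged sketch: the function and parameter names are invented, and a real deployment would also calibrate scale and tilt rather than only yaw and translation.

```python
import math

def uwb_to_geographic(local_xyz, anchor_origin, yaw_deg):
    """Convert a UWB/RFID relative coordinate (metres, anchor frame) to a
    3D geographic-scene coordinate by rotating about the vertical axis and
    translating to the anchor's known position in the scene.
    (Hypothetical sketch; the patent does not specify the transform.)"""
    x, y, z = local_xyz
    ox, oy, oz = anchor_origin
    t = math.radians(yaw_deg)
    gx = ox + x * math.cos(t) - y * math.sin(t)
    gy = oy + x * math.sin(t) + y * math.cos(t)
    return (gx, gy, oz + z)
```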
3. The method of claim 1, wherein the three-dimensional video fusion scene of the video stitching fusion module is determined according to the following steps:
collecting video pictures from a plurality of bullet cameras, and performing distortion correction and stitching on them;
and matching and converting the stitched video picture with the three-dimensional geographic information coordinates to obtain a three-dimensional video fusion scene based on the three-dimensional geographic position coordinates.
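Claim 3 does not fix the registration model. One common choice for matching a stitched ground-view picture to geographic coordinates is a pre-calibrated planar homography, sketched below; the names are hypothetical, and a real system would calibrate H from surveyed control points.

```python
def pixel_to_geo(u, v, H):
    """Map a pixel (u, v) in the stitched panorama to a ground-plane
    geographic coordinate via a pre-calibrated 3x3 homography H given as
    row-major nested lists. (Illustrative assumption: the patent does not
    specify the matching-and-conversion model.)"""
    w = H[2][0] * u + H[2][1] * v + H[2][2]       # projective scale
    gx = (H[0][0] * u + H[0][1] * v + H[0][2]) / w
    gy = (H[1][0] * u + H[1][1] * v + H[1][2]) / w
    return gx, gy
```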
4. The method of claim 1, wherein the information related to the exception event comprises an event type and a pixel location.
5. An event positioning system based on three-dimensional geographic information scene, characterized in that the system comprises:
the video intelligent analysis module is configured to analyze pixel picture features in monitoring, and if an abnormal event occurs in the current video segment, to send the abnormal event and the three-dimensional geographic position coordinate corresponding to the abnormal event to the event three-dimensional visualization module, wherein the three-dimensional geographic position corresponding to the abnormal event is obtained by matching and converting the related information of the abnormal event with the three-dimensional geographic coordinates;
the event three-dimensional visualization module is configured to acquire the real-time dynamic coordinates of the mobile coordinate conversion module according to the three-dimensional geographic position corresponding to the abnormal event and to determine the candidate person corresponding to the three-dimensional geographic position region; and to acquire the three-dimensional video fusion scene of the video splicing fusion module according to the three-dimensional geographic position corresponding to the abnormal event.
6. The system of claim 5, wherein the mobile coordinate transformation module is specifically configured to:
acquiring real-time positioning information transmitted by a target electronic wristband system and based on UWB and/or RFID technology, wherein the positioning information comprises relative coordinates within a real-time dynamic area;
and determining real-time dynamic coordinates based on the three-dimensional geographic information position coordinates according to the real-time positioning information and the three-dimensional geographic coordinates.
7. The system of claim 5, wherein the video stitching fusion module is specifically configured to:
collecting video pictures from a plurality of bullet cameras, and performing distortion correction and stitching on them;
and matching and converting the stitched video picture with the three-dimensional geographic information coordinates to obtain a three-dimensional video fusion scene based on the three-dimensional geographic position coordinates.
8. The system of claim 5, wherein the information related to the exception event includes an event type and a pixel location.
9. An apparatus, characterized in that the apparatus comprises: a data acquisition device, a processor and a memory;
the data acquisition device is configured to acquire data; the memory is configured to store one or more program instructions; and the processor is configured to execute the one or more program instructions to perform the method of any one of claims 1-4.
10. A computer-readable storage medium having one or more program instructions embodied therein for performing the method of any of claims 1-4.
CN202011065487.3A 2020-09-30 2020-09-30 Event positioning method and system based on three-dimensional geographic information scene Pending CN112364950A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011065487.3A CN112364950A (en) 2020-09-30 2020-09-30 Event positioning method and system based on three-dimensional geographic information scene


Publications (1)

Publication Number Publication Date
CN112364950A true CN112364950A (en) 2021-02-12

Family

ID=74507085

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011065487.3A Pending CN112364950A (en) 2020-09-30 2020-09-30 Event positioning method and system based on three-dimensional geographic information scene

Country Status (1)

Country Link
CN (1) CN112364950A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116051341A (en) * 2023-03-16 2023-05-02 广东泰一高新技术发展有限公司 Two-dimensional and three-dimensional geographic information integrated analysis system, method, equipment and terminal
CN117541957A (en) * 2023-11-08 2024-02-09 继善(广东)科技有限公司 Method, system and medium for generating event solving strategy based on artificial intelligence

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101753992A (en) * 2008-12-17 2010-06-23 深圳市先进智能技术研究所 Multi-mode intelligent monitoring system and method
CN108303673A (en) * 2018-02-01 2018-07-20 杭州球帆科技有限公司 A kind of UWB 3 D positioning systems based on video auxiliary positioning
CN109068103A (en) * 2018-09-17 2018-12-21 北京智汇云舟科技有限公司 Dynamic video space-time virtual reality fusion method and system based on three-dimensional geographic information
CN109525816A (en) * 2018-12-10 2019-03-26 北京智汇云舟科技有限公司 A kind of more ball fusion linked systems of multiple gun based on three-dimensional geographic information and method
CN110879964A (en) * 2019-10-08 2020-03-13 北京智汇云舟科技有限公司 Large scene density analysis system and method based on three-dimensional geographic information
CN111105505A (en) * 2019-11-25 2020-05-05 北京智汇云舟科技有限公司 Method and system for quickly splicing dynamic images of holder based on three-dimensional geographic information
CN111586351A (en) * 2020-04-20 2020-08-25 上海市保安服务(集团)有限公司 Visual monitoring system and method for fusion of three-dimensional videos of venue




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination