CN105721826A - Intelligent combat system - Google Patents

Intelligent combat system

Info

Publication number
CN105721826A
Authority
CN
China
Prior art keywords
video
geographic location
address
storage address
memory
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN201410723460.7A
Other languages
Chinese (zh)
Other versions
CN105721826B (en)
Inventor
胡晓芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Original Assignee
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Priority to CN201410723460.7A
Publication of CN105721826A
Application granted
Publication of CN105721826B
Legal status: Active
Anticipated expiration

Abstract

The invention discloses an intelligent combat system comprising a first video monitoring device at a first geographic location, a second video monitoring device at a second geographic location, a client host, an intelligent combat platform server, a video storage server, and a video abstract retrieval server. The client host selects a target video object. The video storage server stores a first video and a second video at a first storage address and a second storage address respectively, according to the first geographic location and the second geographic location. The video abstract retrieval server determines the moving direction of the target video object according to the first storage address and the second storage address, and marks the movement trajectory of the target video object on a map according to the moving direction and the time points at which the target video object appears in the first video and the second video. Compared with the prior art, the intelligent combat system of the invention improves the efficiency with which the public security department acquires video evidence.

Description

Intelligent combat system
Technical field
The present invention relates to the field of intelligent communication, and in particular to an intelligent combat system.
Background art
An intelligent combat platform is an application platform system that, based on intelligent video image analysis and intelligent video image processing algorithms and closely following public security video investigation practice, provides a "systematic, networked, and intelligent" solution for case video analysis. An existing intelligent combat platform can retrieve video from various locations when the public security system needs it, for reference by public security officers. However, the cameras in those locations are widely distributed and numerous, and officers often have to spend considerable time searching through them, which reduces the efficiency of work on a specific case.
Summary of the invention
The technical problem to be solved by the present invention is to provide an intelligent combat system that improves the work efficiency of the public security department on specific cases.
To solve the above technical problem, the present invention adopts the following technical scheme:
The invention provides an intelligent combat system, characterized in that the intelligent combat system comprises:
a first video monitoring device for capturing a first video at a first geographic location;
a second video monitoring device for capturing a second video at a second geographic location;
a client host communicating with the first video monitoring device and the second video monitoring device through a network, the client host receiving the first video and the second video from the first video monitoring device and the second video monitoring device respectively, the client host also selecting a target video object;
an intelligent combat platform server connected with the client host, the client host uploading the first video and the second video to the intelligent combat platform server;
a video storage server connected with the intelligent combat platform server, the video storage server copying the first video and the second video from the intelligent combat platform server and storing the first video and the second video at a first storage address and a second storage address respectively, according to the first geographic location and the second geographic location; and
a video abstract retrieval server connected with the video storage server and the client host, for determining the moving direction of the target video object according to the first storage address and the second storage address, and for marking the movement trajectory of the target video object on a map according to the moving direction and the time points at which the target video object appears in the first video and the second video,
wherein the client host displays on a display screen a summary moving-direction diagram of the target video object according to the moving direction, and displays on the display screen the map marked with the movement trajectory of the target video object according to the movement trajectory.
In one embodiment, the video storage server comprises:
a memory, the memory comprising a plurality of storage units; and
a processing module connected with the memory, the processing module storing the first video in the storage unit of the memory whose address is D1D2D3D4 (that is, the first storage address is D1D2D3D4), determining the relative bearing of the second geographic location with respect to the first geographic location, and determining the second storage address according to the relative bearing.
In one embodiment, when the second geographic location is due north of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is (D1+1)D2D3D4 (that is, the second storage address is (D1+1)D2D3D4); when the second geographic location is due south of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is D1(D2+1)D3D4 (that is, the second storage address is D1(D2+1)D3D4); when the second geographic location is due east of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is D1D2(D3+1)D4 (that is, the second storage address is D1D2(D3+1)D4); and when the second geographic location is due west of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is D1D2D3(D4+1) (that is, the second storage address is D1D2D3(D4+1)).
In one embodiment, when the second geographic location is to the northeast of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is (D1+1)D2(D3+1)D4 (that is, the second storage address is (D1+1)D2(D3+1)D4); when the second geographic location is to the southwest of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is D1(D2+1)D3(D4+1) (that is, the second storage address is D1(D2+1)D3(D4+1)); when the second geographic location is to the southeast of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is (D1+1)D2(D3+1)D4 (that is, the second storage address is (D1+1)D2(D3+1)D4); and when the second geographic location is to the northwest of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is (D1+1)D2D3(D4+1) (that is, the second storage address is (D1+1)D2D3(D4+1)).
In one embodiment, the video abstract retrieval server comprises:
a retrieval module for retrieving the first frame in which the target video object appears in the first video, and for retrieving the first frame in which the target video object appears in the second video;
a time comparison module for comparing the order in time in which the first frame in the first video and the first frame in the second video appear; and
a bitmap labeling module for generating a bitmap representing the direction of motion of the target video object, and for marking the direction of motion of the target video object on the bitmap according to the time comparison result.
Compared with the prior art, the video storage server arranges the videos into corresponding storage units at storage time according to the positions of the video devices. When the video abstract retrieval server retrieves the video information of the first video device and the second video device, it can compute the relative position of the two video devices on the bitmap from the stored address information, and derive the relative direction on the bitmap through the retrieval and time-comparison operations. Thus, before retrieving the full condensed video, the user can already obtain concrete bearing information about the target object, which also provides a reference for video condensation. As a result, the image processing efficiency of the system is improved, as is the case-handling efficiency of the public security department.
Brief description of the drawings
Fig. 1 shows an intelligent combat system according to an embodiment of the invention.
Fig. 2 shows a video storage server according to an embodiment of the invention.
Fig. 3 shows a video abstract retrieval server according to an embodiment of the invention.
Fig. 4 shows a workflow diagram of an intelligent combat system according to an embodiment of the invention.
Fig. 5 shows a workflow diagram of a video storage server according to an embodiment of the invention.
Fig. 6 shows a workflow diagram of a video storage server according to an embodiment of the invention.
Fig. 7 shows a workflow diagram of a video abstract retrieval server according to an embodiment of the invention.
Detailed description of the invention
Embodiments of the invention will be described in detail below. Although the invention will be illustrated and described in conjunction with certain specific embodiments, it should be noted that the invention is not limited to these embodiments. On the contrary, modifications and equivalent replacements of the invention are all intended to be covered by the scope of the claims of the present invention.
In addition, in order to better describe the invention, numerous specific details are given in the detailed description below. Those skilled in the art will understand that the invention can equally be practiced without these details. In other instances, well-known methods, processes, elements, and circuits are not described in detail, so as to highlight the gist of the invention.
Fig. 1 shows an intelligent combat system 100 according to an embodiment of the invention. The intelligent combat system 100 includes a plurality of video monitoring devices 104, 105, 106, and 107. A video monitoring device may be a camera, a Skynet surveillance monitor, or any other monitoring device capable of recording video. The present description focuses on the analysis of video captured by two cameras, for instance the first video monitoring device 104 at a first location and the second video monitoring device 106 at a second location. The first video monitoring device 104 captures a first video at the first geographic location. The second video monitoring device 106 captures a second video at the second geographic location.
In addition, the intelligent combat system 100 also includes a client host 102, an intelligent combat platform server 108, a video storage server 110, and a video abstract retrieval server 112. The client host 102 communicates with the first video monitoring device 104 and the second video monitoring device 106 through a network. The client host 102 receives the first video and the second video from the first video monitoring device 104 and the second video monitoring device 106 respectively.
The client host 102 also selects a target video object. In one embodiment, a user 101 operates the client host 102 to enter the features of the target video object, for instance the suspect's identity, portrait, clothing, and other information, so that the client host 102 can determine the target video object to be locked onto.
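To make the feature-entry step concrete, the following is a minimal sketch of how the entered features might be grouped into a single target descriptor on the client host 102. The class name and field names are illustrative assumptions and not part of the patent, which only names identity, portrait, and clothing as example features.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class TargetVideoObject:
    """Hypothetical descriptor for the target video object entered by user 101."""
    identity: str                                                # e.g. name or ID of the suspect
    portrait_features: List[str] = field(default_factory=list)  # facial / portrait descriptors
    clothing_features: List[str] = field(default_factory=list)  # e.g. "black coat"

# Example: the user enters the features and the client host locks onto this target.
target = TargetVideoObject(
    identity="suspect-001",
    portrait_features=["male", "approx. 30 years old"],
    clothing_features=["black coat", "white sneakers"],
)
print(target)
```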
The client host 102 receives the first video and the second video from the first video monitoring device 104 and the second video monitoring device 106, and uploads the first video and the second video to the intelligent combat platform server 108. The video storage server 110 copies the first video and the second video from the intelligent combat platform server 108 and stores them at a first storage address and a second storage address respectively, according to the first geographic location and the second geographic location. The video abstract retrieval server 112 is connected with the video storage server 110 and the client host 102. The video abstract retrieval server 112 determines the moving direction of the target video object according to the first storage address and the second storage address, and marks the movement trajectory of the target video object on a map according to the moving direction and the time points at which the target video object appears in the first video and the second video. The client host 102 displays on a display screen a summary moving-direction diagram of the target video object according to the moving direction, and displays on the display screen the map marked with the movement trajectory of the target video object according to the movement trajectory.
Fig. 2 shows a video storage server 110 according to an embodiment of the invention. In one embodiment, the video storage server 110 includes a memory 202 and a processing module 204. The memory 202 includes a plurality of storage units. The processing module 204 is connected with the memory 202. The processing module 204 stores the first video in the storage unit of the memory 202 whose address is D1D2D3D4 (that is, the first storage address is D1D2D3D4), determines the relative bearing of the second geographic location with respect to the first geographic location, and determines the second storage address according to the relative bearing.
Specifically, when the second geographic location is due north of the first geographic location, the processing module 204 stores the second video in the storage unit of the memory 202 whose address is (D1+1)D2D3D4 (that is, the second storage address is (D1+1)D2D3D4); when the second geographic location is due south of the first geographic location, the processing module 204 stores the second video in the storage unit of the memory 202 whose address is D1(D2+1)D3D4 (that is, the second storage address is D1(D2+1)D3D4); when the second geographic location is due east of the first geographic location, the processing module 204 stores the second video in the storage unit of the memory 202 whose address is D1D2(D3+1)D4 (that is, the second storage address is D1D2(D3+1)D4); and when the second geographic location is due west of the first geographic location, the processing module 204 stores the second video in the storage unit of the memory 202 whose address is D1D2D3(D4+1) (that is, the second storage address is D1D2D3(D4+1)).
In addition, when the second geographic location is to the northeast of the first geographic location, the processing module 204 stores the second video in the storage unit of the memory 202 whose address is (D1+1)D2(D3+1)D4 (that is, the second storage address is (D1+1)D2(D3+1)D4); when the second geographic location is to the southwest of the first geographic location, the processing module 204 stores the second video in the storage unit of the memory 202 whose address is D1(D2+1)D3(D4+1) (that is, the second storage address is D1(D2+1)D3(D4+1)); when the second geographic location is to the southeast of the first geographic location, the processing module 204 stores the second video in the storage unit of the memory 202 whose address is (D1+1)D2(D3+1)D4 (that is, the second storage address is (D1+1)D2(D3+1)D4); and when the second geographic location is to the northwest of the first geographic location, the processing module 204 stores the second video in the storage unit of the memory 202 whose address is (D1+1)D2D3(D4+1) (that is, the second storage address is (D1+1)D2D3(D4+1)).
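The addressing rules above amount to fixed per-bearing increments of the four address components D1, D2, D3, and D4. The sketch below is a minimal Python rendering with hypothetical names; it encodes the increments exactly as the embodiment text states them (including the southeast case, which the text gives the same increments as the northeast case).

```python
from typing import Tuple

Address = Tuple[int, int, int, int]  # the four address components (D1, D2, D3, D4)

# Per-bearing increments of (D1, D2, D3, D4), copied from the embodiment text.
BEARING_INCREMENTS = {
    "N":  (1, 0, 0, 0),   # due north:  (D1+1) D2 D3 D4
    "S":  (0, 1, 0, 0),   # due south:  D1 (D2+1) D3 D4
    "E":  (0, 0, 1, 0),   # due east:   D1 D2 (D3+1) D4
    "W":  (0, 0, 0, 1),   # due west:   D1 D2 D3 (D4+1)
    "NE": (1, 0, 1, 0),   # northeast:  (D1+1) D2 (D3+1) D4
    "SW": (0, 1, 0, 1),   # southwest:  D1 (D2+1) D3 (D4+1)
    "SE": (1, 0, 1, 0),   # southeast:  (D1+1) D2 (D3+1) D4, as stated in the text
    "NW": (1, 0, 0, 1),   # northwest:  (D1+1) D2 D3 (D4+1)
}

def second_storage_address(first_address: Address, bearing: str) -> Address:
    """Derive the second storage address from the first storage address and the
    relative bearing of the second geographic location."""
    d1, d2, d3, d4 = first_address
    i1, i2, i3, i4 = BEARING_INCREMENTS[bearing]
    return (d1 + i1, d2 + i2, d3 + i3, d4 + i4)

# Example: the first video is stored at D1D2D3D4 = (3, 5, 7, 2).
print(second_storage_address((3, 5, 7, 2), "N"))   # (4, 5, 7, 2)
print(second_storage_address((3, 5, 7, 2), "NE"))  # (4, 5, 8, 2)
```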
Fig. 3 shows a video abstract retrieval server 112 according to an embodiment of the invention. The video abstract retrieval server 112 includes a retrieval module 302, a time comparison module 304, and a bitmap labeling module 306. The retrieval module 302 retrieves the first frame in which the target video object appears in the first video, and retrieves the first frame in which the target video object appears in the second video. The time comparison module 304 compares the order in time in which the first frame in the first video and the first frame in the second video appear. The bitmap labeling module 306 generates a bitmap representing the direction of motion of the target video object, and marks the direction of motion of the target video object on the bitmap according to the time comparison result.
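As a rough illustration of how these three modules might cooperate, the sketch below, under assumed function names and with a caller-supplied frame matcher (the patent does not specify how the target is recognized in a frame), finds the first appearance of the target in each video, compares the two time points, and marks the resulting direction on a small bitmap.

```python
from typing import Callable, Iterable, List, Optional

def first_appearance(frames: Iterable, matches_target: Callable[[object], bool]) -> Optional[int]:
    """Retrieval module 302: index of the first frame containing the target, or None."""
    for index, frame in enumerate(frames):
        if matches_target(frame):
            return index
    return None

def compare_first_frames(t_first_video: int, t_second_video: int) -> str:
    """Time comparison module 304: which video shows the target first."""
    return "first_video_earlier" if t_first_video <= t_second_video else "second_video_earlier"

def mark_direction(bearing_of_second: str, order: str, size: int = 3) -> List[List[str]]:
    """Bitmap labeling module 306: mark the direction of motion on a small bitmap.

    The target is assumed to move from the camera that saw it earlier toward the
    camera that saw it later, so the marked direction is either the bearing of the
    second camera or its opposite."""
    opposite = {"N": "S", "S": "N", "E": "W", "W": "E",
                "NE": "SW", "SW": "NE", "SE": "NW", "NW": "SE"}
    offsets = {"N": (-1, 0), "S": (1, 0), "E": (0, 1), "W": (0, -1),
               "NE": (-1, 1), "SW": (1, -1), "SE": (1, 1), "NW": (-1, -1)}
    direction = bearing_of_second if order == "first_video_earlier" else opposite[bearing_of_second]
    bitmap = [["." for _ in range(size)] for _ in range(size)]
    center = size // 2
    bitmap[center][center] = "o"                        # position of the earlier sighting
    row, col = center + offsets[direction][0], center + offsets[direction][1]
    bitmap[row][col] = ">"                              # marked direction of motion
    return bitmap
```

For example, if the target is seen first by the first camera and the second camera lies due north of it, the sketch marks a northward motion on the bitmap.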
The advantage is that the video storage server 110 arranges the videos into corresponding storage units at storage time according to the positions of the video devices. When the video abstract retrieval server 112 retrieves the video information of the first video device and the second video device, it can compute the relative position of the two video devices on the bitmap from the stored address information, and derive the relative direction on the bitmap through the retrieval and time-comparison operations. Thus, before retrieving the full condensed video, the user 101 can already obtain concrete bearing information about the target object, which also provides a reference for video condensation. As a result, the image processing efficiency of the system is improved, as is the case-handling efficiency of the public security department.
Fig. 4 shows a workflow diagram 400 of the intelligent combat system 100 according to an embodiment of the invention. Fig. 4 describes a method of monitoring a target video object.
In step 402, a first video at a first geographic location is captured. In step 404, a second video at a second geographic location is captured. In step 406, a target video object is selected. In step 408, the first video and the second video are stored at a first storage address and a second storage address respectively, according to the first geographic location and the second geographic location. In step 410, the moving direction of the target video object is determined according to the first storage address and the second storage address. In step 412, the movement trajectory of the target video object is marked on a map according to the moving direction and the time points at which the target video object appears in the first video and the second video. In step 414, a summary moving-direction diagram of the target video object is displayed according to the moving direction. In step 416, the map marked with the movement trajectory of the target video object is displayed according to the movement trajectory.
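Tying these steps together, the short driver below reuses the hypothetical helpers from the sketches above (second_storage_address, first_appearance, compare_first_frames, mark_direction) and walks through steps 408 to 414 for two cameras; capturing the videos (steps 402 and 404) and rendering the map (step 416) are outside its scope.

```python
def run_workflow(first_video, second_video, matches_target, first_address, bearing_of_second):
    # Step 408: derive the two storage addresses from the geographic relationship.
    second_address = second_storage_address(first_address, bearing_of_second)

    # Step 410: find the target's first appearance in each video and compare the time points.
    t1 = first_appearance(first_video, matches_target)
    t2 = first_appearance(second_video, matches_target)
    if t1 is None or t2 is None:
        return None  # the target was not seen by both cameras
    order = compare_first_frames(t1, t2)

    # Steps 412-414: mark the direction of motion on a bitmap for the summary display.
    bitmap = mark_direction(bearing_of_second, order)
    return {"first_address": first_address, "second_address": second_address,
            "order": order, "direction_bitmap": bitmap}
```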
Fig. 5 shows a workflow diagram 408 of the video storage server 110 according to an embodiment of the invention. Fig. 5 describes step 408 of Fig. 4 in more detail. In step 502, the first video is stored in the storage unit of the memory whose address is D1D2D3D4 (that is, the first storage address is D1D2D3D4). In step 504, the relative bearing of the second geographic location with respect to the first geographic location is determined, and the second storage address is determined according to the relative bearing. In step 506, if the second geographic location is due north of the first geographic location, the flow proceeds to step 507, in which the second video is stored in the storage unit of the memory whose address is (D1+1)D2D3D4 (that is, the second storage address is (D1+1)D2D3D4). Otherwise, the flow proceeds to step 508.
In step 508, if the second geographic location is due south of the first geographic location, the flow proceeds to step 509, in which the second video is stored in the storage unit of the memory whose address is D1(D2+1)D3D4 (that is, the second storage address is D1(D2+1)D3D4). Otherwise, the flow proceeds to step 510.
In step 510, if the second geographic location is due east of the first geographic location, the flow proceeds to step 511, in which the second video is stored in the storage unit of the memory whose address is D1D2(D3+1)D4 (that is, the second storage address is D1D2(D3+1)D4). Otherwise, the flow proceeds to step 512.
In step 512, if the second geographic location is due west of the first geographic location, the flow proceeds to step 513, in which the second video is stored in the storage unit of the memory whose address is D1D2D3(D4+1) (that is, the second storage address is D1D2D3(D4+1)). Otherwise, the flow proceeds to step 514 to check the remaining, non-cardinal directions.
Fig. 6 shows a workflow diagram 514 of the video storage server 110 according to an embodiment of the invention. Fig. 6 further describes step 514 of Fig. 5.
In step 602, if the second geographic location is to the northeast of the first geographic location, the flow proceeds to step 603, in which the second video is stored in the storage unit of the memory whose address is (D1+1)D2(D3+1)D4 (that is, the second storage address is (D1+1)D2(D3+1)D4). Otherwise, the flow proceeds to step 604.
In step 604, if the second geographic location is to the southwest of the first geographic location, the flow proceeds to step 605, in which the second video is stored in the storage unit of the memory whose address is D1(D2+1)D3(D4+1) (that is, the second storage address is D1(D2+1)D3(D4+1)). Otherwise, the flow proceeds to step 606.
In step 606, if the second geographic location is to the southeast of the first geographic location, the flow proceeds to step 607, in which the second video is stored in the storage unit of the memory whose address is (D1+1)D2(D3+1)D4 (that is, the second storage address is (D1+1)D2(D3+1)D4). Otherwise, the flow proceeds to step 608.
In step 608, it is determined that the second geographic location is to the northwest of the first geographic location. In step 609, the second video is stored in the storage unit of the memory whose address is (D1+1)D2D3(D4+1) (that is, the second storage address is (D1+1)D2D3(D4+1)).
Fig. 7 shows a workflow diagram 410 of the video abstract retrieval server 112 according to an embodiment of the invention. Fig. 7 further describes step 410 of Fig. 4. In step 702, the first frame in which the target video object appears in the first video is retrieved. In step 704, the first frame in which the target video object appears in the second video is retrieved. In step 706, the order in time in which the first frame in the first video and the first frame in the second video appear is compared. In step 708, a bitmap representing the direction of motion of the target video object is generated. In step 720, the direction of motion of the target video object is marked on the bitmap according to the time comparison result.
The advantage is that, by performing the steps of Fig. 4 to Fig. 7, the videos are arranged into corresponding storage units at storage time according to the positions of the video devices. Thus, when the video information of the first video device and the second video device is retrieved, the relative position of the two video devices can be computed on the bitmap from the stored address information, and the relative direction on the bitmap can be derived through the retrieval and time-comparison operations. Thus, before video condensation is performed, the user 101 can already obtain concrete bearing information about the target object, which also provides a reference for video condensation. As a result, the image processing efficiency of the system is improved, as is the case-handling efficiency of the public security department.
The embodiments and drawings described above are only specific, typical embodiments of the present invention. Obviously, various additions, modifications, and replacements may be made without departing from the spirit of the invention and the scope defined by the claims. Those skilled in the art should understand that, in practical applications, the invention may be varied in form, structure, layout, proportion, material, element, component, and other aspects according to the specific environment and operating requirements without departing from the principles of the invention. Therefore, the embodiments disclosed herein are illustrative only and not restrictive; the scope of the invention is defined by the appended claims and their legal equivalents and is not limited to the foregoing description.

Claims (5)

1. An intelligent combat system, characterized in that the intelligent combat system comprises:
a first video monitoring device for capturing a first video at a first geographic location;
a second video monitoring device for capturing a second video at a second geographic location;
a client host communicating with the first video monitoring device and the second video monitoring device through a network, the client host receiving the first video and the second video from the first video monitoring device and the second video monitoring device respectively, the client host also selecting a target video object;
an intelligent combat platform server connected with the client host, the client host uploading the first video and the second video to the intelligent combat platform server;
a video storage server connected with the intelligent combat platform server, the video storage server copying the first video and the second video from the intelligent combat platform server and storing the first video and the second video at a first storage address and a second storage address respectively, according to the first geographic location and the second geographic location; and
a video abstract retrieval server connected with the video storage server and the client host, for determining the moving direction of the target video object according to the first storage address and the second storage address, and for marking the movement trajectory of the target video object on a map according to the moving direction and the time points at which the target video object appears in the first video and the second video,
wherein the client host displays on a display screen a summary moving-direction diagram of the target video object according to the moving direction, and displays on the display screen the map marked with the movement trajectory of the target video object according to the movement trajectory.
2. The intelligent combat system according to claim 1, characterized in that the video storage server comprises:
a memory, the memory comprising a plurality of storage units; and
a processing module connected with the memory, the processing module storing the first video in the storage unit of the memory whose address is D1D2D3D4 (that is, the first storage address is D1D2D3D4), determining the relative bearing of the second geographic location with respect to the first geographic location, and determining the second storage address according to the relative bearing.
3. The intelligent combat system according to claim 2, characterized in that, when the second geographic location is due north of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is (D1+1)D2D3D4 (that is, the second storage address is (D1+1)D2D3D4); when the second geographic location is due south of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is D1(D2+1)D3D4 (that is, the second storage address is D1(D2+1)D3D4); when the second geographic location is due east of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is D1D2(D3+1)D4 (that is, the second storage address is D1D2(D3+1)D4); and when the second geographic location is due west of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is D1D2D3(D4+1) (that is, the second storage address is D1D2D3(D4+1)).
4. The intelligent combat system according to claim 3, characterized in that, when the second geographic location is to the northeast of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is (D1+1)D2(D3+1)D4 (that is, the second storage address is (D1+1)D2(D3+1)D4); when the second geographic location is to the southwest of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is D1(D2+1)D3(D4+1) (that is, the second storage address is D1(D2+1)D3(D4+1)); when the second geographic location is to the southeast of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is (D1+1)D2(D3+1)D4 (that is, the second storage address is (D1+1)D2(D3+1)D4); and when the second geographic location is to the northwest of the first geographic location, the processing module stores the second video in the storage unit of the memory whose address is (D1+1)D2D3(D4+1) (that is, the second storage address is (D1+1)D2D3(D4+1)).
5. The intelligent combat system according to claim 4, characterized in that the video abstract retrieval server comprises:
a retrieval module for retrieving the first frame in which the target video object appears in the first video, and for retrieving the first frame in which the target video object appears in the second video;
a time comparison module for comparing the order in time in which the first frame in the first video and the first frame in the second video appear; and
a bitmap labeling module for generating a bitmap representing the direction of motion of the target video object, and for marking the direction of motion of the target video object on the bitmap according to the time comparison result.
CN201410723460.7A 2014-12-02 2014-12-02 Intelligent combat system Active CN105721826B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410723460.7A CN105721826B (en) 2014-12-02 2014-12-02 Intelligent combat system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410723460.7A CN105721826B (en) 2014-12-02 2014-12-02 A kind of intelligence actual combat system

Publications (2)

Publication Number Publication Date
CN105721826A true CN105721826A (en) 2016-06-29
CN105721826B CN105721826B (en) 2018-06-12

Family

ID=56146761

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410723460.7A Active CN105721826B (en) 2014-12-02 2014-12-02 Intelligent combat system

Country Status (1)

Country Link
CN (1) CN105721826B (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120314078A1 (en) * 2011-06-13 2012-12-13 Sony Corporation Object monitoring apparatus and method thereof, camera apparatus and monitoring system
CN103021186A (en) * 2012-12-28 2013-04-03 中国科学技术大学 Vehicle monitoring method and vehicle monitoring system
CN103096185A (en) * 2012-12-30 2013-05-08 信帧电子技术(北京)有限公司 Method and device of video abstraction generation
CN103942811A (en) * 2013-01-21 2014-07-23 中国电信股份有限公司 Method and system for determining motion trajectory of characteristic object in distributed and parallel mode

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111709972A (en) * 2020-06-11 2020-09-25 石家庄铁道大学 Space constraint-based method for quickly concentrating wide-area monitoring video
CN111709972B (en) * 2020-06-11 2022-03-11 石家庄铁道大学 Space constraint-based method for quickly concentrating wide-area monitoring video

Also Published As

Publication number Publication date
CN105721826B (en) 2018-06-12

Similar Documents

Publication Publication Date Title
US10289940B2 (en) Method and apparatus for providing classification of quality characteristics of images
Geraldes et al. UAV-based situational awareness system using deep learning
US20180373940A1 (en) Image Location Through Large Object Detection
WO2017024975A1 (en) Unmanned aerial vehicle portable ground station processing method and system
CN106547814A (en) A kind of power transmission line unmanned machine patrols and examines the structuring automatic archiving method of image
JP5234476B2 (en) Method, system and computer readable recording medium for performing image matching on panoramic images using a graph structure
TW201139990A (en) Video processing system providing overlay of selected geospatially-tagged metadata relating to a geolocation outside viewable area and related methods
CN103679730A (en) Video abstract generating method based on GIS
CN104486585B (en) A kind of city magnanimity monitor video management method and system based on GIS
CN108683877A (en) Distributed massive video resolution system based on Spark
CN105120237A (en) Wireless image monitoring method based on 4G technology
CN105141924A (en) Wireless image monitoring system based on 4G technology
TW201145983A (en) Video processing system providing correlation between objects in different georeferenced video feeds and related methods
US9836826B1 (en) System and method for providing live imagery associated with map locations
CN111339893A (en) Pipeline detection system and method based on deep learning and unmanned aerial vehicle
CN114255407B (en) High-resolution-based anti-unmanned aerial vehicle multi-target identification and tracking video detection method
TW201142751A (en) Video processing system generating corrected geospatial metadata for a plurality of georeferenced video feeds and related methods
CN112815923A (en) Visual positioning method and device
CN105809108A (en) Pedestrian positioning method and system based on distributed vision
CN101977206A (en) Mobile routing inspection geographical information system based on GML (Generalized Markup Language) and Web Services and realization method thereof
Lin et al. Moving camera analytics: Emerging scenarios, challenges, and applications
Zhu et al. PairCon-SLAM: Distributed, online, and real-time RGBD-SLAM in large scenarios
CN110956115B (en) Scene recognition method and device
CN105721826A (en) Intelligent combat system
CN105282496A (en) Method for tracking target video object

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant