CN105227902B - A kind of intelligence actual combat monitoring method - Google Patents

A kind of intelligence actual combat monitoring method

Info

Publication number
CN105227902B
CN105227902B
Authority
CN
China
Prior art keywords
video
storage address
video object
geographical location
target video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201410720917.9A
Other languages
Chinese (zh)
Other versions
CN105227902A (en)
Inventor
胡晓芳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Original Assignee
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd filed Critical SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Priority to CN201410720917.9A priority Critical patent/CN105227902B/en
Publication of CN105227902A publication Critical patent/CN105227902A/en
Application granted granted Critical
Publication of CN105227902B publication Critical patent/CN105227902B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Abstract

The invention discloses an intelligent actual combat monitoring method, including: acquiring multiple videos at multiple geographical locations; selecting a target video object; reading the geographical location information of the multiple videos; extracting the longitudes and latitudes of the multiple geographical locations; sorting the multiple longitudes and the multiple latitudes respectively, and determining multiple storage address numbers according to the sorting results; storing the multiple videos, according to their geographical locations, in the multiple storage addresses corresponding to the multiple storage address numbers; judging the running direction of the target video object according to the multiple storage addresses; marking the running track of the target video object on a map according to the running direction and the points in time at which the target video object appears in the multiple videos; and displaying a summary running-direction diagram of the target video object according to the running direction, and displaying the map labeled with the running track of the target video object according to the running track.

Description

Intelligent actual combat monitoring method
Technical field
The present invention relates to the field of communications, and in particular to an intelligent actual combat monitoring method.
Background art
An intelligent actual combat platform is based on intelligent video image analysis technology and intelligent video image processing algorithms; it is closely integrated with the video investigation business of public security, and provides a "systematized, networked, intelligent" application platform system for case video analysis. An existing intelligent actual combat platform can promptly retrieve videos from various locations when a public security department handles a case, for reference by public security officers. However, cameras are widely distributed and numerous, and public security officers usually spend a great deal of time looking for a target object. This reduces the efficiency of handling public security cases.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for tracking a target video object, so as to improve video processing efficiency and the user's case-handling efficiency.
In order to solve the above technical problem, the present invention adopts the following technical solution:
The present invention provides an intelligent actual combat monitoring method, characterized in that the intelligent actual combat monitoring method includes the following steps:
Acquiring multiple videos at multiple geographical locations;
Selecting a target video object;
Reading the geographical location information of the multiple videos;
Extracting the longitudes and latitudes of the multiple geographical locations;
Sorting the multiple longitudes and the multiple latitudes respectively, and determining multiple storage address numbers according to the sorting results;
Storing the multiple videos, according to their geographical locations, in the multiple storage addresses corresponding to the multiple storage address numbers;
Judging the running direction of the target video object according to the multiple storage addresses;
Marking the running track of the target video object on a map according to the running direction and the points in time at which the target video object appears in the multiple videos; and
Displaying a summary running-direction diagram of the target video object according to the running direction, and displaying the map labeled with the running track of the target video object according to the running track.
In one embodiment, each storage address number consists of two digits, and the intelligent actual combat monitoring method further includes:
Determining the first digit of the storage address number according to the longitudes of the multiple geographical locations, wherein if the longitude of the X-th geographical location is smaller than the longitude of the Y-th geographical location, the first digit of the X-th storage address is smaller than the first digit of the Y-th storage address;
Determining the second digit of the storage address number according to the latitudes of the multiple geographical locations, wherein if the latitude of the N-th geographical location is smaller than the latitude of the M-th geographical location, the second digit of the N-th storage address is smaller than the second digit of the M-th storage address, where X, Y, M and N are positive integers no greater than the total number of the multiple videos.
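For illustration only (this sketch is not part of the original disclosure), the two-digit numbering described above can be expressed in Python; the function name, coordinate values and data layout are assumptions:

```python
def storage_address_numbers(locations):
    """Assign a two-digit storage address number to each (longitude, latitude) pair.

    The first digit is the rank of the longitude and the second digit the rank of
    the latitude, so a location further east/north gets a larger digit.
    """
    lon_rank = {loc: i + 1 for i, loc in enumerate(sorted(locations, key=lambda p: p[0]))}
    lat_rank = {loc: i + 1 for i, loc in enumerate(sorted(locations, key=lambda p: p[1]))}
    return {loc: 10 * lon_rank[loc] + lat_rank[loc] for loc in locations}

# Example with four hypothetical camera locations (longitude, latitude):
cams = [(104.06, 30.66), (103.90, 30.80), (104.10, 30.55), (104.20, 30.60)]
print(storage_address_numbers(cams))  # -> numbers 23, 14, 31, 42 respectively
```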
In one embodiment, the intelligent actual combat monitoring method further includes:
Retrieving, in each of the multiple videos, the first frame in which the target video object appears;
Comparing the order in time in which the multiple first frames appear in the multiple videos;
Generating a bitmap representing the direction of motion of the target video object; and
Marking the direction of motion of the target video object on the bitmap according to the time comparison result.
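Again for illustration only, a minimal sketch of this embodiment, assuming the first-appearance timestamps are available per storage address number (all names and data here are hypothetical):

```python
from datetime import datetime

def direction_sequence(first_appearances):
    """first_appearances maps a storage address number -> timestamp of the first
    frame in which the target object appears in that camera's video.

    Returns the storage address numbers sorted by appearance time, i.e. the order
    in which the target passed the cameras; consecutive pairs give the motion
    direction to mark on the bitmap.
    """
    return [addr for addr, _ in sorted(first_appearances.items(), key=lambda kv: kv[1])]

# Hypothetical data: the object is first seen at camera 14, then 23, then 31.
times = {
    23: datetime(2014, 12, 2, 10, 5),
    14: datetime(2014, 12, 2, 10, 1),
    31: datetime(2014, 12, 2, 10, 9),
}
print(direction_sequence(times))  # -> [14, 23, 31]
```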
Compared with the prior art, the relative positions of the source video sequences are recorded through the storage addresses of the memory, and a direction bitmap is generated from the storage address information, so that the basic running track of the target video object can be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a large amount of time and improves the user's work efficiency. When the user needs to study the target object in greater depth, the result of this method provides a basis for further study. In particular, it provides a basis for subsequent video concentration, which can greatly reduce the amount of computation of video concentration and improve working and case-handling efficiency. In addition, using longitude and latitude information to determine the storage addresses simplifies the computation (for example, there is no need to judge relative directions) and saves storage space.
Brief description of the drawings
Fig. 1 shows an intelligent actual combat system according to an embodiment of the present invention.
Fig. 2 shows a video storage server according to an embodiment of the present invention.
Fig. 3 shows a processor according to an embodiment of the present invention.
Fig. 4 shows a schematic diagram of the distribution of video monitoring devices according to an embodiment of the present invention.
Fig. 5 shows a video summary retrieval server according to an embodiment of the present invention.
Fig. 6 shows a schematic diagram of a bitmap according to an embodiment of the present invention.
Fig. 7 shows a flow chart of a method for tracking a target video object according to an embodiment of the present invention.
Fig. 8 shows a further flow chart of the method for tracking the target video object according to an embodiment of the present invention.
Fig. 9 shows a further flow chart of the method for tracking the target video object according to an embodiment of the present invention.
Figure 10 shows a further flow chart of the method for tracking the target video object according to an embodiment of the present invention.
Figure 11 shows a further flow chart of the method for tracking the target video object according to an embodiment of the present invention.
Figure 12 shows a video storage server according to another embodiment of the present invention.
Figure 13 shows a schematic diagram of a bitmap according to an embodiment of the present invention.
Figure 14 shows a schematic diagram of a summary direction bitmap according to an embodiment of the present invention.
Figure 15 shows an intelligent actual combat monitoring method according to an embodiment of the present invention.
Figure 16 shows another intelligent actual combat monitoring method according to an embodiment of the present invention.
Figure 17 shows another intelligent actual combat monitoring method according to an embodiment of the present invention.
Figure 18 shows another structure of the video summary retrieval server according to an embodiment of the present invention.
Figure 19 shows a flow chart of a video concentration method according to an embodiment of the present invention.
Figure 20 shows a flow chart of another video concentration method according to an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention will be described in detail below. Although the present invention will be illustrated and described in connection with some specific embodiments, it should be noted that the present invention is not limited to these embodiments. On the contrary, modifications or equivalent replacements made to the present invention are intended to fall within the scope of the claims of the present invention.
In addition, in order to better illustrate the present invention, numerous specific details are given in the following detailed description. Those skilled in the art will understand that the present invention can be practiced without these details. In other instances, well-known methods, processes, elements and circuits are not described in detail, in order to highlight the gist of the present invention.
Fig. 1 shows an intelligent actual combat system 100 according to an embodiment of the present invention. The intelligent actual combat system 100 includes multiple video monitoring devices 104, 105, 106 and 107. A video monitoring device can be a camera, a Skynet monitor or another monitoring device capable of recording video. Although only four video monitoring devices are shown in the embodiment of Fig. 1, those skilled in the art should know that other numbers of video monitoring devices may be included within the scope of the present invention. The multiple video monitoring devices 104 to 107 are located at multiple geographical locations and are used to acquire multiple videos at these geographical locations.
The intelligent actual combat system 100 further includes a client host 102, an intelligent actual combat platform server 108, a video storage server 110 and a video summary retrieval server 112. The client host 102 communicates with the multiple video monitoring devices 104-107 over a network and receives the multiple videos from the multiple video monitoring devices 104-107 respectively. In addition, the client host 102 also selects the target video object.
The intelligent actual combat platform server 108 is connected with the client host 102. The client host 102 uploads the multiple videos to the intelligent actual combat platform server 108. The video storage server 110 is connected with the intelligent actual combat platform server 108. The video storage server 110 copies the multiple videos from the intelligent actual combat platform server 108 and stores the multiple videos in multiple storage addresses according to the geographical locations of the multiple videos.
The video summary retrieval server 112 is connected with the video storage server 110 and the client host 102. The video summary retrieval server 112 judges the running direction of the target video object according to the multiple storage addresses, and marks the running track of the target video object on a map according to the running direction and the points in time at which the target video object appears in the multiple videos. The client host 102 displays, on a display screen, a summary running-direction diagram of the target video object according to the running direction, and displays, on the display screen, the map labeled with the running track of the target video object according to the running track.
Fig. 2 shows the video storage server 110 according to an embodiment of the present invention. In the embodiment of Fig. 2, the video storage server 110 includes a memory 202 and a processing module 204. The memory 202 includes multiple storage units. The processing module 204 is connected with the memory 202. The processing module 204 selects a reference position, calculates multiple distances between the multiple geographical locations and the reference position, compares the multiple distances, and determines the multiple storage addresses according to the multiple distances.
Fig. 3 shows the processor 204 according to an embodiment of the present invention. The processor 204 includes a numbering module 302, a bearing judgment module 304 and an address determination module 306. The numbering module 302 assigns numbers to the multiple videos according to the multiple distances, wherein a video with a smaller distance value gets a smaller number than a video with a larger distance value. The bearing judgment module 304 judges the relative bearings of the multiple geographical locations with respect to the reference position.
Fig. 4 shows a schematic diagram of the distribution of the video monitoring devices according to an embodiment of the present invention. As shown in Fig. 4, if a reference position 402 is selected, then, in order of increasing distance from the reference position 402, the video monitoring devices are 104, 106, 107 and 105. Therefore, the video monitoring devices 104, 106, 107 and 105 can be assigned the numbers 1, 2, 3 and 4 respectively.
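A minimal sketch of this distance-based numbering, assuming planar coordinates and hypothetical positions for devices 104-107 (the coordinates are not taken from Fig. 4):

```python
import math

def number_by_distance(reference, devices):
    """Assign numbers 1..n to devices in order of increasing distance from the
    reference position; the closest device gets number 1."""
    ordered = sorted(devices, key=lambda d: math.dist(reference, devices[d]))
    return {device: i + 1 for i, device in enumerate(ordered)}

# Hypothetical coordinates: devices 104-107 around reference position 402.
reference_402 = (0.0, 0.0)
positions = {104: (1.0, 1.0), 105: (5.0, -4.0), 106: (-2.0, -2.0), 107: (3.0, 2.5)}
print(number_by_distance(reference_402, positions))  # {104: 1, 106: 2, 107: 3, 105: 4}
```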
Returning to Fig. 3, the address determination module 306 determines the multiple storage addresses according to the multiple numbers and the multiple relative bearings. In particular, for the N-th video monitoring device (where N is less than or equal to the total number of the multiple videos, and B_N is the number assigned to the N-th video by the numbering module), when the N-th geographical location is due north of the reference position 402, the address determination module 306 stores the N-th video in the storage unit of the memory 202 whose address is (D1+B_N)D2D3D4 (that is, the N-th storage address is (D1+B_N)D2D3D4). When the N-th geographical location is due south of the reference position 402, the address determination module 306 stores the N-th video in the storage unit whose address is D1(D2+B_N)D3D4 (that is, the N-th storage address is D1(D2+B_N)D3D4). When the N-th geographical location is due east of the reference position 402, the address determination module 306 stores the N-th video in the storage unit whose address is D1D2(D3+B_N)D4 (that is, the N-th storage address is D1D2(D3+B_N)D4). When the N-th geographical location is due west of the reference position 402, the address determination module 306 stores the N-th video in the storage unit whose address is D1D2D3(D4+B_N) (that is, the N-th storage address is D1D2D3(D4+B_N)).
In addition, when the N-th geographical location is northeast of the reference position 402, the address determination module 306 stores the N-th video in the storage unit whose address is (D1+B_N)D2(D3+B_N)D4 (that is, the N-th storage address is (D1+B_N)D2(D3+B_N)D4). When the N-th geographical location is southwest of the reference position 402, the address determination module 306 stores the N-th video in the storage unit whose address is D1(D2+B_N)D3(D4+B_N) (that is, the N-th storage address is D1(D2+B_N)D3(D4+B_N)). When the N-th geographical location is southeast of the reference position 402, the address determination module 306 stores the N-th video in the storage unit whose address is (D1+B_N)D2(D3+B_N)D4 (that is, the N-th storage address is (D1+B_N)D2(D3+B_N)D4). When the N-th geographical location is northwest of the reference position 402, the address determination module 306 stores the N-th video in the storage unit whose address is (D1+B_N)D2D3(D4+B_N) (that is, the N-th storage address is (D1+B_N)D2D3(D4+B_N)).
Therefore, as shown in Fig. 4, the storage addresses of the videos of the video monitoring devices 104, 105, 106 and 107 are (D1+1)D2D3D4, (D1+4)D2D3(D4+4), D1(D2+2)(D3+2)D4 and D1D2(D3+3)D4 respectively.
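For illustration only, the bearing-to-address rule described above can be sketched as follows; D1-D4 are treated as the digits of a base address, the helper names are hypothetical, and the intercardinal entries mirror the description verbatim (note that the description assigns the same pattern to the northeast and southeast cases):

```python
# Which of the four base digits D1..D4 get B_N added, per relative bearing,
# following the mapping stated in the description (indices: 0->D1, 1->D2, 2->D3, 3->D4).
BEARING_TO_DIGITS = {
    "north":     [0],
    "south":     [1],
    "east":      [2],
    "west":      [3],
    "northeast": [0, 2],
    "southwest": [1, 3],
    "southeast": [0, 2],   # as written in the description (same as northeast)
    "northwest": [0, 3],
}

def storage_address(base_digits, bearing, b_n):
    """Return the storage address digits for the N-th video: start from the base
    digits (D1, D2, D3, D4) and add the video's number b_n to the digits selected
    by the relative bearing of its location from the reference position."""
    digits = list(base_digits)
    for i in BEARING_TO_DIGITS[bearing]:
        digits[i] += b_n
    return digits

# Example: device 104 (number 1) lies due north of the reference position,
# so its address becomes (D1+1) D2 D3 D4.
print(storage_address((0, 0, 0, 0), "north", 1))  # [1, 0, 0, 0]
```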
Fig. 5 shows the video summary retrieval server 112 according to an embodiment of the present invention. The video summary retrieval server 112 includes a retrieval module 502, a time comparison module 504 and a bitmap labeling module 506. The retrieval module 502 retrieves, in each of the multiple videos, the first frame in which the target video object appears. The time comparison module 504 compares the order in time in which the multiple first frames appear. The bitmap labeling module 506 generates a bitmap representing the direction of motion of the target video object, and marks the direction of motion of the target video object on the bitmap according to the time comparison result.
Fig. 6 shows a schematic diagram 600 of the bitmap according to an embodiment of the present invention. In the embodiment of Fig. 6, the retrieval module 502 finds that the target video object appears in the videos of the video monitoring devices 104, 105 and 106, and that the first frames appear, in order of time, in the videos of devices 105, 104 and 106. Therefore, from the storage address of each video and the retrieval result, the bitmap of Fig. 6 can be obtained, showing the running direction of the target video object.
The advantage is that the relative positions of the source video sequences are recorded through the storage addresses of the memory, and a direction bitmap is generated from the storage address information, so that the basic running track of the target video object can be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a large amount of time and improves the user's work efficiency. When the user needs to study the target object in greater depth, the result of this method provides a basis for further study. In particular, it provides a basis for subsequent video concentration, which can greatly reduce the amount of computation of video concentration and improve working and case-handling efficiency (as will be further described with reference to Figures 18-20).
Fig. 7 shows a flow chart 700 of a method for tracking a target video object according to an embodiment of the present invention. In step 702, multiple videos at multiple geographical locations are acquired. In step 704, a target video object is selected. In step 706, the multiple videos are stored in multiple storage addresses according to the geographical locations of the multiple videos. In step 708, the running direction of the target video object is judged according to the multiple storage addresses. In step 710, the running track of the target video object is marked on a map according to the running direction and the points in time at which the target video object appears in the multiple videos. In step 712, a summary running-direction diagram of the target video object is displayed according to the running direction. In step 714, the map labeled with the running track of the target video object is displayed according to the running track.
Fig. 8 shows a further flow chart 706 of the method for tracking the target video object according to an embodiment of the present invention. Fig. 8 is a further explanation of step 706 in Fig. 7. In step 802, a reference position is selected. In step 804, multiple distances between the multiple geographical locations and the reference position are calculated. In step 806, the multiple distances are compared. In step 808, the multiple storage addresses are determined according to the multiple distances.
Fig. 9 shows a further flow chart 808 of the method for tracking the target video object according to an embodiment of the present invention. Fig. 9 is a further explanation of step 808 in Fig. 8. In step 902, numbers are assigned to the multiple videos according to the multiple distances, wherein a video with a smaller distance value gets a smaller number than a video with a larger distance value. In step 904, the relative bearings of the multiple geographical locations with respect to the reference position are judged. In step 906, the multiple storage addresses are determined according to the multiple numbers and the multiple relative bearings.
Figure 10 shows a further flow chart 906 of the method for tracking the target video object according to an embodiment of the present invention. Figure 10 is a further explanation of step 906 in Fig. 9.
In step 1002, when the N-th geographical location is due north of the reference position, the method proceeds to step 1003, in which the N-th video is stored in the storage unit of the memory whose address is (D1+B_N)D2D3D4 (that is, the N-th storage address is (D1+B_N)D2D3D4), where N is less than or equal to the total number of the multiple videos and B_N is the number assigned to the N-th video by the numbering module. Otherwise, the method proceeds to step 1004.
In step 1004, when the N-th geographical location is due south of the reference position, the method proceeds to step 1005, in which the N-th video is stored in the storage unit whose address is D1(D2+B_N)D3D4 (that is, the N-th storage address is D1(D2+B_N)D3D4). Otherwise, the method proceeds to step 1006.
In step 1006, when the N-th geographical location is due east of the reference position, the method proceeds to step 1007, in which the N-th video is stored in the storage unit whose address is D1D2(D3+B_N)D4 (that is, the N-th storage address is D1D2(D3+B_N)D4). Otherwise, the method proceeds to step 1008.
In step 1008, when the N-th geographical location is due west of the reference position, the method proceeds to step 1009, in which the N-th video is stored in the storage unit whose address is D1D2D3(D4+B_N) (that is, the N-th storage address is D1D2D3(D4+B_N)). Otherwise, the method proceeds to step 1010.
In step 1010, when the N-th geographical location is northeast of the reference position, the method proceeds to step 1011, in which the N-th video is stored in the storage unit whose address is (D1+B_N)D2(D3+B_N)D4 (that is, the N-th storage address is (D1+B_N)D2(D3+B_N)D4). Otherwise, the method proceeds to step 1012.
In step 1012, when the N-th geographical location is southwest of the reference position, the method proceeds to step 1013, in which the N-th video is stored in the storage unit whose address is D1(D2+B_N)D3(D4+B_N) (that is, the N-th storage address is D1(D2+B_N)D3(D4+B_N)). Otherwise, the method proceeds to step 1014.
In step 1014, when the N-th geographical location is southeast of the reference position, the method proceeds to step 1015, in which the N-th video is stored in the storage unit whose address is (D1+B_N)D2(D3+B_N)D4 (that is, the N-th storage address is (D1+B_N)D2(D3+B_N)D4). Otherwise, the method proceeds to step 1016.
In step 1016, it can be determined that the N-th geographical location is northwest of the reference position; in this case, the method proceeds to step 1018, in which the N-th video is stored in the storage unit whose address is (D1+B_N)D2D3(D4+B_N) (that is, the N-th storage address is (D1+B_N)D2D3(D4+B_N)).
Figure 11 shows a further flow chart 710 of the method for tracking the target video object according to an embodiment of the present invention. Figure 11 is a further explanation of step 710 in Fig. 7.
In step 1102, the first frame in which the target video object appears is retrieved in each of the multiple videos. In step 1104, the order in time in which the multiple first frames appear in the multiple videos is compared. In step 1106, a bitmap representing the direction of motion of the target video object is generated, and the direction of motion of the target video object is marked on the bitmap according to the time comparison result.
The advantage is that the relative positions of the source video sequences are recorded through the storage addresses of the memory, and a direction bitmap is generated from the storage address information, so that the basic running track of the target video object can be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a large amount of time and improves the user's work efficiency. When the user needs to study the target object in greater depth, the result of this method provides a basis for further study. In particular, it provides a basis for subsequent video concentration, which can greatly reduce the amount of computation of video concentration and improve working and case-handling efficiency (as will be further described with reference to Figures 18-20).
Figure 12 shows a video storage server 110' according to another embodiment of the present invention. The parts of Figure 12 labeled the same as in Fig. 2 have similar functions. Figure 12 is another example structure for the intelligent actual combat system 100 of Fig. 1.
In the embodiment of Figure 12, the video storage server includes the memory 202 and a processing module 1204. The memory 202 includes multiple storage units. The processing module 1204 is connected with the memory 202. The processing module 1204 reads the geographical location information of the multiple videos, extracts the longitudes and latitudes of the multiple geographical locations, sorts the multiple longitudes and the multiple latitudes respectively, and determines the multiple storage address numbers according to the sorting results.
More particularly, the processing module 1204 includes a numbering module 1206. The numbering module 1206 determines the first digit of the storage address number according to the longitudes of the multiple geographical locations, wherein if the longitude of the X-th geographical location is smaller than the longitude of the Y-th geographical location, the first digit of the X-th storage address is smaller than the first digit of the Y-th storage address. The numbering module also determines the second digit of the storage address number according to the latitudes of the multiple geographical locations, wherein if the latitude of the N-th geographical location is smaller than the latitude of the M-th geographical location, the second digit of the N-th storage address is smaller than the second digit of the M-th storage address, where X, Y, M and N are positive integers no greater than the total number of the multiple videos.
It can thus be seen that, in the embodiment of Figure 12, the numbers have only two digits. Combined with the embodiment of Fig. 4, the order of the longitudes from small to large is: video monitoring devices 105, 104, 106 and 107. The order of the latitudes from small to large is: video monitoring devices 106, 107, 104 and 105. The numbers of the video monitoring devices 104, 105, 106 and 107 are therefore 23, 14, 31 and 42 respectively. Therefore, as shown in the bitmap schematic diagram of Figure 13, a bitmap 1300 can be obtained from the address information. Using the foregoing embodiment, a summary direction bitmap 1400 can be obtained.
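Reusing the storage_address_numbers idea sketched earlier, with hypothetical coordinates chosen only to reproduce the longitude and latitude orderings stated above, the numbers 23, 14, 31 and 42 follow directly:

```python
# Hypothetical coordinates reproducing the stated orderings:
# longitude order 105 < 104 < 106 < 107, latitude order 106 < 107 < 104 < 105.
cams = {
    104: (104.05, 30.70),
    105: (104.00, 30.75),
    106: (104.10, 30.60),
    107: (104.15, 30.65),
}
lon_rank = {d: i + 1 for i, d in enumerate(sorted(cams, key=lambda d: cams[d][0]))}
lat_rank = {d: i + 1 for i, d in enumerate(sorted(cams, key=lambda d: cams[d][1]))}
numbers = {d: 10 * lon_rank[d] + lat_rank[d] for d in cams}
print(numbers)  # {104: 23, 105: 14, 106: 31, 107: 42}
```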
The advantage is that the relative positions of the source video sequences are recorded through the storage addresses of the memory, and a direction bitmap is generated from the storage address information, so that the basic running track of the target video object can be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a large amount of time and improves the user's work efficiency. When the user needs to study the target object in greater depth, the result of this method provides a basis for further study. In particular, it provides a basis for subsequent video concentration, which can greatly reduce the amount of computation of video concentration and improve working and case-handling efficiency (as will be further described with reference to Figures 18-20). In addition, using longitude and latitude information to determine the storage addresses simplifies the computation (for example, there is no need to judge relative directions) and saves storage space.
Figure 15 shows an intelligent actual combat monitoring method 1500 according to an embodiment of the present invention. In step 1502, multiple videos at multiple geographical locations are acquired. In step 1504, a target video object is selected. In step 1506, the geographical location information of the multiple videos is read. In step 1508, the longitudes and latitudes of the multiple geographical locations are extracted. In step 1510, the multiple longitudes and the multiple latitudes are sorted respectively, and multiple storage address numbers are determined according to the sorting results. In step 1512, the multiple videos are stored, according to their geographical locations, in the multiple storage addresses corresponding to the multiple storage address numbers. In step 1514, the running direction of the target video object is judged according to the multiple storage addresses. In step 1516, the running track of the target video object is marked on a map according to the running direction and the points in time at which the target video object appears in the multiple videos. In step 1518, a summary running-direction diagram of the target video object is displayed according to the running direction, and the map labeled with the running track of the target video object is displayed according to the running track.
Figure 16 shows another intelligent actual combat monitoring method 1512 according to an embodiment of the present invention. Figure 16 is a further explanation of step 1512 in Figure 15. In step 1602, the first digit of the storage address number is determined according to the longitudes of the multiple geographical locations, wherein if the longitude of the X-th geographical location is smaller than the longitude of the Y-th geographical location, the first digit of the X-th storage address is smaller than the first digit of the Y-th storage address. In step 1604, the second digit of the storage address number is determined according to the latitudes of the multiple geographical locations, wherein if the latitude of the N-th geographical location is smaller than the latitude of the M-th geographical location, the second digit of the N-th storage address is smaller than the second digit of the M-th storage address, where X, Y, M and N are positive integers no greater than the total number of the multiple videos.
Figure 17 shows another intelligent actual combat monitoring method 1514 according to an embodiment of the present invention. Figure 17 is a further explanation of step 1514 in Figure 15. In step 1702, the first frame in which the target video object appears is retrieved in each of the multiple videos. In step 1704, the order in time in which the multiple first frames appear in the multiple videos is compared. In step 1706, a bitmap representing the direction of motion of the target video object is generated, and the direction of motion of the target video object is marked on the bitmap according to the time comparison result.
The advantage is that the relative positions of the source video sequences are recorded through the storage addresses of the memory, and a direction bitmap is generated from the storage address information, so that the basic running track of the target video object can be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a large amount of time and improves the user's work efficiency. When the user needs to study the target object in greater depth, the result of this method provides a basis for further study. In particular, it provides a basis for subsequent video concentration, which can greatly reduce the amount of computation of video concentration and improve working and case-handling efficiency (as will be further described with reference to Figures 18-20). In addition, using longitude and latitude information to determine the storage addresses simplifies the computation (for example, there is no need to judge relative directions) and saves storage space.
Figure 18 shows another structure 112' of the video summary retrieval server 112 according to an embodiment of the present invention. Elements in Figure 18 labeled the same as in Fig. 5 have similar functions. In the embodiment of Figure 18, the video summary retrieval server 112' includes a video concentration module 1802. Because of this, an intelligent actual combat system that includes the video summary retrieval server 112' constitutes a video concentration system. The other parts of this video concentration system, apart from the video summary retrieval server 112', can use the relevant structures of Fig. 1 to Figure 17.
The video concentration module 1802 selects, according to the bitmap, the videos that contain the target video object from the multiple videos; acquires a preset quantity of video frames containing the target video object from each of the selected videos, so as to generate multiple video frame groups; and splices the multiple video frame groups according to the running direction shown by the bitmap, so as to form a concentrated video. For instance, in the embodiments of Fig. 6 or Figure 14, the video concentration module 1802 can directly exclude the video monitoring device 107, thereby saving the time of video concentration, improving the efficiency of video concentration, and further speeding up the user's case handling.
In one embodiment, the video concentration module 1802 further includes an acquisition module 1804. The acquisition module 1804 acquires N video frames after the first frame in which the target video object appears in each video, and acquires M video frames before the last frame in which the target video object appears in each video, where M and N are positive integers. In one embodiment, the values of M and N are proportional to the duration for which the target video object appears in each video. The advantage is that, by setting M and N proportional to the duration for which the target video object appears in each video, the acquisition of redundant images can be reduced, the time of video concentration is saved, and the video concentration efficiency is improved.
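A rough sketch of this frame-collection and splicing idea (an illustration only; the function names, the 'ratio' constant and the list-of-frames representation are assumptions, not from the patent):

```python
def collect_frames(frames, first_idx, last_idx, ratio=0.5):
    """Collect a frame group for one video: N frames after the first appearance of
    the target and M frames before its last appearance, with M = N chosen
    proportional to how long the object stays in the video (the 'ratio' value is
    an arbitrary illustration)."""
    dwell = last_idx - first_idx + 1
    n = m = max(1, int(ratio * dwell))
    head = frames[first_idx : min(first_idx + n, last_idx + 1)]
    tail = frames[max(last_idx - m + 1, first_idx) : last_idx + 1]
    return head + tail

def concentrate(frame_groups_in_direction_order):
    """Splice the per-video frame groups in the order given by the direction bitmap."""
    out = []
    for group in frame_groups_in_direction_order:
        out.extend(group)
    return out

# Example with dummy frame labels for two cameras:
video_a = [f"a{i}" for i in range(100)]
video_b = [f"b{i}" for i in range(100)]
groups = [collect_frames(video_b, 5, 15), collect_frames(video_a, 30, 50)]
print(len(concentrate(groups)))  # total frames in the concentrated video
```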
Figure 19 shows a flow chart of a video concentration method 1900 according to an embodiment of the present invention. In step 1902, multiple videos at multiple geographical locations are acquired. In step 1904, a target video object is selected. In step 1906, the multiple videos are stored in multiple storage addresses according to their geographical locations. In step 1908, the running direction of the target video object is judged according to the multiple storage addresses, and a bitmap labeled with the running direction is generated. In step 1910, the videos containing the target video object are selected from the multiple videos according to the bitmap. In step 1912, a preset quantity of video frames containing the target video object is acquired from each of the selected videos, so as to generate multiple video frame groups. In step 1914, the multiple video frame groups are spliced according to the running direction shown by the bitmap, so as to form a concentrated video. Step 1906 can use the method flows of Fig. 8 to Figure 10 or of Figure 16 to Figure 17.
Figure 20 shows a flow chart of another video concentration method 1912 according to an embodiment of the present invention. Figure 20 further describes step 1912 in Figure 19. In step 2002, N video frames are acquired after the first frame in which the target video object appears in each video. In step 2004, M video frames are acquired before the last frame in which the target video object appears in each video, where M and N are positive integers. In one embodiment, the values of M and N are proportional to the duration for which the target video object appears in each video.
The advantage is that the relative positions of the source video sequences are recorded through the storage addresses of the memory, and a direction bitmap is generated from the storage address information, so that the basic running track of the target video object can be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a large amount of time and improves the user's work efficiency. When the user needs to study the target object in greater depth, the result of this method provides a basis for further study. In particular, it provides a basis for subsequent video concentration, which can greatly reduce the amount of computation of video concentration and improve working and case-handling efficiency. In addition, by setting M and N proportional to the duration for which the target video object appears in each video, the acquisition of redundant images can be reduced, the time of video concentration is saved, and the video concentration efficiency is improved.
The specific embodiments and accompanying drawings above are merely typical embodiments of the present invention. Obviously, various additions, modifications and replacements can be made without departing from the spirit and scope of the invention defined by the claims. Those skilled in the art should understand that, in practical applications, the present invention may be varied in form, structure, layout, proportion, material, elements, components and other aspects according to the specific environment and working requirements, without departing from the principles of the invention. Therefore, the embodiments disclosed herein are merely illustrative rather than limiting; the scope of the present invention is defined by the appended claims and their legal equivalents, and is not limited to the foregoing description.

Claims (1)

1. An intelligent actual combat monitoring method, characterized in that the intelligent actual combat monitoring method includes the following steps:
Step 1: acquiring multiple videos at multiple geographical locations;
Step 2: selecting a target video object;
Step 3: reading the geographical location information of the multiple videos;
Step 4: extracting the longitudes and latitudes of the multiple geographical locations;
Step 5: sorting the multiple longitudes and the multiple latitudes respectively, and determining multiple storage address numbers according to the sorting results;
specifically:
determining the first digit of the storage address number according to the longitudes of the multiple geographical locations, wherein
if the longitude of the X-th geographical location is smaller than the longitude of the Y-th geographical location, the first digit of the X-th storage address is smaller than the first digit of the Y-th storage address;
determining the second digit of the storage address number according to the latitudes of the multiple geographical locations, wherein if the latitude of the N-th geographical location is smaller than the latitude of the M-th geographical location, the second digit of the N-th storage address is smaller than the second digit of the M-th storage address, where X, Y, M and N are positive integers no greater than the total number of the multiple videos;
Step 6: storing the multiple videos, according to their geographical locations, in the multiple storage addresses corresponding to the multiple storage address numbers;
Step 7: judging the running direction of the target video object according to the multiple storage addresses;
Step 8: marking the running track of the target video object on a map according to the running direction and the points in time at which the target video object appears in the multiple videos;
specifically:
retrieving, in each of the multiple videos, the first frame in which the target video object appears;
comparing the order in time in which the multiple first frames appear in the multiple videos;
generating a bitmap representing the direction of motion of the target video object; and
marking the direction of motion of the target video object on the bitmap according to the time comparison result;
Step 9: displaying a summary running-direction diagram of the target video object according to the running direction, and displaying the map labeled with the running track of the target video object according to the running track.
CN201410720917.9A 2014-12-02 2014-12-02 A kind of intelligence actual combat monitoring method Active CN105227902B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410720917.9A CN105227902B (en) 2014-12-02 2014-12-02 A kind of intelligence actual combat monitoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201410720917.9A CN105227902B (en) 2014-12-02 2014-12-02 A kind of intelligence actual combat monitoring method

Publications (2)

Publication Number Publication Date
CN105227902A CN105227902A (en) 2016-01-06
CN105227902B true CN105227902B (en) 2018-11-23

Family

ID=54996546

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410720917.9A Active CN105227902B (en) 2014-12-02 2014-12-02 A kind of intelligence actual combat monitoring method

Country Status (1)

Country Link
CN (1) CN105227902B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306880A (en) * 2015-03-17 2016-02-03 四川浩特通信有限公司 Video concentration method
CN106504270B (en) * 2016-11-08 2019-12-20 浙江大华技术股份有限公司 Method and device for displaying target object in video


Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8229163B2 (en) * 2007-08-22 2012-07-24 American Gnc Corporation 4D GIS based virtual reality for moving target prediction
CN103942811A (en) * 2013-01-21 2014-07-23 中国电信股份有限公司 Method and system for determining motion trajectory of characteristic object in distributed and parallel mode

Also Published As

Publication number Publication date
CN105227902A (en) 2016-01-06

Similar Documents

Publication Publication Date Title
US20170185823A1 (en) Apparatus And Method For Image-Based Positioning, Orientation And Situational Awareness
US10592769B2 (en) Searching for images by video
CN102884400B (en) Messaging device, information processing system and program
US10146794B2 (en) System and method for spatial clustering using multiple-resolution grids
Han et al. Real-time global registration for globally consistent rgb-d slam
US20070173956A1 (en) System and method for presenting geo-located objects
Wang et al. A spatial-adaptive sampling procedure for online monitoring of big data streams
CN103679730A (en) Video abstract generating method based on GIS
CN104991924A (en) Method and apparatus for determining address of new supply point
US20160299910A1 (en) Method and system for querying and visualizing satellite data
CN105282496B (en) A kind of method for tracking target video object
CN106708896A (en) ECharts map displaying method and device
CN105227902B (en) A kind of intelligence actual combat monitoring method
Bürki et al. Appearance‐based landmark selection for visual localization
CN104486585A (en) Method and system for managing urban mass surveillance video based on GIS
CN109145225B (en) Data processing method and device
CN105721825B (en) A kind of intelligence actual combat system
CN110378059A (en) A kind of village reutilization planning system
CN105323548B (en) A kind of intelligence actual combat system
CN105323547B (en) A kind of video concentration systems
CN105306880A (en) Video concentration method
Li et al. VisioMap: Lightweight 3-D scene reconstruction toward natural indoor localization
CN113298871B (en) Map generation method, positioning method, system thereof, and computer-readable storage medium
CN105721826B (en) A kind of intelligence actual combat system
CN109325977A (en) The optimal image selection method in target area and system, storage medium, electronic equipment

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant