CN105282496A - Method for tracking target video object - Google Patents

Method for tracking target video object

Info

Publication number
CN105282496A
Authority
CN
China
Prior art keywords
video
memory
memory address
address
geographical position
Prior art date
Legal status
Granted
Application number
CN201410720548.3A
Other languages
Chinese (zh)
Other versions
CN105282496B (en)
Inventor
胡晓芳
Current Assignee
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Original Assignee
SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Application filed by SICHUAN HAOTEL TELECOMMUNICATIONS CO Ltd
Priority to CN201410720548.3A (patent CN105282496B)
Publication of CN105282496A
Application granted
Publication of CN105282496B
Legal status: Active


Landscapes

  • Information Retrieval, Db Structures And Fs Structures Therefor (AREA)

Abstract

The invention discloses a method for tracking a target video object. The method includes the steps of: collecting a plurality of videos from a plurality of geographic positions; selecting a target video object; storing the plurality of videos in a plurality of memory addresses according to the geographic positions of the videos; determining the moving direction of the target video object according to the plurality of memory addresses; marking the moving track of the target video object on a map according to the time points at which the target video object appears in the videos and the moving direction; displaying a summary moving-direction diagram of the target video object according to the moving direction; and displaying the map on which the moving track of the target video object is marked.

Description

Method for tracking a target video object
Technical field
The present invention relates to the communications field, and in particular to a method for tracking a target video object.
Background technology
The intelligent actual-combat platform is built on intelligent video image analysis technology and intelligent video image processing algorithms. Closely aligned with public-security video investigation work, it provides a "systematized, networked, intelligent" application platform for case video analysis. When a public security department handles a case, an existing intelligent actual-combat platform can retrieve video from various locations in a timely manner for officers to consult. However, cameras are widely distributed and numerous, and officers often spend a great deal of time searching for a target object, which reduces the efficiency of case handling.
Summary of the invention
The technical problem to be solved by the present invention is to provide a method for tracking a target video object, so as to improve video processing efficiency and the user's case-handling efficiency.
To solve the above technical problem, the present invention adopts the following technical scheme:
The present invention provides a method for tracking a target video object, characterized in that the method comprises the following steps:
collecting a plurality of videos from a plurality of geographic positions;
selecting a target video object;
storing the plurality of videos in a plurality of memory addresses according to the geographic positions of the plurality of videos;
determining the moving direction of the target video object according to the plurality of memory addresses;
marking the moving track of the target video object on a map according to the time points at which the target video object appears in the plurality of videos and the moving direction;
displaying a summary moving-direction diagram of the target video object according to the moving direction; and
displaying the map on which the moving track of the target video object is marked.
In one embodiment, the step of storing the plurality of videos in the plurality of memory addresses further comprises:
selecting a reference position;
calculating a plurality of distances between the plurality of geographic positions and the reference position;
comparing the plurality of distances; and
determining the plurality of memory addresses according to the plurality of distances.
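As an illustrative sketch of the distance-based steps above (selecting a reference position, computing distances, comparing them, then numbering the sources), the numbering could be computed as follows. The coordinates and the planar-distance approximation are assumptions for illustration only, not part of the specification:

```python
import math

def number_by_distance(positions, reference):
    """Assign numbers 1..N to video sources so that a source closer
    to the reference position gets a smaller number."""
    def dist(p):
        # Planar approximation; a real system might use geodesic distance.
        return math.hypot(p[0] - reference[0], p[1] - reference[1])
    ranked = sorted(positions, key=lambda source: dist(positions[source]))
    return {source: i + 1 for i, source in enumerate(ranked)}

# Hypothetical coordinates reproducing the Fig. 4 ordering (104, 106, 107, 105).
positions = {104: (0.0, 1.0), 106: (2.0, -2.0), 107: (3.0, -3.0), 105: (-4.0, 4.0)}
print(number_by_distance(positions, (0.0, 0.0)))
# {104: 1, 106: 2, 107: 3, 105: 4}
```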
In one embodiment, the step of storing the plurality of videos in the plurality of memory addresses further comprises:
numbering the plurality of videos according to the plurality of distances, wherein a video with a smaller distance value receives a smaller number than a video with a larger distance value;
determining the relative bearing of each of the plurality of geographic positions with respect to the reference position; and
determining the plurality of memory addresses according to the plurality of numbers and the plurality of relative bearings.
In one embodiment, the step of storing the plurality of videos in the plurality of memory addresses further comprises:
when the Nth geographic position is due north of the reference position, storing the Nth video at the memory cell of the memory whose address is (D1+B_N)D2D3D4 (that is, the Nth memory address is (D1+B_N)D2D3D4), where N is less than or equal to the total number of the plurality of videos and B_N is the number assigned to the Nth video by the numbering module;
when the Nth geographic position is due south of the reference position, storing the Nth video at the memory cell of the memory whose address is D1(D2+B_N)D3D4 (that is, the Nth memory address is D1(D2+B_N)D3D4);
when the Nth geographic position is due east of the reference position, storing the Nth video at the memory cell of the memory whose address is D1D2(D3+B_N)D4 (that is, the Nth memory address is D1D2(D3+B_N)D4); and
when the Nth geographic position is due west of the reference position, storing the Nth video at the memory cell of the memory whose address is D1D2D3(D4+B_N) (that is, the Nth memory address is D1D2D3(D4+B_N)).
In one embodiment, the step of storing the plurality of videos in the plurality of memory addresses further comprises:
when the Nth geographic position is northeast of the reference position, storing the Nth video at the memory cell of the memory whose address is (D1+B_N)D2(D3+B_N)D4 (that is, the Nth memory address is (D1+B_N)D2(D3+B_N)D4);
when the Nth geographic position is southwest of the reference position, storing the Nth video at the memory cell of the memory whose address is D1(D2+B_N)D3(D4+B_N) (that is, the Nth memory address is D1(D2+B_N)D3(D4+B_N));
when the Nth geographic position is southeast of the reference position, storing the Nth video at the memory cell of the memory whose address is D1(D2+B_N)(D3+B_N)D4 (that is, the Nth memory address is D1(D2+B_N)(D3+B_N)D4); and
when the Nth geographic position is northwest of the reference position, storing the Nth video at the memory cell of the memory whose address is (D1+B_N)D2D3(D4+B_N) (that is, the Nth memory address is (D1+B_N)D2D3(D4+B_N)).
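The eight bearing cases above can be summarized compactly: each cardinal direction adds the video's number B_N to one of the four base digits D1 to D4, and each diagonal direction adds it to the two digits of its component cardinal directions (the southeast case combining the south and east digits, consistent with the worked example of Fig. 4). The following is an illustrative sketch only; the base digit values d=(1, 2, 3, 4) are assumptions, not values given in the specification:

```python
def memory_address(bearing, b_n, d=(1, 2, 3, 4)):
    """Build the four-digit address D1 D2 D3 D4 for the Nth video by
    adding its number b_n to the digit(s) selected by the relative
    bearing of its source to the reference position."""
    # Which 0-based digit positions each bearing increments:
    bumps = {
        "N": (0,), "S": (1,), "E": (2,), "W": (3,),
        "NE": (0, 2), "SW": (1, 3), "SE": (1, 2), "NW": (0, 3),
    }
    digits = list(d)
    for i in bumps[bearing]:
        digits[i] += b_n
    return "".join(str(x) for x in digits)

# Fig. 4 example: device 104 is due north with number 1 -> (D1+1)D2D3D4.
print(memory_address("N", 1))   # "2234" with the assumed base digits
print(memory_address("SE", 2))  # D1(D2+2)(D3+2)D4 -> "1454"
```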
In one embodiment, the step of displaying the summary moving-direction diagram of the target video object comprises:
retrieving, in each of the plurality of videos, the first frame in which the target video object appears;
comparing the order in time in which the plurality of first frames appear in the plurality of videos; and
generating a bitmap representing the moving direction of the target video object, and marking the moving direction of the target video object on the bitmap according to the result of the time comparison.
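A minimal sketch of the retrieval and time-comparison steps above, assuming per-video detection timestamps of the target object are already available (the object detection itself is outside the scope of this sketch, and the timestamp values are invented for illustration):

```python
def first_appearance_order(detections):
    """Given, for each video, the timestamps (in seconds) of frames in
    which the target object was detected, return the video IDs sorted
    by the time of their first detection. Videos with no detection are
    omitted."""
    firsts = {vid: min(times) for vid, times in detections.items() if times}
    return sorted(firsts, key=firsts.get)

# Hypothetical detections matching the Fig. 6 order (105, then 104, then 106).
detections = {104: [12.0, 13.5], 105: [3.2, 4.0], 106: [20.1], 107: []}
print(first_appearance_order(detections))  # [105, 104, 106]
```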
Compared with the prior art, recording the relative positions of the video sources in the memory addresses of the memory, and generating a direction bitmap from the stored address information, allows the basic moving track of the target video object to be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a great deal of time and improves the user's working efficiency. When the user needs to investigate the target object more deeply, the results of this method provide a basis for further study. In particular, they provide a basis for subsequent video condensation, which can greatly reduce the amount of computation required for condensation and improve working and case-handling efficiency.
Brief description of the drawings
Figure 1 shows an intelligent actual-combat system according to an embodiment of the invention.
Figure 2 shows a video storage server according to an embodiment of the invention.
Figure 3 shows a processor according to an embodiment of the invention.
Figure 4 is a schematic diagram of the distribution of video monitoring devices according to an embodiment of the invention.
Figure 5 shows a video abstract retrieval server according to an embodiment of the invention.
Figure 6 is a schematic diagram of a bitmap according to an embodiment of the invention.
Figure 7 is a flowchart of a method for tracking a target video object according to an embodiment of the invention.
Figure 8 is a flowchart of another method for tracking a target video object according to an embodiment of the invention.
Figure 9 is a flowchart of another method for tracking a target video object according to an embodiment of the invention.
Figure 10 is a flowchart of another method for tracking a target video object according to an embodiment of the invention.
Figure 11 is a flowchart of another method for tracking a target video object according to an embodiment of the invention.
Figure 12 shows a video storage server according to another embodiment of the invention.
Figure 13 is a schematic diagram of a bitmap according to an embodiment of the invention.
Figure 14 is a schematic diagram of a summary direction bitmap according to an embodiment of the invention.
Figure 15 shows an intelligent actual-combat monitoring method according to an embodiment of the invention.
Figure 16 shows another intelligent actual-combat monitoring method according to an embodiment of the invention.
Figure 17 shows another intelligent actual-combat monitoring method according to an embodiment of the invention.
Figure 18 shows another structure of the video abstract retrieval server according to an embodiment of the invention.
Figure 19 is a flowchart of a video condensation method according to an embodiment of the invention.
Figure 20 is a flowchart of another video condensation method according to an embodiment of the invention.
Detailed description
A detailed description of embodiments of the invention is given below. Although the invention is set forth and illustrated in conjunction with certain embodiments, it should be noted that the invention is not limited to these embodiments. On the contrary, modifications of the invention and equivalent replacements are all intended to be encompassed within the claims of the invention.
In addition, numerous specific details are given in the embodiments below in order to describe the invention more fully. Those skilled in the art will understand that the invention can be implemented without these details. In other instances, well-known methods, processes, elements and circuits are not described in detail, so as to highlight the gist of the invention.
Figure 1 shows an intelligent actual-combat system 100 according to an embodiment of the invention. The intelligent actual-combat system 100 comprises a plurality of video monitoring devices 104, 105, 106 and 107. A video monitoring device can be a camera, a Skynet monitor, or any other monitoring device capable of recording video. Although only four video monitoring devices are shown in the embodiment of Fig. 1, those skilled in the art will appreciate that other numbers of video monitoring devices fall within the scope of the invention. The plurality of video monitoring devices 104 to 107 are located at a plurality of geographic positions, respectively, and collect a plurality of videos at those positions.
The intelligent actual-combat system 100 also comprises a client host 102, an intelligent actual-combat platform server 108, a video storage server 110 and a video abstract retrieval server 112. The client host 102 communicates with the plurality of video monitoring devices 104-107 through a network and receives the plurality of videos from them, respectively. In addition, the client host 102 also selects the target video object.
The intelligent actual-combat platform server 108 is connected with the client host 102. The client host 102 uploads the plurality of videos to the platform server 108. The video storage server 110 is connected with the platform server 108. The video storage server 110 copies the plurality of videos from the platform server 108 and, according to the geographic positions of the plurality of videos, stores them in a plurality of memory addresses, respectively.
The video abstract retrieval server 112 is connected with the video storage server 110 and the client host 102. The video abstract retrieval server 112 determines the moving direction of the target video object according to the plurality of memory addresses, and marks the moving track of the target video object on a map according to the time points at which the target video object appears in the plurality of videos and the moving direction. The client host 102 displays on a display screen the summary moving-direction diagram of the target video object according to the moving direction, and displays on the display screen, according to the moving track, the map on which the moving track of the target video object is marked.
Figure 2 shows a video storage server 110 according to an embodiment of the invention. In the embodiment of Fig. 2, the video storage server 110 comprises a memory 202 and a processing module 204. The memory 202 comprises a plurality of memory cells. The processing module 204 is connected with the memory 202. The processing module 204 selects a reference position, calculates the plurality of distances between the plurality of geographic positions and the reference position, compares the plurality of distances, and determines the plurality of memory addresses according to the plurality of distances.
Figure 3 shows a processor 204 according to an embodiment of the invention. The processor 204 comprises a numbering module 302, a bearing determination module 304 and an address determination module 306. The numbering module 302 numbers the plurality of videos according to the plurality of distances, a video with a smaller distance value receiving a smaller number than a video with a larger distance value. The bearing determination module 304 determines the relative bearing of each of the plurality of geographic positions with respect to the reference position.
Figure 4 is a schematic diagram of the distribution of video monitoring devices according to an embodiment of the invention. As shown in Fig. 4, if reference position 402 is selected, then among video monitoring devices 104 to 107, the distances to the reference position 402, from smallest to largest, are those of video monitoring devices 104, 106, 107 and 105. Video monitoring devices 104, 106, 107 and 105 can therefore be assigned the numbers 1, 2, 3 and 4, respectively.
Returning to Fig. 3, the address determination module 306 determines the plurality of memory addresses according to the plurality of numbers and the plurality of relative bearings. Specifically, for the Nth video monitoring device (where N is less than or equal to the total number of the plurality of videos, and B_N is the number assigned to the Nth video by the numbering module): when the Nth geographic position is due north of the reference position 402, the address determination module 306 stores the Nth video at the memory cell of memory 202 whose address is (D1+B_N)D2D3D4, so that the Nth memory address is (D1+B_N)D2D3D4. When the Nth geographic position is due south of the reference position 402, the module stores the Nth video at the memory cell whose address is D1(D2+B_N)D3D4; when it is due east, at the memory cell whose address is D1D2(D3+B_N)D4; and when it is due west, at the memory cell whose address is D1D2D3(D4+B_N).
In addition, when the Nth geographic position is northeast of the reference position 402, the address determination module 306 stores the Nth video at the memory cell of memory 202 whose address is (D1+B_N)D2(D3+B_N)D4. When the Nth geographic position is southwest of the reference position 402, the module stores the Nth video at the memory cell whose address is D1(D2+B_N)D3(D4+B_N); when it is southeast, at the memory cell whose address is D1(D2+B_N)(D3+B_N)D4; and when it is northwest, at the memory cell whose address is (D1+B_N)D2D3(D4+B_N).
Therefore, as shown in Fig. 4, the memory addresses of the videos from video monitoring devices 104, 105, 106 and 107 are (D1+1)D2D3D4, (D1+4)D2D3(D4+4), D1(D2+2)(D3+2)D4 and D1D2(D3+3)D4, respectively.
Figure 5 shows a video abstract retrieval server 112 according to an embodiment of the invention. The video abstract retrieval server 112 comprises a retrieval module 502, a time comparison module 504 and a bitmap marking module 506. The retrieval module 502 retrieves, in each of the plurality of videos, the first frame in which the target video object appears. The time comparison module 504 compares the order in time in which the plurality of first frames appear. The bitmap marking module 506 generates a bitmap representing the moving direction of the target video object, and marks the moving direction of the target video object on the bitmap according to the result of the time comparison.
Figure 6 is a schematic diagram 600 of the bitmap according to an embodiment of the invention. In the embodiment of Fig. 6, the retrieval module 502 finds that the target video object appears in the videos from video monitoring devices 104, 105 and 106, and that the order in time in which their first frames appear is 105, then 104, then 106. Therefore, from the memory address of each video and the retrieval result, the bitmap of Fig. 6 can be drawn, showing the moving direction of the target video object.
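A rough sketch of how such a bitmap could be rendered: each video source occupies a grid cell derived from its memory address, and the sources are marked 1, 2, 3, ... in the order in which the target object first appears, so that reading the digits in order shows the moving direction. The grid coordinates below are assumptions for illustration only:

```python
def direction_bitmap(cells, order, size=5):
    """Render a small text 'bitmap'. cells maps each video source to a
    (row, col) grid cell; order lists sources by first appearance of the
    target object. Each source is marked with its sequence number."""
    grid = [["." for _ in range(size)] for _ in range(size)]
    for seq, source in enumerate(order, start=1):
        r, c = cells[source]
        grid[r][c] = str(seq)
    return "\n".join("".join(row) for row in grid)

# Hypothetical grid cells for devices 105 (northwest), 104 (north) and
# 106 (southeast); the first-appearance order of Fig. 6 is 105, 104, 106.
cells = {104: (0, 2), 105: (1, 0), 106: (3, 4)}
print(direction_bitmap(cells, [105, 104, 106]))
```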
An advantage is that recording the relative positions of the video sources in the memory addresses of the memory, and generating a direction bitmap from the stored address information, allows the basic moving track of the target video object to be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a great deal of time and improves the user's working efficiency. When the user needs to investigate the target object more deeply, the results of this method provide a basis for further study. In particular, they provide a basis for subsequent video condensation, which can greatly reduce the amount of computation required for condensation and improve working and case-handling efficiency (described further with reference to Figs. 18 to 20).
Figure 7 is a flowchart 700 of a method for tracking a target video object according to an embodiment of the invention. In step 702, a plurality of videos are collected from a plurality of geographic positions. In step 704, a target video object is selected. In step 706, the plurality of videos are stored in a plurality of memory addresses according to their geographic positions. In step 708, the moving direction of the target video object is determined according to the plurality of memory addresses. In step 710, the moving track of the target video object is marked on a map according to the time points at which the target video object appears in the videos and the moving direction. In step 712, the summary moving-direction diagram of the target video object is displayed according to the moving direction. In step 714, the map on which the moving track of the target video object is marked is displayed.
Figure 8 is a flowchart 706 of another method for tracking a target video object according to an embodiment of the invention. Figure 8 further illustrates step 706 of Fig. 7. In step 802, a reference position is selected. In step 804, the plurality of distances between the plurality of geographic positions and the reference position are calculated. In step 806, the plurality of distances are compared. In step 808, the plurality of memory addresses are determined according to the plurality of distances.
Figure 9 is a flowchart 808 of another method for tracking a target video object according to an embodiment of the invention. Figure 9 further illustrates step 808 of Fig. 8. In step 902, the plurality of videos are numbered according to the plurality of distances, a video with a smaller distance value receiving a smaller number than a video with a larger distance value. In step 904, the relative bearing of each of the plurality of geographic positions with respect to the reference position is determined. In step 906, the plurality of memory addresses are determined according to the plurality of numbers and the plurality of relative bearings.
Figure 10 is a flowchart 906 of another method for tracking a target video object according to an embodiment of the invention. Figure 10 further illustrates step 906 of Fig. 9.
In step 1002, if the Nth geographic position is due north of the reference position, the method proceeds to step 1003, in which the Nth video is stored at the memory cell of the memory whose address is (D1+B_N)D2D3D4 (that is, the Nth memory address is (D1+B_N)D2D3D4), where N is less than or equal to the total number of the plurality of videos and B_N is the number assigned to the Nth video by the numbering module. Otherwise, the method proceeds to step 1004.
In step 1004, if the Nth geographic position is due south of the reference position, the method proceeds to step 1005, in which the Nth video is stored at the memory cell whose address is D1(D2+B_N)D3D4. Otherwise, the method proceeds to step 1006.
In step 1006, if the Nth geographic position is due east of the reference position, the method proceeds to step 1007, in which the Nth video is stored at the memory cell whose address is D1D2(D3+B_N)D4. Otherwise, the method proceeds to step 1008.
In step 1008, if the Nth geographic position is due west of the reference position, the method proceeds to step 1009, in which the Nth video is stored at the memory cell whose address is D1D2D3(D4+B_N). Otherwise, the method proceeds to step 1010.
In step 1010, if the Nth geographic position is northeast of the reference position, the method proceeds to step 1011, in which the Nth video is stored at the memory cell whose address is (D1+B_N)D2(D3+B_N)D4. Otherwise, the method proceeds to step 1012.
In step 1012, if the Nth geographic position is southwest of the reference position, the method proceeds to step 1013, in which the Nth video is stored at the memory cell whose address is D1(D2+B_N)D3(D4+B_N). Otherwise, the method proceeds to step 1014.
In step 1014, if the Nth geographic position is southeast of the reference position, the method proceeds to step 1015, in which the Nth video is stored at the memory cell whose address is D1(D2+B_N)(D3+B_N)D4. Otherwise, the method proceeds to step 1016.
In step 1016, the Nth geographic position can be concluded to be northwest of the reference position, and the method proceeds to step 1018, in which the Nth video is stored at the memory cell whose address is (D1+B_N)D2D3(D4+B_N).
Figure 11 is a flowchart 710 of another method for tracking a target video object according to an embodiment of the invention. Figure 11 further illustrates step 710 of Fig. 7.
In step 1102, the first frame in which the target video object appears is retrieved in each of the plurality of videos. In step 1104, the order in time in which the plurality of first frames appear in the plurality of videos is compared. In step 1106, a bitmap representing the moving direction of the target video object is generated, and the moving direction of the target video object is marked on the bitmap according to the result of the time comparison.
An advantage is that recording the relative positions of the video sources in the memory addresses of the memory, and generating a direction bitmap from the stored address information, allows the basic moving track of the target video object to be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a great deal of time and improves the user's working efficiency. When the user needs to investigate the target object more deeply, the results of this method provide a basis for further study. In particular, they provide a basis for subsequent video condensation, which can greatly reduce the amount of computation required for condensation and improve working and case-handling efficiency (described further with reference to Figs. 18 to 20).
Figure 12 shows a video storage server 110' according to another embodiment of the invention. Parts of Fig. 12 with the same labels as in Fig. 2 have similar functions. Figure 12 shows another example structure for the intelligent actual-combat system 100 of Fig. 1.
In the embodiment of Fig. 12, the video storage server comprises the memory 202 and a processing module 1204. The memory 202 comprises a plurality of memory cells. The processing module 1204 is connected with the memory 202. The processing module 1204 reads the geographic position information of the plurality of videos, extracts the longitude and latitude of each of the plurality of geographic positions, sorts the plurality of longitudes and the plurality of latitudes respectively, and determines the plurality of memory address numbers according to the sorting results.
More particularly, the processing module 1204 comprises a numbering module 1206. The numbering module 1206 determines the first digit of each memory address number according to the longitudes of the plurality of geographic positions, such that if the longitude of the Xth geographic position is less than the longitude of the Yth geographic position, the first digit of the Xth memory address is less than the first digit of the Yth memory address. The numbering module also determines the second digit of each memory address number according to the latitudes of the plurality of geographic positions, such that if the latitude of the Nth geographic position is less than the latitude of the Mth geographic position, the second digit of the Nth memory address is less than the second digit of the Mth memory address, where X, Y, M and N are positive integers less than or equal to the total number of the plurality of videos.
It can thus be seen that in the embodiment of Fig. 12 each number has only two digits. Taking the layout of Fig. 4, the longitude order from smallest to largest is video monitoring devices 105, 104, 106 and 107, and the latitude order from smallest to largest is video monitoring devices 106, 107, 104 and 105. The numbers of video monitoring devices 104, 105, 106 and 107 are therefore 23, 14, 31 and 42, respectively. Consequently, as shown in the bitmap schematic of Fig. 13, bitmap 1300 can be obtained from the address information, and, using the foregoing embodiments, the summary direction bitmap 1400 of Fig. 14 can be drawn.
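The two-digit numbering of this embodiment can be sketched as follows: the first digit is the source's rank by longitude and the second its rank by latitude, both ascending. The coordinate values are assumptions chosen only to reproduce the relative layout of Fig. 4:

```python
def lonlat_numbers(coords):
    """Number each video source with two digits: the first is its
    1-based rank by longitude (ascending), the second its 1-based rank
    by latitude (ascending)."""
    lon_rank = {s: i + 1 for i, s in enumerate(sorted(coords, key=lambda s: coords[s][0]))}
    lat_rank = {s: i + 1 for i, s in enumerate(sorted(coords, key=lambda s: coords[s][1]))}
    return {s: 10 * lon_rank[s] + lat_rank[s] for s in coords}

# Hypothetical (longitude, latitude) pairs: 105 lies furthest west and
# north, 106 and 107 lie east and south, matching the Fig. 4 layout.
coords = {104: (104.06, 30.68), 105: (104.02, 30.70),
          106: (104.08, 30.64), 107: (104.10, 30.66)}
print(lonlat_numbers(coords))  # {104: 23, 105: 14, 106: 31, 107: 42}
```

This reproduces the numbers 23, 14, 31 and 42 stated for devices 104 to 107 in the text above.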
An advantage is that recording the relative positions of the video sources in the memory addresses of the memory, and generating a direction bitmap from the stored address information, allows the basic moving track of the target video object to be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a great deal of time and improves the user's working efficiency. When the user needs to investigate the target object more deeply, the results of this method provide a basis for further study. In particular, they provide a basis for subsequent video condensation, which can greatly reduce the amount of computation required for condensation and improve working and case-handling efficiency (described further with reference to Figs. 18 to 20). In addition, using latitude and longitude information to determine the memory addresses simplifies the computation (for example, no relative bearings need to be determined) and saves memory space.
Figure 15 shows an intelligent actual-combat monitoring method 1500 according to an embodiment of the invention. In step 1502, a plurality of videos are collected from a plurality of geographic positions. In step 1504, a target video object is selected. In step 1506, the geographic position information of the plurality of videos is read. In step 1508, the longitude and latitude of each of the plurality of geographic positions are extracted. In step 1510, the plurality of longitudes and the plurality of latitudes are sorted respectively, and a plurality of memory address numbers are determined according to the sorting results. In step 1512, the plurality of videos are stored, according to their geographic positions, in the plurality of memory addresses corresponding to the memory address numbers. In step 1514, the moving direction of the target video object is determined according to the plurality of memory addresses. In step 1516, the moving track of the target video object is marked on a map according to the time points at which the target video object appears in the videos and the moving direction. In step 1518, the summary moving-direction diagram of the target video object is displayed according to the moving direction, and the map on which the moving track of the target video object is marked is displayed.
Figure 16 shows another intelligent actual-combat monitoring method 1512 according to an embodiment of the invention. Figure 16 further illustrates step 1512 in Figure 15. In step 1602, the first digit of each memory address number is determined according to the longitudes of the multiple geographic positions, wherein, if the longitude of the X-th geographic position is less than the longitude of the Y-th geographic position, the first digit value of the X-th memory address is less than the first digit value of the Y-th memory address. In step 1604, the second digit of each memory address number is determined according to the latitudes of the multiple geographic positions, wherein, if the latitude value of the N-th geographic position is less than the latitude value of the M-th geographic position, the second digit value of the N-th memory address is less than the second digit value of the M-th memory address, where X, Y, M and N are positive integers less than the total number of the multiple videos.
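For illustration only (not part of the claimed method), the ranking rule of steps 1602 and 1604 can be sketched in Python as follows; the function name, data layout, and example coordinates are assumptions, and distinct coordinate values are assumed:

```python
from typing import List, Tuple

def address_digits(positions: List[Tuple[float, float]]) -> List[Tuple[int, int]]:
    """Assign each (longitude, latitude) source a pair of address digits:
    the first digit is the rank of its longitude among all sources, the
    second the rank of its latitude, so a smaller longitude or latitude
    yields a smaller address digit (steps 1602 and 1604)."""
    lons = sorted(p[0] for p in positions)
    lats = sorted(p[1] for p in positions)
    return [(lons.index(lon), lats.index(lat)) for lon, lat in positions]

# Three hypothetical camera positions (longitude, latitude):
cams = [(104.06, 30.67), (104.01, 30.70), (104.10, 30.65)]
print(address_digits(cams))  # [(1, 1), (0, 2), (2, 0)]
```

The westernmost source receives first digit 0 and the southernmost source receives second digit 0, so the address digits encode the relative positions of the video sources.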
Figure 17 shows another intelligent actual-combat monitoring method 1514 according to an embodiment of the invention. Figure 17 further illustrates step 1514 in Figure 15. In step 1702, the first frame in which the target video object appears is retrieved in each of the multiple videos. In step 1704, the chronological order in which the multiple first frames of the multiple videos appear is compared. In step 1706, a bitmap representing the motion direction of the target video object is generated, and the motion direction of the target video object is marked on the bitmap according to the time comparison result.
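Steps 1702 to 1706 amount to ordering the video sources by the time at which the target first appears in each feed. A minimal illustrative sketch (camera identifiers and timestamps are hypothetical, and the drawing of the bitmap itself is omitted):

```python
def running_order(first_seen: dict) -> list:
    """Sort camera ids by the timestamp of the target's first appearance
    (step 1704); the resulting sequence is the coarse running direction
    marked on the bitmap (step 1706)."""
    return sorted(first_seen, key=first_seen.get)

# The target enters camera 'B' first, then 'A', then 'C':
first_seen = {'A': 12.5, 'B': 3.0, 'C': 40.0}
print(running_order(first_seen))  # ['B', 'A', 'C']
```

Combined with the address digits of Figure 16, this ordering gives the direction arrows to mark on the bitmap.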
An advantage is that the memory addresses of the memory record the relative positions of the video sources, and a direction bitmap is generated from the memory address information, so that the basic running trajectory of the target video object can be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a great deal of time and improves the user's work efficiency. When the user needs to study the target object in more depth, the result of this method provides a basis for that deeper study. In particular, it provides a basis for subsequent video concentration, which can greatly reduce the computational load of video concentration and improve work and case-handling efficiency (described further with reference to Figures 18 to 20). In addition, using latitude and longitude information to determine the memory addresses simplifies the calculation (for example, there is no need to judge relative bearings) and saves memory space.
Figure 18 shows another structure 112' of the video summary retrieval server 112 according to an embodiment of the invention. Elements in Figure 18 with the same labels as in Figure 4 have similar functions. In the embodiment of Figure 18, the video summary retrieval server 112' comprises a video concentration module 1802. As a result, the intelligent actual-combat system comprising the video summary retrieval server 112' constitutes a video concentration system. In this video concentration system, all parts and structures other than the video summary retrieval server 112' may adopt the corresponding structures of Figures 1 to 17.
The video concentration module 1802 selects, according to the bitmap, the videos among the multiple videos that contain the target video object; gathers a predetermined number of video frames containing the target video object from each of the selected videos to produce multiple video frame groups; and splices the multiple video frame groups according to the running direction shown by the bitmap to form a concentrated video. As in the embodiments of Figure 6 or Figure 14, the video concentration module 1802 can directly exclude the video monitoring device 107, thereby saving video concentration time, improving video concentration efficiency, and further accelerating the user's case handling.
In one embodiment, the video concentration module 1802 also comprises an acquisition module 1804. The acquisition module 1804 gathers N video frames after the first frame in which the target video object appears in each video, and gathers M video frames before the last frame in which the target video object appears in each video, where M and N are positive integers. In one embodiment, the values of M and N are proportional to the duration for which the target video object appears in each video. An advantage is that, by setting M and N proportional to the duration for which the target video object appears in each video, the collection of redundant images can be reduced, the time required for video concentration is saved, and the video concentration efficiency is improved.
Figure 19 is a flow chart of a video concentration method 1900 according to an embodiment of the invention. In step 1902, multiple videos of multiple geographic positions are collected. In step 1904, a target video object is selected. In step 1906, the multiple videos are stored in multiple memory addresses respectively according to their geographic positions. In step 1908, the running direction of the target video object is judged according to the multiple memory addresses, and a bitmap marked with the running direction is generated. In step 1910, the videos containing the target video object are selected from the multiple videos according to the bitmap. In step 1912, a predetermined number of video frames containing the target video object are gathered from each of the selected videos to produce multiple video frame groups. In step 1914, the multiple video frame groups are spliced according to the running direction shown by the bitmap to form a concentrated video. Step 1906 may adopt the method flows of Figures 8 to 10 or Figures 16 to 17.
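Steps 1910 and 1914 can be sketched as follows. This is an illustrative sketch only: frame groups are modeled as plain lists and the bitmap's running-direction order as a list of camera ids, neither of which is a representation the patent specifies:

```python
def concentrate(frame_groups: dict, bitmap_order: list) -> list:
    """Splice the per-video frame groups in the running-direction order
    shown by the bitmap, skipping feeds in which the target never appears."""
    concentrated = []
    for cam in bitmap_order:
        group = frame_groups.get(cam, [])
        if group:                       # step 1910: keep only videos containing the target
            concentrated.extend(group)  # step 1914: splice in running order
    return concentrated

# Camera 'C' never captured the target, so it is excluded from the result.
groups = {'B': ['b1', 'b2'], 'A': ['a1'], 'C': []}
print(concentrate(groups, ['B', 'A', 'C']))  # ['b1', 'b2', 'a1']
```

Because the splice order follows the bitmap, the concentrated video replays the target's movement in the direction it actually traveled.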
Figure 20 is a flow chart of another video concentration method 1912 according to an embodiment of the invention. Figure 20 further describes step 1912 in Figure 19. In step 2002, N video frames are gathered after the first frame in which the target video object appears in each video. In step 2004, M video frames are gathered before the last frame in which the target video object appears in each video, where M and N are positive integers. In one embodiment, the values of M and N are proportional to the duration for which the target video object appears in each video.
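The gathering rule of steps 2002 and 2004, with M and N proportional to the appearance duration as in the embodiment above, can be sketched as follows; the proportionality constant `scale` and the frame-index representation are assumptions for illustration:

```python
def gather_frames(first: int, last: int, scale: float = 0.25) -> list:
    """Gather N frames after the target's first appearance (step 2002) and
    M frames before its last appearance (step 2004), with M = N chosen
    proportional to the appearance duration."""
    n = max(1, int((last - first) * scale))      # proportional to duration
    head = list(range(first, min(first + n, last) + 1))
    tail = list(range(max(last - n, first), last + 1))
    return sorted(set(head + tail))              # merged, duplicates removed

# Target visible from frame 100 to frame 180: gather 20 frames at each end,
# skipping the redundant middle of the appearance interval.
print(gather_frames(100, 180))
```

For long appearances the middle frames are skipped, which is how the proportional choice of M and N reduces the collection of redundant images.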
An advantage is that the memory addresses of the memory record the relative positions of the video sources, and a direction bitmap is generated from the memory address information, so that the basic running trajectory of the target video object can be displayed effectively and intuitively. When the user only needs basic direction information, this method saves a great deal of time and improves the user's work efficiency. When the user needs to study the target object in more depth, the result of this method provides a basis for that deeper study. In particular, it provides a basis for subsequent video concentration, which can greatly reduce the computational load of video concentration and improve work and case-handling efficiency. In addition, by setting M and N proportional to the duration for which the target video object appears in each video, the collection of redundant images can be reduced, the time required for video concentration is saved, and the video concentration efficiency is improved.
The above embodiments and accompanying drawings are only typical embodiments of the invention. Obviously, various additions, modifications, and substitutions may be made without departing from the spirit and scope of the invention as defined by the appended claims. Those skilled in the art should appreciate that, in practical applications, the invention may vary in form, structure, arrangement, proportion, material, element, assembly, and other aspects according to the specific environment and operating requirements without deviating from the principles of the invention. Therefore, the embodiments disclosed herein are illustrative rather than restrictive, and the scope of the invention is defined by the appended claims and their legal equivalents rather than by the foregoing description.

Claims (6)

1. A method for tracking a target video object, characterized in that the method comprises the following steps:
collecting multiple videos of multiple geographic positions;
selecting a target video object;
storing the multiple videos in multiple memory addresses respectively according to the geographic positions of the multiple videos;
judging the running direction of the target video object according to the multiple memory addresses;
marking the running trajectory of the target video object on a map according to the time points at which the target video object appears in the multiple videos and the running direction;
displaying a summary running-direction diagram of the target video object according to the running direction; and
displaying the map marked with the running trajectory of the target video object according to the running trajectory.
2. The method for tracking a target video object according to claim 1, characterized in that the step of storing the multiple videos in multiple memory addresses respectively further comprises:
selecting a reference position;
calculating multiple distances between the multiple geographic positions and the reference position;
comparing the multiple distances; and
judging the multiple memory addresses according to the multiple distances.
3. The method for tracking a target video object according to claim 2, characterized in that the step of storing the multiple videos in multiple memory addresses respectively further comprises:
numbering the multiple videos respectively according to the multiple distances, wherein the number of a video with a smaller distance value is less than the number of a video with a larger distance value;
judging the relative bearings of the multiple geographic positions with respect to the reference position respectively; and
judging the multiple memory addresses according to the multiple numbers and the multiple relative bearings.
4. The method for tracking a target video object according to claim 3, characterized in that the step of storing the multiple videos in multiple memory addresses respectively further comprises:
when the N-th geographic position is due north of the reference position, storing the N-th video in the memory cell at address (D1+BN)D2D3D4 in the memory (that is, the N-th memory address is (D1+BN)D2D3D4), wherein N is less than or equal to the total number of the multiple videos, and BN is the number assigned to the N-th video in the numbering step;
when the N-th geographic position is due south of the reference position, storing the N-th video in the memory cell at address D1(D2+BN)D3D4 in the memory (that is, the N-th memory address is D1(D2+BN)D3D4);
when the N-th geographic position is due east of the reference position, storing the N-th video in the memory cell at address D1D2(D3+BN)D4 in the memory (that is, the N-th memory address is D1D2(D3+BN)D4); and
when the N-th geographic position is due west of the reference position, storing the N-th video in the memory cell at address D1D2D3(D4+BN) in the memory (that is, the N-th memory address is D1D2D3(D4+BN)).
5. The method for tracking a target video object according to claim 4, characterized in that the step of storing the multiple videos in multiple memory addresses respectively further comprises:
when the N-th geographic position is northeast of the reference position, storing the N-th video in the memory cell at address (D1+BN)D2(D3+BN)D4 in the memory (that is, the N-th memory address is (D1+BN)D2(D3+BN)D4);
when the N-th geographic position is southwest of the reference position, storing the N-th video in the memory cell at address D1(D2+BN)D3(D4+BN) in the memory (that is, the N-th memory address is D1(D2+BN)D3(D4+BN));
when the N-th geographic position is southeast of the reference position, storing the N-th video in the memory cell at address D1(D2+BN)(D3+BN)D4 in the memory (that is, the N-th memory address is D1(D2+BN)(D3+BN)D4); and
when the N-th geographic position is northwest of the reference position, storing the N-th video in the memory cell at address (D1+BN)D2D3(D4+BN) in the memory (that is, the N-th memory address is (D1+BN)D2D3(D4+BN)).
6. The method for tracking a target video object according to claim 5, characterized in that the step of displaying the summary running-direction diagram of the target video object comprises:
retrieving the first frame in which the target video object appears in each of the multiple videos;
comparing the chronological order in which the multiple first frames of the multiple videos appear; and
generating a bitmap representing the motion direction of the target video object, and marking the motion direction of the target video object on the bitmap according to the time comparison result.
CN201410720548.3A 2014-12-02 2014-12-02 A method for tracking a target video object Active CN105282496B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410720548.3A CN105282496B (en) A method for tracking a target video object

Publications (2)

Publication Number Publication Date
CN105282496A true CN105282496A (en) 2016-01-27
CN105282496B CN105282496B (en) 2018-03-23

Family

ID=55150718

Country Status (1)

Country Link
CN (1) CN105282496B (en)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105306880A (en) * 2015-03-17 2016-02-03 四川浩特通信有限公司 Video concentration method
CN111801936A (en) * 2018-02-22 2020-10-20 爱峰株式会社 Doorbell system, location notification system and intercom system
CN112509000A (en) * 2020-11-20 2021-03-16 合肥市卓迩无人机科技服务有限责任公司 Moving target tracking algorithm for multi-path 4K quasi-real-time spliced video

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005309777A (en) * 2004-04-21 2005-11-04 Toyota Motor Corp Image processing method
CN101339664A (en) * 2008-08-27 2009-01-07 北京中星微电子有限公司 Object tracking method and system
US20090310820A1 (en) * 2006-06-16 2009-12-17 Bae Systems Plc Improvements relating to target tracking
CN103440668A (en) * 2013-08-30 2013-12-11 中国科学院信息工程研究所 Method and device for tracing online video target

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
王欢, 任明武, 杨静宇: "A new infrared target tracking algorithm based on the SMOG model", Journal of Infrared and Millimeter Waves *
罗小波, 范红旗, 宋志勇, 付强: "Passive target tracking with intermittent measurement based on random finite set", Journal of Central South University (English edition) *

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant