CN103942811A - Method and system for determining motion trajectory of characteristic object in distributed and parallel mode - Google Patents
- Publication number
- CN103942811A CN103942811A CN201310020404.2A CN201310020404A CN103942811A CN 103942811 A CN103942811 A CN 103942811A CN 201310020404 A CN201310020404 A CN 201310020404A CN 103942811 A CN103942811 A CN 103942811A
- Authority
- CN
- China
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
The invention relates to a method and system for determining the motion trajectory of a feature target in a distributed, parallel manner. The method comprises: analyzing in parallel, based on a distributed architecture, the massive video collected by video acquisition devices installed at different geographic locations, so as to recognize a feature target whose trajectory is to be determined; recognizing in parallel, based on the timestamps in the recognized video frames containing the feature target and the position information of the video acquisition devices that collected the corresponding frames, the times at which the feature target appeared at each acquisition location, and associating the feature target with its times and places of appearance; and connecting in sequence, based on the statistically obtained association between the feature target's times of appearance and the geographic locations of its appearance, those locations according to the order of the same feature target's times of appearance, thereby forming the motion trajectory of that feature target. With the method and system, a complete motion trajectory of a feature target can be formed automatically.
Description
Technical field
The disclosure relates to the field of cloud computing, and in particular to a method and system for determining the motion trajectory of a feature target in a distributed, parallel manner.
Background technology
Today's monitoring systems of all kinds produce massive amounts of surveillance video. How to quickly query a specified target within that massive video, and quickly recognize the target's complete activity track, is a problem for which the prior art has so far provided no complete solution.
When a user needs to determine the activity of a certain target within, say, 2,000 Tbit of surveillance video, the usual solution today is to run an analysis and return thousands of result records or pictures to the user. Faced with such a large batch of records or pictures, the user can do little but sigh at the vast "ocean of result data".
Summary of the invention
In view of at least one of the above problems, the disclosure proposes a new technical solution.
In one aspect, the disclosure provides a method for determining the motion trajectory of a feature target in a distributed, parallel manner, which can automatically form the complete motion trajectory of a feature target.
In another aspect, the disclosure provides a system for determining the motion trajectory of a feature target in a distributed, parallel manner, which can likewise automatically form the complete motion trajectory of a feature target.
According to the disclosure, a method for determining the motion trajectory of a feature target in a distributed, parallel manner is provided, comprising:
analyzing in parallel, based on a distributed architecture, the massive video collected by video acquisition devices installed at different geographic locations, so as to recognize a feature target whose trajectory is to be determined;
recognizing in parallel, based on the timestamps in the recognized video frames that contain the feature target and the position information of the video acquisition devices that collected the corresponding frames, the times at which the feature target appeared at each acquisition location, and associating the feature target with its times and places of appearance;
connecting in sequence, based on the statistically obtained association between the feature target's times of appearance and the geographic locations of its appearance, those locations according to the order of the same feature target's times of appearance, so as to form the motion trajectory of that feature target.
In some embodiments of the disclosure, the step of analyzing in parallel the massive video collected by the video acquisition devices installed at different geographic locations, so as to recognize the feature target whose trajectory is to be determined, comprises:
resolving each video file in the massive video into video pictures in units of frames;
performing image analysis on each video picture to recognize the feature target.
In some embodiments of the disclosure, the position information of the video acquisition device that collected the corresponding video frame comprises the longitude and latitude of the device's location.
In some embodiments of the disclosure, there are one or more feature targets whose trajectories are to be determined.
In some embodiments of the disclosure, the method further comprises:
when there are multiple feature targets whose trajectories are to be determined, grouping the recognized times of appearance and geographic-location information of the different feature targets;
within each group, arranging the geographic locations of appearance in the order of the same feature target's times of appearance.
According to the disclosure, a system for determining the motion trajectory of a feature target in a distributed, parallel manner is also provided, comprising:
a plurality of video acquisition devices, installed at different geographic locations, for collecting the feature target at set places;
a plurality of video analysis devices, for analyzing in parallel, based on a distributed architecture, the massive video collected by the plurality of video acquisition devices installed at different geographic locations, so as to recognize a feature target whose trajectory is to be determined;
a plurality of information association devices, for recognizing in parallel, based on the timestamps in the recognized video frames containing the feature target and the position information of the video acquisition devices that collected the corresponding frames, the times at which the feature target appeared at each acquisition location, and for associating the feature target with its times and places of appearance;
a motion trajectory drawing device, for connecting in sequence, based on the association between the feature target's times and places of appearance counted by the plurality of information association devices, the geographic locations of appearance in the order of the same feature target's times of appearance, so as to form the motion trajectory of that feature target.
In some embodiments of the disclosure, each of the plurality of video analysis devices comprises:
a video resolution unit, for resolving each video file in the massive video into video pictures in units of frames;
an image analysis unit, for performing image analysis on each video picture to recognize the feature target.
In some embodiments of the disclosure, the position information of the video acquisition device that collected the corresponding video frame comprises the longitude and latitude of the device's location.
In some embodiments of the disclosure, there are one or more feature targets whose trajectories are to be determined.
In some embodiments of the disclosure, the system further comprises:
a feature target grouping device, which, when there are multiple feature targets whose trajectories are to be determined, groups the recognized times of appearance and geographic-location information of the different feature targets and, within each group, arranges the geographic locations of appearance in the order of the same feature target's times of appearance.
With the technical solution of the disclosure, because multiple nodes process in parallel the massive video collected by the video acquisition devices, processing capacity and processing efficiency are improved. Based on the association between the feature target's times and places of appearance counted at each node, the places are connected in the order of the times of appearance, forming the motion trajectory of the feature target and making it convenient for an analyst to monitor the target's activity.
Description of the drawings
The drawings described here are provided for further understanding of the disclosure and form part of the application. In the drawings:
Fig. 1 is a schematic flowchart of a method for determining the motion trajectory of a feature target in a distributed, parallel manner according to an embodiment of the disclosure.
Fig. 2 is a schematic flowchart of the method for determining the motion trajectory of a feature target in a distributed, parallel manner according to another embodiment of the disclosure.
Fig. 3 is a schematic diagram of the motion trajectory of an automobile with a specified licence plate, drawn using the technical solution of the disclosure.
Fig. 4 is a schematic structural diagram of a system for determining the motion trajectory of a feature target in a distributed, parallel manner according to an embodiment of the disclosure.
Fig. 5 is a schematic structural diagram of the system for determining the motion trajectory of a feature target in a distributed, parallel manner according to another embodiment of the disclosure.
Fig. 6 is a schematic structural diagram of the system for determining the motion trajectory of a feature target in a distributed, parallel manner according to a further embodiment of the disclosure.
Detailed description
The disclosure is described below with reference to the drawings. Note that the following description is merely explanatory and exemplary in nature and in no way limits the disclosure or its application or use. Unless otherwise stated, the relative arrangement of components and steps and the numerical expressions and values set forth in the embodiments do not limit the scope of the disclosure. In addition, techniques, methods, and apparatus well known to those skilled in the art may not be discussed in detail but are, where appropriate, intended to form part of the specification.
The following embodiments of the disclosure perform parallel video data analysis on massive video files: the "time of appearance" of a feature target and the "geographic coordinate" information of each feature target's appearance are recognized in parallel, and the record information obtained from the parallel video analysis automatically forms the target's complete activity track. Intelligent tracking of the feature target is thereby achieved, making it convenient for an analyst to monitor the target's activity.
Fig. 1 is a schematic flowchart of a method for determining the motion trajectory of a feature target in a distributed, parallel manner according to an embodiment of the disclosure.
As shown in Fig. 1, this embodiment can comprise the following steps:
S102: based on a distributed architecture, analyze in parallel the massive video collected by video acquisition devices installed at different geographic locations, so as to recognize a feature target whose motion trajectory is to be determined.
Specifically, multiple video acquisition devices can be deployed at different spots within the feature target's range of movement, so that the target's motion trajectory is tracked completely. These devices collect video images in real time at a set sampling frequency, and the collected images are synchronized to the video analysis devices connected to the corresponding acquisition devices for data analysis.
Because the processing is parallel, each video analysis device also processes its video images in parallel, with no need to wait, as in the prior art, for all data to be collected before unified processing.
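Assuming one analysis node per camera, this per-node parallelism can be sketched with Python's standard `concurrent.futures`; the camera ids, frame contents, and plate string below are illustrative inventions, not values from the patent:

```python
from concurrent.futures import ThreadPoolExecutor

def analyze_stream(stream):
    """Stand-in for one video analysis node: scan a single camera's
    frames for the wanted plate and return (camera_id, frame_index)
    hits. Each node works only on its own camera's data."""
    cam_id, frames, wanted = stream
    return [(cam_id, i) for i, plate in enumerate(frames) if plate == wanted]

def parallel_analyze(streams):
    """Run every node concurrently; no global barrier is needed before
    processing starts, mirroring the per-device parallelism above."""
    with ThreadPoolExecutor(max_workers=max(1, len(streams))) as pool:
        per_node = pool.map(analyze_stream, streams)  # preserves input order
    return [hit for hits in per_node for hit in hits]
```

Because `Executor.map` yields results in input order, the flattened hit list is deterministic even though the nodes run concurrently.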
The key to recognizing a feature target is extracting the features of each target in the object to be recognized. In general, feature extraction focuses on a target's physical characteristics, including its shape features, grayscale-distribution features, motion features, image-sequence features, and so on. The chosen features should remain invariant under scaling, rotation, translation, and similar transformations, while still giving different objects clearly distinguishable feature values. This embodiment may adopt existing image recognition algorithms such as the correlation method, template matching, moment invariants, or the projection method to recognize the feature target whose motion trajectory is to be determined. In addition, recognition based on invariant target features may be used, including NMI (Normalized Moment of Inertia) feature recognition, invariant-moment feature recognition, and ratio-feature recognition. NMI recognition takes an image's normalized moment of inertia as an invariant feature for target recognition; invariant-moment recognition matches targets by computing an image's seven invariant moments; ratio-feature recognition extracts an image's shape factor and uses it to recognize the target.
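As one concrete instance of the invariant features listed above, a minimal normalized-moment-of-inertia computation might look as follows. This is a sketch only: the patent names NMI without fixing an implementation, so the formula used here (root moment of inertia about the intensity centroid, divided by total intensity) is the commonly cited one, and the toy images are invented:

```python
import math

def nmi(image):
    """Normalized moment of inertia of a grayscale image given as a
    list of rows of pixel intensities. The value is unchanged when the
    pattern is translated inside the frame, which is what makes it
    usable for matching the same target across video frames."""
    mass = sum(v for row in image for v in row)
    if mass == 0:
        return 0.0
    # intensity centroid (row index ic, column index jc)
    ic = sum(i * v for i, row in enumerate(image) for v in row) / mass
    jc = sum(j * v for row in image for j, v in enumerate(row)) / mass
    # moment of inertia about the centroid
    inertia = sum(((i - ic) ** 2 + (j - jc) ** 2) * v
                  for i, row in enumerate(image)
                  for j, v in enumerate(row))
    return math.sqrt(inertia) / mass
```

Shifting a 2x2 block of intensity anywhere inside a larger empty frame leaves the NMI unchanged, which is the invariance property the embodiment relies on.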
S104: based on the timestamps in the recognized video frames containing the feature target and the position information of the video acquisition devices that collected the corresponding frames, recognize in parallel the times at which the feature target appeared at each acquisition location, and associate the feature target with its times and places of appearance.
Because every video frame carries the timestamp at which it was collected, the time of appearance of the feature target contained in that frame can be known. And because each video acquisition device is installed at a fixed position, all video images collected by the same device share the same geographic-location information.
When a video acquisition device transmits its collected video images to a video analysis device, it also carries the acquisition device's identification information, so that the geographic location of the received video images can be known from that identification.
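A minimal sketch of this camera-id-to-location lookup, with a hypothetical pre-stored registry (the ids and coordinates below are invented for illustration):

```python
# Pre-stored registry of acquisition-device positions (illustrative values):
# each camera id maps to a (longitude, latitude) pair.
CAMERA_POSITIONS = {
    "003": (121.47, 31.23),
    "006": (121.51, 31.25),
}

def locate(frame_record):
    """Attach the capture location to a recognized frame record. The
    frame itself carries only a timestamp and the id of the camera that
    produced it, so the geographic position comes from the registry."""
    lon, lat = CAMERA_POSITIONS[frame_record["camera_id"]]
    return {**frame_record, "lon": lon, "lat": lat}
```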
S106: based on the counted association between the feature target's times of appearance and the geographic locations of its appearance, connect the locations in sequence according to the order of the same feature target's times of appearance, so as to form the motion trajectory of that feature target.
In this embodiment, because multiple nodes process in parallel the massive video collected by the video acquisition devices, processing capacity and efficiency are improved; based on the association between the feature target's times and places of appearance counted at each node, the places are connected in the order of the times of appearance, forming the motion trajectory of the feature target and making it convenient for an analyst to monitor the target's activity.
In step S102, the step of analyzing in parallel the massive video collected by the video acquisition devices installed at different geographic locations, so as to recognize the feature target whose trajectory is to be determined, can comprise:
resolving each video file in the massive video into video pictures in units of frames;
performing image analysis on each video picture to recognize the feature target.
Specifically, the massive video is first resolved into frame-by-frame pictures, and image analysis is then performed on each picture: for example, the licence-plate region or face region in the picture is located first, and the plate number, characters, or facial features are then recognized within that region.
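The per-file resolve-then-recognize loop could be sketched as follows. The fields of the `video` dict and the `recognize` callback are placeholders: the patent leaves the actual recognition algorithm to existing techniques, and here the frame timestamp is simply derived from an assumed start time and frame rate:

```python
def split_and_analyze(video, recognize):
    """Resolve one video file into per-frame records and run image
    analysis on each frame, keeping only frames in which a feature
    target was recognized. `recognize` stands in for the plate/face
    recognizer and returns an identifier string or None."""
    hits = []
    for k, frame in enumerate(video["frames"]):
        target = recognize(frame)  # e.g. a licence-plate string, or None
        if target is not None:
            ts = video["start_ts"] + k / video["fps"]  # per-frame timestamp
            hits.append({"target": target, "file": video["file_id"], "ts": ts})
    return hits
```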
It should be noted that, before step S104, the position information of each video acquisition device can be stored in advance. This position information comprises the longitude and latitude of the device's location, which together determine its geographic position; accordingly, a feature target contained in the video images collected by a device acquires the same longitude and latitude as that device.
In addition, in the above embodiments there may be one or more feature targets whose trajectories are to be determined. When there are multiple such targets, the recognized times of appearance and geographic-location information of the different targets can be grouped; then, within each group, the locations of appearance are arranged in the order of the same target's times of appearance. In this way a single parallel video analysis can recognize multiple different feature targets: for example, the motion trajectories of many automobiles can be analyzed simultaneously from the recognized licence plates.
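The grouping step can be sketched as a single pass over the recognition records; the record fields and target codes below are illustrative:

```python
from collections import defaultdict

def group_by_target(records):
    """Split all recognition records into per-target groups, so that
    several plates can be tracked from one parallel analysis of the
    same video corpus. Each group is ordered by time of appearance."""
    groups = defaultdict(list)
    for rec in records:
        groups[rec["target"]].append(rec)
    for recs in groups.values():
        recs.sort(key=lambda r: r["ts"])  # order within each group by time
    return dict(groups)
```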
Fig. 2 is a schematic flowchart of a method for determining the motion trajectory of a feature target in a distributed, parallel manner according to another embodiment of the disclosure.
As shown in Fig. 2, this embodiment can comprise the following steps:
S202: each video analysis node analyzes in parallel the video data from its corresponding video acquisition node. Specifically, a distributed computing framework can be used, combined with existing video analysis algorithms, to perform distributed, parallel analysis of the massive video data, so that all feature targets in the massive video are recognized.
S204: recognize in parallel the "time of appearance" of each feature target from the timestamp information carried in each video frame.
S206: for the video frames whose "time of appearance" has been recognized, recognize in parallel the "geographic coordinate" information of the feature target collected at that time, according to the position information of the video acquisition node that collected those frames; for example, the geographic coordinates corresponding to each feature target's "place of appearance" can be obtained by combining the analysis with the acquisition node's geographic coordinates.
S208: aggregate the associations among each feature target recognized by each video analysis node, its times of appearance, and its places of appearance into "feature target groups": the "feature targets" recognized across all the massive video are gathered automatically, and all the related information of the same feature target is placed into the same group, yielding many association groups, one per feature target.
S210: generate the "feature target trajectory". The geographic coordinates within the same feature target group are marked as "coordinate points" on an electronic map, yielding the "recognition-result coordinate points"; these points are then automatically connected into a track line in the chronological order of their appearance, so that a complete and accurate "feature target trajectory" is generated on the electronic map.
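Step S210 reduces to sorting one group's records by timestamp and emitting the coordinates in that order; a minimal sketch, with invented record fields:

```python
def trajectory(group):
    """Turn one feature target's grouped records into an ordered
    polyline: sort by timestamp, then emit the capture coordinates in
    that order -- the 'recognition-result coordinate points' that are
    joined into a track line on the map."""
    ordered = sorted(group, key=lambda r: r["ts"])
    return [(r["lon"], r["lat"]) for r in ordered]
```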
This embodiment removes the serial bottleneck in data processing and improves data-conversion efficiency; it removes the data-import bottleneck and improves import efficiency; and, by providing multiple data access points, it enables parallel querying of massive data, saving query time.
Next, taking the recognition of a licence plate as an example, the automatic drawing of the motion trajectory of the automobile carrying that plate is described in detail.
Step 1: recognize a certain automobile's licence plate from the massive video files through video analysis, i.e., find the frame pictures containing the same plate in all the videos.
Because a video file consists of many frame pictures, a number of frame pictures containing a feature target (for example, a licence plate) can generally be found in massive video. To locate the feature target's motion trajectory, all frame pictures containing a given feature target are first found, and the following preprocessing is performed:
(1) extract each camera's location information from the pre-stored information, as shown in Table 1:
Camera number | Camera geographic longitude parameter | Camera geographic latitude parameter
… | … | …
Table 1
Table 1 lists the geographic-location information of each camera.
(2) read the following field information from each video file received from a camera:
Video file number | Corresponding camera number
… | …
Table 2
Based on Table 2 above, a video file can be associated with the longitude and latitude parameters of the corresponding camera, so that the geographic-location information corresponding to every frame picture in that video file can be learned from the camera number.
(3) extract the following field information from each frame picture:
Feature target number | Frame picture number | Corresponding video file number | Timestamp
… | … | … | …
Table 3
Because Table 3 above associates the feature target with a timestamp and the corresponding video file number, and using the video-file-to-camera correspondence of Table 2 together with the camera-to-longitude/latitude correspondence of Table 1, the correspondence among the feature target, the timestamp, and the camera's longitude and latitude parameters can be derived, as shown in Table 4:
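These two look-ups amount to a relational join of Tables 1-3 into Table 4; under invented table contents it might be sketched as:

```python
def build_table4(table1, table2, table3):
    """Join the three look-ups described above:
    table3 rows (target, video file, timestamp)
      -> table2 (video file number -> camera number)
      -> table1 (camera number -> (longitude, latitude)),
    yielding Table 4 rows of (target, timestamp, lon, lat)."""
    rows = []
    for rec in table3:
        cam = table2[rec["video_file"]]      # Table 2 lookup
        lon, lat = table1[cam]               # Table 1 lookup
        rows.append((rec["target"], rec["ts"], lon, lat))
    return rows
```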
Feature target number | Timestamp | Camera geographic longitude parameter | Camera geographic latitude parameter
… | … | … | …
Table 4
The feature target can be the licence plate of a certain automobile, for example Shanghai A10001. After this plate is recognized as the feature target through video analysis, it is randomly coded as 00001 and the corresponding frame pictures are found. The following takes as an example five frame pictures with this plate recognized in the massive video:
Table 5
Step 2: find the corresponding camera number from the video file number corresponding to each frame picture, as shown in Table 6:
Video file number | Corresponding camera number
003-10000 | 003
006-10002 | 006
009-10003 | 009
011-10004 | 011
014-10005 | 014
Table 6
Step 3: find the corresponding geographic-location information from the camera numbers queried above, as shown in Table 7:
Table 7
Then associate the feature target number and the timestamps with the camera geographic positions, obtaining the information shown in Table 8:
Table 8
Step 4: connect the corresponding longitude/latitude positions in the timestamp order shown in Table 8 above, so that the motion trajectory of the automobile with the above licence plate can be drawn, as shown in Fig. 3.
Those of ordinary skill in the art will appreciate that all or part of the steps of the above method embodiments can be accomplished by hardware directed by program instructions. The program can be stored in a computer-readable storage medium and, when executed, performs the steps of the above method embodiments; the storage medium can include various media capable of storing program code, such as ROM, RAM, magnetic disks, and optical discs.
Fig. 4 is a schematic structural diagram of a system for determining the motion trajectory of a feature target in a distributed, parallel manner according to an embodiment of the disclosure.
As shown in Fig. 4, the system 40 of this embodiment can comprise a plurality of video acquisition devices 402, a plurality of video analysis devices 404, a plurality of information association devices 406, and a motion trajectory drawing device 408, wherein:
the plurality of video acquisition devices 402 are installed at different geographic locations and collect the feature target at set places;
the plurality of video analysis devices 404 analyze in parallel, based on a distributed architecture, the massive video collected by the plurality of video acquisition devices installed at different geographic locations, so as to recognize a feature target whose trajectory is to be determined;
the plurality of information association devices 406 recognize in parallel, based on the timestamps in the recognized video frames containing the feature target and the position information of the video acquisition devices that collected the corresponding frames, the times at which the feature target appeared at each acquisition location, and associate the feature target with its times and places of appearance;
the motion trajectory drawing device 408 connects in sequence, based on the association between the feature target's times and places of appearance counted by the plurality of information association devices, the geographic locations of appearance in the order of the same feature target's times of appearance, so as to form the motion trajectory of that feature target.
In this embodiment, because multiple nodes process in parallel the massive video collected by the video acquisition devices, processing capacity and efficiency are improved; based on the association between the feature target's times and places of appearance counted at each node, the places are connected in the order of the times of appearance, forming the motion trajectory of the feature target and making it convenient for an analyst to monitor the target's activity.
Fig. 5 is a schematic structural diagram of a system for determining the motion trajectory of a feature target in a distributed, parallel manner according to another embodiment of the disclosure.
As shown in Fig. 5, compared with the embodiment of Fig. 4, each of the plurality of video analysis devices 502 in the system 50 of this embodiment includes a video resolution unit 502a and an image analysis unit 502b, wherein:
the video resolution unit 502a resolves each video file in the massive video into video pictures in units of frames;
the image analysis unit 502b performs image analysis on each video picture to recognize the feature target.
Fig. 6 is a schematic structural diagram of a system for determining the motion trajectory of a feature target in a distributed, parallel manner according to a further embodiment of the disclosure.
As shown in Fig. 6, compared with the embodiment of Fig. 4, the system 60 of this embodiment can further comprise:
a feature target grouping device 602, which, when there are multiple feature targets whose trajectories are to be determined, groups the recognized times of appearance and geographic-location information of the different feature targets and, within each group, arranges the geographic locations of appearance in the order of the same feature target's times of appearance.
In addition, the position information of the video acquisition device that collected the corresponding video frame comprises the longitude and latitude of the device's location.
It should be pointed out that there are one or more feature targets whose trajectories are to be determined.
Each embodiment in this specification is described in a progressive manner; each emphasizes its differences from the other embodiments, and the identical or similar parts of the embodiments can be cross-referenced. Because the device embodiments are essentially similar to the method embodiments, their description is relatively simple; for relevant details, refer to the description of the method embodiments.
The above embodiments of the disclosure require no additional manual processing and can automatically realize functions such as vehicle tracking, personnel tracking, and item tracking in massive video.
Through the multi-node parallel processing method, the present invention greatly improves the processing efficiency of massive video files and improves the capacity of telecom surveillance systems, such as "Global Eye" monitoring, to analyze massive video data.
Although the disclosure has been described with reference to exemplary embodiments, it should be understood that the disclosure is not limited to the above exemplary embodiments. It will be obvious to those skilled in the art that the above exemplary embodiments can be modified without departing from the scope and spirit of the disclosure. The scope of the appended claims should be accorded the broadest interpretation so as to encompass all such modifications and equivalent structures and functions.
Claims (10)
1. A method for determining the motion trajectory of a feature target in a distributed, parallel manner, characterized by comprising:
analyzing in parallel, based on a distributed architecture, the massive video collected by video acquisition devices installed at different geographic locations, so as to recognize a feature target whose trajectory is to be determined;
recognizing in parallel, based on the timestamps in the recognized video frames that contain the feature target and the position information of the video acquisition devices that collected the corresponding frames, the times at which the feature target appeared at each acquisition location, and associating the feature target with its times and places of appearance;
connecting in sequence, based on the statistically obtained association between the feature target's times of appearance and the geographic locations of its appearance, those locations according to the order of the same feature target's times of appearance, so as to form the motion trajectory of that feature target.
2. The method for determining a motion trajectory of a feature target in a distributed, parallel manner according to claim 1, characterized in that the step of analyzing in parallel the massive video collected by the video acquisition devices installed at different geographic locations, so as to identify the feature target whose trajectory is to be determined, comprises:
parsing each video file in the massive video into video pictures in units of frames;
performing image analysis on each video picture to identify the feature target.
3. The method for determining a motion trajectory of a feature target in a distributed, parallel manner according to claim 1, characterized in that the position information of the video acquisition device that collected the corresponding video frame comprises longitude information and latitude information of the position of the video acquisition device.
4. The method for determining a motion trajectory of a feature target in a distributed, parallel manner according to claim 1, characterized in that there are one or more feature targets whose trajectories are to be determined.
5. The method for determining a motion trajectory of a feature target in a distributed, parallel manner according to claim 1, characterized in that the method further comprises:
when there are a plurality of feature targets whose trajectories are to be determined, grouping the identified appearance times and appearance-location information of the different feature targets;
within each group, arranging the geographic locations at which the same feature target appears according to the order of its appearance times.
6. A system for determining a motion trajectory of a feature target in a distributed, parallel manner, characterized in that it comprises:
a plurality of video acquisition devices, installed at different geographic locations, configured to collect video of feature targets at set locations;
a plurality of video analysis devices, configured to analyze in parallel, based on a distributed architecture, the massive video collected by the plurality of video acquisition devices installed at different geographic locations, so as to identify a feature target whose trajectory is to be determined;
a plurality of information association devices, configured to identify in parallel, based on timestamps in the identified video frames containing the feature target and on position information of the video acquisition devices that collected the corresponding video frames, the times at which the feature target appears at each collection location, and to associate the feature target with its appearance times and the geographic locations at which it appears;
a motion trajectory drawing device, configured to connect, based on the association between the appearance times of the feature target and the geographic locations at which it appears as counted by the plurality of information association devices, those geographic locations in sequence according to the order of the appearance times of the same feature target, so as to form the motion trajectory of the same feature target.
7. The system for determining a motion trajectory of a feature target in a distributed, parallel manner according to claim 6, characterized in that each of the plurality of video analysis devices comprises:
a video parsing unit, configured to parse each video file in the massive video into video pictures in units of frames;
an image analysis unit, configured to perform image analysis on each video picture to identify the feature target.
8. The system for determining a motion trajectory of a feature target in a distributed, parallel manner according to claim 6, characterized in that the position information of the video acquisition device that collected the corresponding video frame comprises longitude information and latitude information of the position of the video acquisition device.
9. The system for determining a motion trajectory of a feature target in a distributed, parallel manner according to claim 6, characterized in that there are one or more feature targets whose trajectories are to be determined.
10. The system for determining a motion trajectory of a feature target in a distributed, parallel manner according to claim 6, characterized in that the system further comprises:
a feature target grouping device, configured to, when there are a plurality of feature targets whose trajectories are to be determined, group the identified appearance times and appearance-location information of the different feature targets, and, within each group, arrange the geographic locations at which the same feature target appears according to the order of its appearance times.
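The parallel analysis stage recited in claims 1 and 6 — fanning video files out to multiple analysis workers, each of which emits (target, timestamp, camera) records for frames containing a feature target — can be sketched as follows. This is an illustrative sketch only: a local thread pool stands in for the distributed analysis nodes, and the pre-parsed frame records and recognized target labels are hypothetical placeholders for real video decoding and image analysis.

```python
from concurrent.futures import ThreadPoolExecutor

# Hypothetical pre-parsed frame records; in the claimed system each worker
# node would decode real video and run image analysis. Here a frame is just
# (timestamp, recognized_target_or_None).
video_files = [
    {"camera": "cam-1",
     "frames": [("2013-01-21T08:00:00", "car-A"), ("2013-01-21T08:00:05", None)]},
    {"camera": "cam-2",
     "frames": [("2013-01-21T08:12:00", "car-A")]},
]

def analyze_video(video):
    """Per-worker analysis step: scan one video's frames and emit a
    (target, timestamp, camera) record for each frame containing a target."""
    return [(target, ts, video["camera"])
            for ts, target in video["frames"]
            if target is not None]

# A local thread pool stands in for the distributed analysis nodes.
with ThreadPoolExecutor(max_workers=2) as pool:
    per_video = list(pool.map(analyze_video, video_files))

# Flatten the per-worker outputs into one record stream for the
# association and trajectory-drawing stages.
records = [rec for results in per_video for rec in results]
print(records)
# → [('car-A', '2013-01-21T08:00:00', 'cam-1'), ('car-A', '2013-01-21T08:12:00', 'cam-2')]
```

Because `Executor.map` preserves input order, the flattened record stream is deterministic regardless of which worker finishes first; in a genuinely distributed deployment the same role would be played by a shuffle/merge step.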
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201310020404.2A CN103942811B (en) | 2013-01-21 | 2013-01-21 | Distributed parallel determines the method and system of characteristic target movement locus |
Publications (2)
Publication Number | Publication Date |
---|---|
CN103942811A true CN103942811A (en) | 2014-07-23 |
CN103942811B CN103942811B (en) | 2017-08-15 |
Family
ID=51190462
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201310020404.2A Active CN103942811B (en) | 2013-01-21 | 2013-01-21 | Distributed parallel determines the method and system of characteristic target movement locus |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN103942811B (en) |
Families Citing this family (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN109727348A (en) * | 2018-12-07 | 2019-05-07 | 国网江苏省电力有限公司南京供电分公司 | A kind of electric power tunnel personnel monitoring method |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2009200601A (en) * | 2008-02-19 | 2009-09-03 | Victor Co Of Japan Ltd | Video editing device and method |
CN101807111A (en) * | 2009-02-13 | 2010-08-18 | 夏普株式会社 | Information apparatus, control method and system thereof |
CN101930779A (en) * | 2010-07-29 | 2010-12-29 | 华为终端有限公司 | Video commenting method and video player |
CN201718000U (en) * | 2010-07-16 | 2011-01-19 | 西安秦码软件科技有限公司 | Intelligent dynamic field video monitoring system |
CN102339125A (en) * | 2010-07-23 | 2012-02-01 | 夏普株式会社 | Information equipment and control method and system thereof |
CN202183162U (en) * | 2011-07-26 | 2012-04-04 | 四川中唯交通科技有限公司 | Highway condition real-time monitoring device |
Cited By (45)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN104301677B (en) * | 2014-10-16 | 2018-06-15 | 北京十方慧通科技有限公司 | The method and device monitored towards the panoramic video of large scene |
CN104301677A (en) * | 2014-10-16 | 2015-01-21 | 北京十方慧通科技有限公司 | Panoramic video monitoring method and device orienting large-scale scenes |
CN105589875A (en) * | 2014-10-22 | 2016-05-18 | 方正国际软件(北京)有限公司 | Multi-trajectory drawing method and device |
CN105589875B (en) * | 2014-10-22 | 2019-10-25 | 方正国际软件(北京)有限公司 | A kind of method and device that multi-trace is drawn |
CN105227902A (en) * | 2014-12-02 | 2016-01-06 | 四川浩特通信有限公司 | A kind of intelligence method for supervising under battle conditions |
CN105279766A (en) * | 2014-12-02 | 2016-01-27 | 四川浩特通信有限公司 | Method for monitoring target video object |
CN105227902B (en) * | 2014-12-02 | 2018-11-23 | 四川浩特通信有限公司 | A kind of intelligence actual combat monitoring method |
CN105721825B (en) * | 2014-12-02 | 2018-07-03 | 四川浩特通信有限公司 | A kind of intelligence actual combat system |
CN105279766B (en) * | 2014-12-02 | 2018-04-24 | 四川浩特通信有限公司 | A kind of method for monitoring target video object |
CN105721825A (en) * | 2014-12-02 | 2016-06-29 | 四川浩特通信有限公司 | Intelligent combat system |
CN105721826A (en) * | 2014-12-02 | 2016-06-29 | 四川浩特通信有限公司 | Intelligent combat system |
CN105721826B (en) * | 2014-12-02 | 2018-06-12 | 四川浩特通信有限公司 | A kind of intelligence actual combat system |
CN104484457A (en) * | 2014-12-29 | 2015-04-01 | 广州中国科学院软件应用技术研究所 | Method and system for extracting and searching moving object in parallel video |
CN104539909A (en) * | 2015-01-15 | 2015-04-22 | 安徽大学 | Video monitoring method and video monitoring server |
CN105989610A (en) * | 2015-01-29 | 2016-10-05 | 中兴通讯股份有限公司 | Target tracking method and device |
CN105323548B (en) * | 2015-03-17 | 2018-05-15 | 四川浩特通信有限公司 | A kind of intelligence actual combat system |
CN105306880A (en) * | 2015-03-17 | 2016-02-03 | 四川浩特通信有限公司 | Video concentration method |
CN105323548A (en) * | 2015-03-17 | 2016-02-10 | 四川浩特通信有限公司 | Intelligent combat system |
CN105323547A (en) * | 2015-03-17 | 2016-02-10 | 四川浩特通信有限公司 | Video condensing system |
CN105323547B (en) * | 2015-03-17 | 2018-05-15 | 四川浩特通信有限公司 | A kind of video concentration systems |
CN104933136A (en) * | 2015-06-15 | 2015-09-23 | 北方工业大学 | Dynamic car-pooling method and system based on automatic number plate recognition data |
CN104933136B (en) * | 2015-06-15 | 2019-05-03 | 北方工业大学 | Dynamic share-car method and system based on magnanimity license auto-recognition system data |
WO2016202027A1 (en) * | 2015-06-18 | 2016-12-22 | 中兴通讯股份有限公司 | Object movement trajectory recognition method and system |
CN107315755A (en) * | 2016-04-27 | 2017-11-03 | 杭州海康威视数字技术股份有限公司 | The orbit generation method and device of query object |
CN107527464A (en) * | 2016-06-22 | 2017-12-29 | 杭州海康威视数字技术股份有限公司 | A kind of determination method and device of movement locus |
CN106446002A (en) * | 2016-08-01 | 2017-02-22 | 三峡大学 | Moving target-based video retrieval method for track in map |
CN107016374A (en) * | 2017-04-12 | 2017-08-04 | 电子科技大学 | Intelligent Measurement tracking and the generation method of space-time track towards specific objective |
CN107358146B (en) * | 2017-05-22 | 2018-05-22 | 深圳云天励飞技术有限公司 | Method for processing video frequency, device and storage medium |
CN107358146A (en) * | 2017-05-22 | 2017-11-17 | 深圳云天励飞技术有限公司 | Method for processing video frequency, device and storage medium |
CN108875466A (en) * | 2017-06-01 | 2018-11-23 | 北京旷视科技有限公司 | Monitoring method, monitoring system and storage medium based on recognition of face |
CN107360394B (en) * | 2017-06-16 | 2019-09-27 | 河北汉光重工有限责任公司 | More preset point dynamic and intelligent monitoring methods applied to frontier defense video monitoring system |
CN107360394A (en) * | 2017-06-16 | 2017-11-17 | 河北汉光重工有限责任公司 | More preset point dynamic and intelligent monitoring methods applied to frontier defense video monitoring system |
CN109410278A (en) * | 2017-08-15 | 2019-03-01 | 杭州海康威视数字技术股份有限公司 | A kind of object localization method, apparatus and system |
CN108038176A (en) * | 2017-12-07 | 2018-05-15 | 浙江大华技术股份有限公司 | A kind of method for building up, device, electronic equipment and the medium in passerby storehouse |
CN108769576B (en) * | 2018-05-10 | 2021-02-02 | 郑州信大先进技术研究院 | Intelligent video processing method and system |
CN108417047A (en) * | 2018-05-10 | 2018-08-17 | 杭州盈蝶科技有限公司 | A kind of vehicle location method for tracing and its system |
CN108769576A (en) * | 2018-05-10 | 2018-11-06 | 郑州信大先进技术研究院 | intelligent video processing method and system |
CN111723597A (en) * | 2019-03-18 | 2020-09-29 | 深圳市速腾聚创科技有限公司 | Precision detection method and device of tracking algorithm, computer equipment and storage medium |
CN111723826A (en) * | 2019-03-18 | 2020-09-29 | 深圳市速腾聚创科技有限公司 | Precision detection method and device of tracking algorithm, computer equipment and storage medium |
CN111723826B (en) * | 2019-03-18 | 2023-08-29 | 深圳市速腾聚创科技有限公司 | Method, device, computer equipment and storage medium for detecting precision of tracking algorithm |
CN111475670A (en) * | 2019-11-26 | 2020-07-31 | 北京明略软件系统有限公司 | Track display method and device and storage medium |
CN112818172A (en) * | 2021-01-18 | 2021-05-18 | 浙江大华技术股份有限公司 | Video retrieval method, electronic device and storage medium |
CN112905824A (en) * | 2021-02-08 | 2021-06-04 | 智慧眼科技股份有限公司 | Target vehicle tracking method and device, computer equipment and storage medium |
CN113066182A (en) * | 2021-04-16 | 2021-07-02 | 上海安畅网络科技股份有限公司 | Information display method and device, electronic equipment and storage medium |
CN114199883A (en) * | 2021-12-06 | 2022-03-18 | 广州市斯睿特智能科技有限公司 | 3D gluing detection method and device and application thereof |
Also Published As
Publication number | Publication date |
---|---|
CN103942811B (en) | 2017-08-15 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN103942811A (en) | Method and system for determining motion trajectory of characteristic object in distributed and parallel mode | |
CN110334111B (en) | Multidimensional track analysis method and device | |
CN109783685B (en) | Query method and device | |
TWI425454B (en) | Method, system and computer program product for reconstructing moving path of vehicle | |
CN116188821B (en) | Copyright detection method, system, electronic device and storage medium | |
CN109656973B (en) | Target object association analysis method and device | |
CN105631418A (en) | People counting method and device | |
CN104303193A (en) | Clustering-based object classification | |
CN112257660B (en) | Method, system, equipment and computer readable storage medium for removing invalid passenger flow | |
CN111107319B (en) | Target tracking method, device and system based on regional camera | |
CN112770265B (en) | Pedestrian identity information acquisition method, system, server and storage medium | |
CN112329691A (en) | Monitoring video analysis method and device, electronic equipment and storage medium | |
US20220301317A1 (en) | Method and device for constructing object motion trajectory, and computer storage medium | |
WO2021114985A1 (en) | Companionship object identification method and apparatus, server and system | |
CN109117771B (en) | System and method for detecting violence events in image based on anchor nodes | |
CN116311063A (en) | Personnel fine granularity tracking method and system based on face recognition under monitoring video | |
CN111065044B (en) | Big data based data association analysis method and device and computer storage medium | |
CN113420679B (en) | Artificial intelligence cross-camera multi-target tracking system and tracking method | |
CN112637548B (en) | Information association early warning method and device based on camera | |
CN202306549U (en) | Video retrieval system based on optical flow method | |
CN114724378B (en) | Vehicle tracking statistical system and method based on deep learning | |
CN114821482A (en) | Vector topology integrated passenger flow calculation method and system based on fisheye probe | |
CN111553408B (en) | Automatic test method for video recognition software | |
CN114494355A (en) | Trajectory analysis method and device based on artificial intelligence, terminal equipment and medium | |
JP2022534314A (en) | Picture-based multi-dimensional information integration method and related equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
C06 | Publication | ||
PB01 | Publication | ||
C10 | Entry into substantive examination | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | |