CN108769598A - Cross-camera video condensation method based on pedestrian re-identification

Cross-camera video condensation method based on pedestrian re-identification

Info

Publication number
CN108769598A
CN108769598A (application CN201810584488.5A)
Authority
CN
China
Prior art keywords
video
condensation
pedestrian
target
target object
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810584488.5A
Other languages
Chinese (zh)
Inventor
颜波
李可
林楚铭
马晨曦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Fudan University
Original Assignee
Fudan University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fudan University
Priority to CN201810584488.5A
Publication of CN108769598A
Legal status: Pending

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems for receiving images from a plurality of remote sources
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/40 Scenes; Scene-specific elements in video content
    • G06V 20/41 Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G06V 20/42 Higher-level, semantic clustering, classification or understanding of video scenes, of sport video content


Abstract

The invention belongs to the technical field of video processing, and specifically relates to a cross-camera video condensation method based on pedestrian re-identification. The steps are as follows: (1) videos captured by multiple different cameras are taken as input, and the corresponding condensed videos are output; (2) based on the condensed videos obtained in the condensation stage, for a specific target object, the locations where that target appears in the different condensed videos are first found using a suitable matching metric, and the condensed clips containing the target are extracted, yielding a description of the target's coherent motion trajectory across the videos. The method not only saves a great deal of manual labor but also improves the recognition accuracy for the target object to some extent, and is of significant value in practical applications.

Description

Cross-camera video condensation method based on pedestrian re-identification
Technical field
The invention belongs to the technical field of video processing, and specifically relates to a cross-camera video condensation method based on pedestrian re-identification.
Background art
Since the world entered the digital age at the end of the last century, tens of thousands of surveillance cameras have been deployed at transport hubs such as railway stations and airports and at traffic intersections throughout cities, operating around the clock, and the volume of surveillance video has been growing explosively. Surveillance video also plays an increasingly important role in practical applications such as intelligent security, traffic management, and criminal investigation. Concise surveillance video that nevertheless contains rich information is therefore valuable both for storage and for review.
However, large volumes of lengthy surveillance video place high demands on storage; in practice many videos are deleted quickly because of limited storage space, causing footage containing important information to be lost. In addition, browsing the large amount of useless information in raw video wastes considerable human effort and greatly inconveniences monitoring personnel. More compact video with higher information density not only improves the working efficiency of monitoring personnel and saves labor costs, but also substantially reduces storage consumption, freeing space for more video, and is better suited to the explosive growth of information in modern society. Video condensation techniques, which compress video in the temporal domain while preserving as faithful a representation of the original content as possible, have therefore become a focus of both academia and industry.
Video condensation (also known as video synopsis) compresses a video along its time axis, describing the key details of the original video in as short a time as possible and removing temporal redundancy. It gives surveillance video a higher information density and allows users to browse massive amounts of footage quickly. Combined with video retrieval, it can also locate a specific object from the condensed video in the original video, so that the condensed video serves as an index into the original footage. Existing approaches to video condensation fall into a few classes: fast-forwarding (directly extracting frames at a fixed ratio to shorten the video) and keyframe extraction, neither of which preserves the dynamics of moving objects well; and montage methods, which rearrange related clips to shorten the video. The framework proposed by the present invention can instead select condensation techniques with different characteristics according to different demands, satisfying diverse user needs to the greatest extent.
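The fast-forward baseline mentioned above can be sketched as fixed-ratio frame extraction. This is a minimal illustration, not part of the claimed method; the function name and list-of-frames representation are assumptions:

```python
def fast_forward(frames, ratio):
    """Condense a frame sequence by keeping every `ratio`-th frame.

    This is the fixed-ratio extraction baseline: it shortens the
    video, but the dropped frames lose object dynamics, which is
    exactly the weakness noted above.
    """
    if ratio < 1:
        raise ValueError("ratio must be >= 1")
    return frames[::ratio]

# A 100-frame clip condensed at ratio 4 keeps frames 0, 4, 8, ...
condensed = fast_forward(list(range(100)), 4)
```

Trajectory-based condensation (used later in this patent) avoids this information loss by rearranging object trajectories instead of discarding frames.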
In practice, however, different settings are interested in different objects. Traditional video condensation methods assign the same importance to every moving object in the video (e.g., pedestrians, vehicles), so the condensed video lacks specificity. Moreover, traditional condensation is usually applied to a single video, whereas real applications (e.g., cross-camera pedestrian detection and trajectory tracking) often require linking a given target across different videos. For example, when reconstructing a suspect's trajectory from multiple surveillance videos, monitoring personnel must not only search for the suspect in the footage of a single camera but also retrieve the target's movements across the fields of view of different cameras, so as to obtain a coherent motion trajectory for tracking the suspect and assisting the investigation. This process still consumes substantial manpower and resources, and merely compressing the videos is far from sufficient; condensing different videos that contain the same key object therefore has high research value.
Pedestrian re-identification (ReID) aims to compensate for the visual limitations of fixed cameras by intelligently matching and retrieving a target pedestrian across cameras. Combined with pedestrian detection, tracking, and related techniques, it can be widely applied in fields such as intelligent video surveillance and intelligent security. Using ReID, a target object can be matched across videos, enabling it to be located effectively and rapidly in multiple videos and its movements under different scenes to be retrieved, improving practical working efficiency.
Existing ReID research falls broadly into two categories: feature-representation methods [2-3] and distance-metric methods [4-5]. Feature-based methods represent pedestrians with robust, discriminative features and match the target across videos according to those features; they are computationally simple, but their results are unsatisfactory. Metric-learning methods learn a discriminative distance function for comparing object images across videos, making images of the same object closer than images of different objects; such methods improve identification accuracy but generally require a complex learning process. In recent years, with the development of deep learning and the success of neural networks in computer vision, deep-learning-based ReID algorithms [6-7] have gradually become a research hotspot and achieve better results.
Summary of the invention
To solve the above problems of the prior art, the object of the present invention is to provide a cross-camera video condensation method based on pedestrian re-identification. The invention combines existing pedestrian re-identification and video condensation techniques: on the basis of temporally compressing multiple cross-camera videos, ReID is used to find the matching positions of a target object in the videos captured by different cameras, so that the target's trajectory can be tracked coherently and multiple condensed cross-camera videos containing the target are obtained. This not only saves a great deal of manual labor but also improves the recognition accuracy for the target object to some extent. The technical scheme of the present invention is described in detail as follows.
A cross-camera video condensation method based on pedestrian re-identification comprises the following steps:
(1) Video condensation stage
Videos captured by multiple different cameras are taken as input. A background model is built for the current scene of each video; moving targets are detected and tracked, their motion trajectories are extracted, and the trajectories of multiple targets are recombined. The condensed videos are then output;
(2) Pedestrian re-identification stage
Based on the condensed videos obtained in the condensation stage, for a specific target object, a neural network built on deep-learning principles is first used to learn the target's features and a distance metric. The locations where the target appears in the different condensed videos are then found, and the condensed clips containing the target are extracted. Finally, according to the matched positions of the target in the multiple videos, the condensed clips containing the object are extracted to obtain a description of the target's coherent motion trajectory across the videos.
In the present invention, in step (2), the pedestrian re-identification stage takes any two condensed cross-camera videos obtained in the condensation stage as input and, through multi-layer convolutional neural network processing, outputs the positions at which the corresponding target object appears in the two matched videos, realizing an end-to-end re-identification process.
In the present invention, in step (2), the pedestrian re-identification stage uses a neural network that learns the following procedure through training: first, the target pedestrians in any two condensed cross-camera videos are detected automatically, yielding bounding boxes for them; the features of the pedestrians inside the boxes are then extracted, and the distances between features in related frames of the two videos are computed; the network structure automatically learns the best way to measure the distance between features, assigning different importance weights to different features so that the distance between samples of the same class is small and the distance between samples of different classes is large (larger than the within-class distance); finally, according to the learned feature distances, the closest corresponding target can be found for a given target pedestrian.
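The weighted feature distance described in this step can be sketched in a few lines. This is an illustrative sketch only: in the invention the per-feature weights are learned by the network, whereas here `weighted_distance` and the hand-picked weights are assumptions for demonstration.

```python
from math import sqrt

def weighted_distance(f1, f2, w):
    """Weighted Euclidean distance between two feature vectors.

    `w` plays the role of the learned per-feature importance weights:
    a large weight makes disagreement on that feature costly.
    """
    return sqrt(sum(wi * (a - b) ** 2 for wi, a, b in zip(w, f1, f2)))

# Two views of the same pedestrian differ mainly in the second
# (pose-dependent) feature; a different identity differs in the first.
same_a, same_b = [1.0, 0.2], [1.0, 0.8]   # same identity, pose change
other = [0.0, 0.2]                         # different identity
w = [10.0, 0.1]                            # importance weights (assumed)
d_same = weighted_distance(same_a, same_b, w)
d_diff = weighted_distance(same_a, other, w)
# With these weights d_same < d_diff, matching the training objective
# stated above (within-class distance smaller than between-class).
```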
Compared with the prior art, the beneficial effects of the present invention are:
1. The present invention proposes a simple, practical, and flexibly constructed framework that effectively combines existing video condensation techniques with pedestrian re-identification. It not only reduces temporal redundancy to the greatest extent, producing compact videos with richer information, but also associates the movements of the same target object appearing in different videos, yielding multiple condensed videos containing that object's movements and realizing cross-camera extraction of a specific object's motion trajectory.
2. The present invention is aimed mainly at surveillance-video condensation and target-trajectory extraction. It takes different cross-camera videos directly as input and performs video condensation followed by pedestrian re-identification, finally obtaining condensed videos describing the target object's coherent movements across the different videos. By incorporating re-identification, the same target object's cross-camera activity can be retrieved between different videos automatically, replacing manual cross-camera search, which greatly reduces labor costs and improves target-recognition accuracy.
Description of the drawings
Fig. 1 is the flow chart of the present invention.
Fig. 2 shows the video condensation process used in the example.
Fig. 3 shows the pedestrian re-identification process used in the example.
Fig. 4 shows the results of applying the present invention's pedestrian-re-identification-based video condensation to multiple cross-camera videos.
Detailed description of the embodiments
The technical scheme of the present invention is described in detail below with reference to the accompanying drawings and examples. The overall flow of the method is shown in Fig. 1.
(1) Video condensation stage
In this stage, the videos captured by multiple different cameras are taken as input, a concise summary of each video's content is produced, and the condensed videos are output. Depending on the demands of the application scenario, any of various existing condensation techniques can be applied to condense each cross-camera video separately, giving the framework high flexibility and practicality. Existing condensation methods typically analyze the moving targets in the video to extract them, analyze each target's motion trajectory, splice the different targets into a common background scene, and combine them in some way to generate a new condensed video.
(2) Pedestrian re-identification stage
This stage operates on the multiple condensed videos obtained in the previous stage. For a specific target object, the locations where the target appears in the different condensed videos are first found using a suitable matching metric, and the condensed clips containing the target are extracted. The video-based re-identification used in this stage follows deep-learning principles: the simple, end-to-end structure of a neural network is used to quickly perform cross-camera target matching, with the network learning the target's features and distance metric so that retrieval is fast and accurate. Finally, according to the matched positions of the target in the multiple videos, the condensed clips containing the object are extracted to obtain a description of the target's coherent motion trajectory across the videos.
Step (1) mainly comprises the following: building a background model for the current scene of the video; detecting and tracking moving targets and extracting their motion trajectories; recombining the trajectories of multiple targets; and fusing the recombined trajectories with the background model.
To build the background model of the current scene, the original video is divided into static and dynamic segments, and a unified background model is generated for each segment. For moving-object detection, an object-detection algorithm detects and tracks target objects against the modeled background and extracts their motion trajectories, each target being represented by its trajectory. The trajectories of multiple targets are then recombined to remove the spatio-temporal redundancy of the video; during recombination, problems such as crossing collisions between overlapping targets must be avoided, so that the original targets' basic motion is preserved and strange visual effects such as lost trajectories or deformed objects are prevented. Finally, the condensed video is generated by fusing the recombined multi-target trajectories with the background model; this step requires seamless blending between the trajectories and the background.
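The background-modeling and detection steps above can be sketched with a per-pixel temporal median standing in for the unified background model and simple thresholding standing in for detection. This is a minimal stand-in, not the patent's algorithm; the function names, the median model, and the threshold value are all assumptions:

```python
from statistics import median

def background_model(frames):
    """Per-pixel temporal median over a stack of grayscale frames.

    Since each pixel shows the background most of the time, the
    median suppresses transient moving objects.
    """
    h, w = len(frames[0]), len(frames[0][0])
    return [[median(f[y][x] for f in frames) for x in range(w)]
            for y in range(h)]

def foreground_mask(frame, background, thresh=25):
    """Flag pixels that deviate from the background model as moving."""
    return [[abs(p - b) > thresh for p, b in zip(frow, brow)]
            for frow, brow in zip(frame, background)]
```

Production systems would use an adaptive model (e.g., a mixture-of-Gaussians background subtractor) rather than a fixed median, but the principle is the same: trajectories are whatever persists in the foreground masks over time.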
To demonstrate the implementation and effect of the proposed framework, step (1) takes as an example the seam-carving-based video condensation method previously proposed by the inventors, condensing each of the multiple videos separately. The specific condensation procedure is described in document [1].
In step (2), taking a deep-learning-based pedestrian re-identification method as an example (Fig. 3): this stage uses an existing neural network structure, takes the two condensed cross-camera videos obtained in the previous stage as input, and, through multi-layer convolutional neural network processing, outputs the positions at which the corresponding target object appears in the two matched videos, realizing an end-to-end re-identification process.
The network first uses multiple convolutional layers to detect the target pedestrians in the two videos automatically, yielding their bounding boxes. The features of the pedestrians inside the boxes are then extracted, and the distances between features in related frames of the two videos are computed. The network structure automatically learns the best way to measure the distance between features (e.g., Manhattan distance, Euclidean distance, or Bhattacharyya distance) and assigns different importance weights to different features so that the within-class distance is small and the between-class distance is large. Finally, according to the learned feature distances, the most similar corresponding target can be found for a given target pedestrian. Owing to the nature of neural networks, the intermediate convolutional layers can represent this whole process, so the stage does not perform the above operations explicitly; instead the network learns the procedure automatically through training, and the resulting model performs re-identification efficiently.
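The candidate metrics named above (Manhattan, Euclidean, and Bhattacharyya distance) can be written out directly. This is an illustrative sketch; the function names are assumptions, and in the invention the network selects and weights the metric automatically rather than computing it by hand:

```python
from math import sqrt, log

def manhattan(a, b):
    """L1 distance between two feature vectors."""
    return sum(abs(x - y) for x, y in zip(a, b))

def euclidean(a, b):
    """L2 distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def bhattacharyya(p, q):
    """Bhattacharyya distance between two normalized histograms
    (each summing to 1); 0 means the distributions are identical."""
    bc = sum(sqrt(x * y) for x, y in zip(p, q))  # Bhattacharyya coefficient
    return -log(bc)
```

Manhattan and Euclidean suit raw feature vectors, while the Bhattacharyya distance suits histogram-style features such as color distributions.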
Fig. 2 shows the video condensation process chosen for the example; the specific steps are:
As shown in Fig. 2, taking a seam-carving-based method as an example, it comprises the following steps:
(1) Background modeling: the video is segmented according to the differing characteristics of its static and dynamic content, and a unified background model of the current scene is produced.
(2) Target-trajectory extraction: against the modeled background, moving targets are detected and their motion trajectories tracked; each target object is represented by its trajectory.
(3) Trajectory fusion: the extracted trajectories of the multiple moving targets are recombined to remove the spatio-temporal redundancy of the video; crossing collisions between overlapping targets must be avoided during recombination, so that the original targets' basic motion information is preserved and strange visual effects such as lost trajectories or deformed objects are prevented.
(4) Condensed-video generation: the set of target trajectories is spliced with the background image to synthesize the condensed video; this step requires seamless blending between the multi-target trajectories and the background.
As shown in Fig. 3, taking a deep-learning-based pedestrian re-identification method as an example, the steps are as follows:
(1) An existing network structure [such as 2, 3] is used first; the two condensed cross-camera videos obtained in the previous stage are taken as input, and multiple convolutional layers automatically detect the target pedestrians in the two videos, yielding their bounding boxes.
(2) Convolutional layers then extract the features of the pedestrians inside the boxes, and the distances between features in related frames of the two videos are computed. The network structure automatically learns the best way to measure the distance between features (e.g., Manhattan distance, Euclidean distance, or Bhattacharyya distance) and assigns different importance weights to different features so that the within-class distance is small and the between-class distance is large.
(3) Finally, according to the learned feature distances, the most similar corresponding target is found for a given target pedestrian, and the positions at which the corresponding target object appears in the two matched videos are output.
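The final retrieval step amounts to a nearest-neighbour search in the learned feature space: the query pedestrian from one condensed video is matched to its most similar counterpart in the other. A minimal sketch under that assumption; `match_target` and the plain Euclidean metric are illustrative stand-ins for the learned weighted distance:

```python
from math import sqrt

def euclidean(a, b):
    """L2 distance between two feature vectors."""
    return sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def match_target(query_feat, gallery_feats):
    """Return the index of the gallery feature closest to the query.

    `query_feat` is the target pedestrian's feature from one video;
    `gallery_feats` are candidate pedestrian features from the other.
    """
    dists = [euclidean(query_feat, g) for g in gallery_feats]
    return dists.index(min(dists))
```

In practice a distance threshold would also be applied, so that a query with no true counterpart in the other video is reported as unmatched rather than forced onto its nearest neighbour.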
Fig. 4 illustrates the results of the method:
Panels (a) and (b) show condensed video frames obtained by applying video condensation alone to two videos captured by different cameras. Panels (c) and (d) show the corresponding condensed frames extracted by the re-identification-based video condensation technique, where bounding boxes of the same color in the two panels correspond to the same target object. The method compresses the videos in the temporal domain while fully preserving the objects' motion information, and at the same time effectively detects the correspondence of the target object's movements between the two videos and accurately extracts the target's cross-video motion trajectory.
References
[1] Yan Bo, Xue Xiangyang, Li Ke, Wang Weiyi. A video condensation method based on seam carving: CN 103763562 A [P]. 2014.
[2]Farenzena M, Bazzani L, Perina A, et al. Person re-identification by symmetry-driven accumulation of local features[C]// Computer Vision and Pattern Recognition. IEEE, 2010:2360-2367.
[3] Cheng D S, Cristani M, Stoppa M, et al. Custom pictorial structures for re-identification[C]// British Machine Vision Conference (BMVC). 2011: 68.1-68.11.
[4]Xing E P, Ng A Y, Jordan M I, et al. Distance metric learning, with application to clustering with side-information[C]// International Conference on Neural Information Processing Systems. MIT Press, 2002:521-528.
[5]Zheng W S, Gong S, Xiang T. Person re-identification by probabilistic relative distance comparison[C]// Computer Vision and Pattern Recognition. IEEE, 2011:649-656.
[6] Rui Zhao, Wanli Ouyang, Xiaogang Wang. Person re-identification by saliency learning[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2017, 39(2): 356-370.
[7] Niall McLaughlin, Jesus Martinez del Rincon, Paul Miller. Recurrent convolutional network for video-based person re-identification[C]// Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2016: 1325-1334.

Claims (3)

1. A cross-camera video condensation method based on pedestrian re-identification, characterized in that it comprises the following steps:
(1) Video condensation stage
Videos captured by multiple different cameras are taken as input. A background model is built for the current scene of each video; moving targets are detected and tracked, their motion trajectories are extracted, and the trajectories of multiple targets are recombined. The condensed videos are then output;
(2) Pedestrian re-identification stage
Based on the condensed videos obtained in the condensation stage, for a specific target object, a neural network built on deep-learning principles is first used to learn the target's features and a distance metric. The locations where the target appears in the different condensed videos are then found, and the condensed clips containing the target are extracted. Finally, according to the matched positions of the target in the multiple videos, the condensed clips containing the object are extracted to obtain a description of the target's coherent motion trajectory across the videos.
2. The method according to claim 1, characterized in that, in step (2), the pedestrian re-identification stage takes any two condensed cross-camera videos obtained in the condensation stage as input and, through multi-layer convolutional neural network processing, outputs the positions at which the corresponding target object appears in the two matched videos, realizing an end-to-end re-identification process.
3. The method according to claim 1, characterized in that, in step (2), the pedestrian re-identification stage uses a neural network that learns the following procedure through training: first, the target pedestrians in any two condensed cross-camera videos are detected automatically, yielding their bounding boxes; the features of the pedestrians inside the boxes are then extracted, and the distances between features in related frames of the two videos are computed; the network structure automatically learns the best way to measure the distance between features and assigns different importance weights to different features so that the within-class distance is small and the between-class distance is large; finally, according to the learned feature distances, the most similar corresponding target can be found for a given target pedestrian.
CN201810584488.5A 2018-06-08 2018-06-08 Cross-camera video condensation method based on pedestrian re-identification Pending CN108769598A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810584488.5A CN108769598A (en) 2018-06-08 2018-06-08 Cross-camera video condensation method based on pedestrian re-identification


Publications (1)

Publication Number Publication Date
CN108769598A true CN108769598A (en) 2018-11-06

Family

Family ID: 63999423

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810584488.5A Pending CN108769598A (en) 2018-06-08 Cross-camera video condensation method based on pedestrian re-identification

Country Status (1)

Country Link
CN (1) CN108769598A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583373A * 2018-11-29 2019-04-05 成都索贝数码科技股份有限公司 Pedestrian re-identification implementation method
CN110267008A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110378931A * 2019-07-10 2019-10-25 成都数之联科技有限公司 Multi-camera-based pedestrian target motion trajectory acquisition method and system
CN110996072A (en) * 2019-03-11 2020-04-10 南昌工程学院 Multi-source information fusion system and working method thereof
CN111263955A (en) * 2019-02-28 2020-06-09 深圳市大疆创新科技有限公司 Method and device for determining movement track of target object
CN112288865A (en) * 2019-07-23 2021-01-29 比亚迪股份有限公司 Map construction method, device, equipment and storage medium
CN112492209A (en) * 2020-11-30 2021-03-12 维沃移动通信有限公司 Shooting method, shooting device and electronic equipment
CN112711966A (en) * 2019-10-24 2021-04-27 阿里巴巴集团控股有限公司 Video file processing method and device and electronic equipment

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103686095A * 2014-01-02 2014-03-26 中安消技术有限公司 Video condensation method and system
CN105354548A (en) * 2015-10-30 2016-02-24 武汉大学 Surveillance video pedestrian re-recognition method based on ImageNet retrieval
US20160277190A1 (en) * 2015-03-19 2016-09-22 Xerox Corporation One-to-many matching with application to efficient privacy-preserving re-identification
CN106778464A * 2016-11-09 2017-05-31 深圳市深网视界科技有限公司 Deep-learning-based pedestrian re-identification method and device
CN108108662A * 2017-11-24 2018-06-01 深圳市华尊科技股份有限公司 Deep neural network recognition model and recognition method


Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109583373A (en) * 2018-11-29 2019-04-05 成都索贝数码科技股份有限公司 Pedestrian re-identification implementation method
CN109583373B (en) * 2018-11-29 2022-08-19 成都索贝数码科技股份有限公司 Pedestrian re-identification implementation method
CN111263955A (en) * 2019-02-28 2020-06-09 深圳市大疆创新科技有限公司 Method and device for determining movement track of target object
WO2020172870A1 (en) * 2019-02-28 2020-09-03 深圳市大疆创新科技有限公司 Method and apparatus for determining motion trajectory of target object
CN110996072A (en) * 2019-03-11 2020-04-10 南昌工程学院 Multi-source information fusion system and working method thereof
CN110267008A (en) * 2019-06-28 2019-09-20 Oppo广东移动通信有限公司 Image processing method, device, server and storage medium
CN110267008B (en) * 2019-06-28 2021-10-22 Oppo广东移动通信有限公司 Image processing method, image processing apparatus, server, and storage medium
CN110378931A (en) * 2019-07-10 2019-10-25 成都数之联科技有限公司 Multi-camera-based pedestrian target motion trajectory acquisition method and system
CN112288865A (en) * 2019-07-23 2021-01-29 比亚迪股份有限公司 Map construction method, device, equipment and storage medium
CN112711966A (en) * 2019-10-24 2021-04-27 阿里巴巴集团控股有限公司 Video file processing method and device and electronic equipment
CN112711966B (en) * 2019-10-24 2024-03-01 阿里巴巴集团控股有限公司 Video file processing method and device and electronic equipment
CN112492209A (en) * 2020-11-30 2021-03-12 维沃移动通信有限公司 Shooting method, shooting device and electronic equipment

Similar Documents

Publication Publication Date Title
CN108769598A (en) Cross-camera video concentration method based on pedestrian re-identification
CN106354816B (en) Video image processing method and device
CN103824070B (en) Rapid pedestrian detection method based on computer vision
Nandhini et al. CNN Based Moving Object Detection from Surveillance Video in Comparison with GMM
CN108073929A (en) Object detecting method and equipment based on dynamic visual sensor
CN108256439A (en) Pedestrian image generation method and system based on cycle generative adversarial network
US20030033318A1 (en) Instantly indexed databases for multimedia content analysis and retrieval
CN105227907B (en) Unsupervised anomalous event real-time detection method based on video
CN105631430A (en) Matching method and apparatus for face image
CN107153824A (en) Cross-video pedestrian re-identification method based on graph clustering
CN104504395A (en) Method and system for achieving classification of pedestrians and vehicles based on neural network
CN107657232B (en) Pedestrian intelligent identification method and system
CN113963315A (en) Real-time video multi-user behavior recognition method and system in complex scene
CN111753601B (en) Image processing method, device and storage medium
CN109271927A (en) Space-based multi-platform collaborative monitoring method
CN113792606A (en) Low-cost self-supervised pedestrian re-identification model construction method based on multi-object tracking
CN109948474A (en) AI thermal imaging all-weather intelligent monitoring method
CN112465854A (en) Unmanned aerial vehicle tracking method based on anchor-free detection algorithm
Atikuzzaman et al. Human activity recognition system from different poses with CNN
CN113887469A (en) Method, system and storage medium for pedestrian fall detection
CN113920585A (en) Behavior recognition method and device, equipment and storage medium
CN202121706U (en) Intelligent personnel monitoring system
CN104504162B (en) Video retrieval method based on robot vision platform
CN111008601A (en) Fighting detection method based on video
CN112906679B (en) Pedestrian re-identification method, system and related equipment based on human shape semantic segmentation

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 2018-11-06)