CN105528364A - Iterative video image retrieval method and device

Info

Publication number
CN105528364A
Authority
CN
China
Prior art keywords
retrieval
iteration
clue
iterative
searching
Prior art date
Legal status
Pending
Application number
CN201410520736.1A
Other languages
Chinese (zh)
Inventor
杨鹏
Current Assignee
Hitachi Ltd
Original Assignee
Hitachi Ltd
Priority date
Filing date
Publication date
Application filed by Hitachi Ltd
Priority to CN201410520736.1A
Publication of CN105528364A

Abstract

The invention provides an iterative video image retrieval method and device. The method adopts an iterative retrieval engine, and a method of multiple iterations can be used to gradually cover all possible correct retrieval results. In the iterative retrieval step of each layer, suitable candidate feature vectors are selected and recorded as intermediate results, and these intermediate feature vectors are used as the starting points of the next layer's iteration for a further iterative retrieval step. With this method, the adverse impact of various kinds of interference in a large-scale video monitoring system on the retrieval results can be effectively reduced, the retrieval precision can be improved, and the support capability of video retrieval for large-scale monitoring services is improved.

Description

Iterative video image retrieval method and device
Technical field
The present invention relates to an iterative video image retrieval method and device, and more specifically, to an iterative video image retrieval method and device for a large-scale video monitoring system.
Background art
Video monitoring systems are widely used in public security. In each city, hundreds of thousands of cameras are often deployed every year. In the existing large-scale monitoring systems of Chinese cities, large amounts of monitoring equipment have been deployed, including cameras, encoding/decoding devices, digital video recorders, and storage and playback systems, and a considerable proportion of the cameras meet the 720P or 1080P high-definition standard. During daily operation, a video monitoring system produces a large amount of video data, and the operational requirements of the system dictate that this data be retained for periods ranging from 3 months to 1 year. The accumulated video data not only occupies a great deal of storage space and network bandwidth, but is also poorly suited to analyzing and organizing video content. When a required target object (such as a face, a car, or unidentified luggage) has to be found in massive video content, operators must spend a large amount of time watching and investigating manually. This not only consumes considerable manpower and time but also greatly reduces search precision. More importantly, in the public safety field, the time spent tracing clues after an incident occurs is extremely valuable; if information technology can be used to significantly improve the speed of investigating video monitoring content, this is of great significance for citizen safety and social stability.
In recent years, some large-scale video analysis and retrieval systems have gradually come into use. Such systems analyze video content in real time or online, and store descriptions (generally feature vectors) of the corresponding sensitive features (such as faces or vehicles) obtained from the analysis in a database. When a user needs to investigate a certain sensitive feature, the feature can be input into the database and compared, so that the occurrences of the sensitive feature or target object in the massive video content can be quickly located. On the one hand, this greatly reduces the time required to find a target object in massive video content, saving manpower and material resources; on the other hand, it avoids the loss of search precision caused by operator fatigue. In content-based video retrieval, the feature vector describing the target object is generally compared with the feature vectors in the feature database corresponding to the massive video content, the similarity between feature vectors (generally the difference between them) is calculated, and the retrieval result set is finally presented to the user terminal in order of similarity from high to low (i.e., difference between feature vectors from small to large).
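To make this ranking convention concrete, the following minimal Python sketch (not part of the patent; the function name and data layout are illustrative assumptions) compares a query feature vector against stored candidate vectors and sorts the candidates by ascending vector difference, i.e., by descending similarity, as described above.

    import numpy as np

    def rank_by_similarity(query_vec, candidates):
        """Rank candidate feature vectors by similarity to the query vector.

        query_vec: 1-D array describing the target object.
        candidates: iterable of (metadata, feature_vector) pairs taken from the
            feature database built from the video content.
        Returns (difference, metadata) pairs sorted by ascending difference,
        i.e. by descending similarity, as described in the text.
        """
        scored = []
        for meta, vec in candidates:
            diff = np.linalg.norm(np.asarray(query_vec) - np.asarray(vec))  # smaller = more similar
            scored.append((diff, meta))
        scored.sort(key=lambda item: item[0])
        return scored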
In small-scale video surveillance application systems, the above method supports high-speed, high-precision video retrieval. In large-scale video surveillance application systems, however, simply adopting such a retrieval method reduces retrieval precision. The main reason is that, in a large amount of video content, when the same target object or sensitive feature appears at different cameras, different locations, and different times, the feature vectors it exhibits differ slightly. Due to factors such as illumination, angle, and environment, the feature vectors of other, non-target objects may show better similarity. At the same time, a target object may appear for a long time concentrated at a certain time and place, so that a single retrieval presents these concentrated occurrences on the user interface while the object's appearances at other times and places are not effectively presented. Therefore, video retrieval needs to be optimized and improved for large-scale video monitoring scenarios.
Some prior work has addressed this aspect.
Patent Document 1 describes an architecture design for large-scale video retrieval. This framework focuses on how to use parallel computation to accelerate the retrieval process, but provides no effective means of solving the problem of video retrieval precision in large-scale video monitoring systems.
Patent Document 2 describes a method of video retrieval using text annotation together with content-based features. This method simply combines text annotation and comparison with content-based video retrieval to improve retrieval precision. However, in large-scale video monitoring systems, comprehensive text annotation is impossible to complete, and a considerable number of target objects and sensitive features cannot be characterized by text.
Patent Document 3 proposes multi-source video metadata retrieval and a related business framework. It mainly addresses how to use multiple kinds of annotation information for business operations and video retrieval, and does not address how to improve the precision of content-based video retrieval. In large-scale video monitoring systems, this method cannot guarantee high-precision retrieval results.
Patent Document 4 proposes a device supporting video content retrieval. In this device, audio and video content can be indexed jointly, thereby improving precision. However, it provides no effective means of improving precision for retrieval that applies only to video content in large-scale video monitoring systems.
Prior art documents
Patent documents
Patent Document 1: US 2014/0164406 A1
Patent Document 2: WO 2014/082288
Patent Document 3: US 2013/0326573 A1
Patent Document 4: US 2014/0188834
Summary of the invention
The present invention has been developed in view of the above problems, and its object is to propose an iterative video image retrieval method and device that can effectively exclude relevant interference and improve retrieval precision in a large-scale video monitoring system.
One aspect of the present invention relates to an iterative video image retrieval method, characterized in that a retrieval result is obtained through multiple iterations, comprising: an iteration strategy setting step of setting the iteration depth and the number of retrieval clues per iteration layer; a retrieval clue input step of inputting, from the user side, the feature vectors describing the features of the target object of the initial retrieval as a retrieval clue set, according to the number of retrieval clues set in the iteration strategy setting step; a retrieval step of recording the retrieval starting point and sending the feature vector of each retrieval clue in the retrieval clue set to a retrieval server for retrieval; an output step of receiving the feature vector set of the retrieval result from the retrieval server and outputting it to the user side; an iteration depth determination step of judging whether the current retrieval step has reached the iteration depth set in the iteration strategy setting step; an intermediate result determination step of, when the iteration depth determination step judges that the set iteration depth has not been reached, querying an iteration record database according to the number of retrieval clues set in the iteration strategy setting step, selecting and determining the retrieval clue set for the next iterative retrieval, and executing the retrieval step; an iteration record database updating step of, when the iteration depth determination step judges that the set iteration depth has been reached, recording into the iteration record database the marks given by the user after the retrieval result is completed; an iteration progress determination step of judging whether the retrieval of all retrieval clues in the retrieval clue set has been completed in the retrieval step of each iteration; a retrieval clue updating step of, when the iteration progress determination step judges that the retrieval of all retrieval clues in the retrieval clue set has not been completed, taking the nearest uncompleted retrieval clue on the iteration tree as the retrieval clue of the next iterative retrieval and executing the retrieval step; and an ending step of, when the iteration progress determination step judges that the retrieval of all retrieval clues in the retrieval clue set has been completed, ending the iterative video image retrieval process.
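For clarity only, the following Python sketch outlines the control flow of the steps listed above; it is one possible reading of the method, and the callables retrieve, query_iteration_record_db, and record_user_marks are hypothetical placeholders for the retrieval server interaction, the iteration record database query, and the user-mark recording, respectively.

    def iterative_retrieval(initial_clues, iteration_depth, clues_per_layer,
                            retrieve, query_iteration_record_db, record_user_marks):
        """Sketch of the iterative retrieval flow summarized above.

        retrieve(clue) -> feature vector set returned by the retrieval server.
        query_iteration_record_db(results, n) -> next retrieval clue set of size n.
        record_user_marks(results) -> stores user marks in the iteration record database.
        All three callables are hypothetical placeholders, not a defined API.
        """
        # Iteration tree frontier: (clue, depth) pairs still to be retrieved.
        pending = [(clue, 0) for clue in initial_clues]
        while pending:                              # iteration progress determination
            clue, depth = pending.pop()             # nearest uncompleted clue on the iteration tree
            results = retrieve(clue)                # retrieval step + output step
            if depth + 1 >= iteration_depth:        # iteration depth determination
                record_user_marks(results)          # iteration record database updating
                continue
            # Intermediate result determination: pick the next layer's clue set.
            next_clues = query_iteration_record_db(results, clues_per_layer)
            pending.extend((c, depth + 1) for c in next_clues)
        # Ending step: all retrieval clues in every clue set have been retrieved.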
In addition, in the present invention it is preferable that, in the intermediate result determination step, after the iteration record database is queried to select and determine the retrieval clue set for the next iterative retrieval, the selected retrieval clue set for the next iterative retrieval is output to the user side, and the retrieval step is executed using the retrieval clue set for the next iterative retrieval selected by the user.
In addition, in the present invention it is preferable that, for the retrieval clues in the retrieval clue set, the retrieval step is executed starting from the next retrieval clue in pre-order, in-order, or post-order on the iteration tree.
In addition, in the present invention it is preferable that the feature vector of the target object is compared with the feature vectors in the feature database corresponding to the massive video content, the similarity between the feature vectors is calculated, and the feature vector set of the retrieval result is presented at the user side in order of similarity from high to low.
In addition, in the present invention it is preferable that, in the intermediate result determination step, the retrieval clue set for the next iterative retrieval is selected and determined by the user, or is automatically selected and determined according to reference information in the iteration record database.
In addition, in the present invention it is preferable that the reference information is determined by reusing, for this retrieval, the relations between object vectors established on the basis of user evaluation information in previous retrievals.
In addition, in the present invention it is preferable that the reference information is determined from reference historical feedback vectors between object vectors in different environments obtained by using a big data analysis tool.
Another aspect of the present invention relates to an iterative video image retrieval device, characterized in that a retrieval result is obtained through multiple iterations, comprising: an iteration strategy setting unit that sets the iteration depth and the number of retrieval clues per iteration layer; a retrieval clue input unit that inputs, from the user side, the feature vectors describing the features of the target object of the initial retrieval as a retrieval clue set, according to the number of retrieval clues set by the iteration strategy setting unit; a retrieval unit that records the retrieval starting point and sends the feature vector of each retrieval clue in the retrieval clue set to a retrieval server for retrieval; an output unit that receives the feature vector set of the retrieval result from the retrieval server and outputs it to the user side; an iteration depth judging unit that judges whether the current retrieval unit has reached the iteration depth set by the iteration strategy setting unit; an intermediate result determining unit that, when the iteration depth judging unit judges that the set iteration depth has not been reached, queries an iteration record database according to the number of retrieval clues set by the iteration strategy setting unit, selects and determines the retrieval clue set for the next iterative retrieval, and calls the retrieval unit; an iteration record database updating unit that, when the iteration depth judging unit judges that the set iteration depth has been reached, records into the iteration record database the marks given by the user after the retrieval result is completed; an iteration progress judging unit that judges whether the retrieval unit of each iteration has completed the retrieval of all retrieval clues in the retrieval clue set; a retrieval clue updating unit that, when the iteration progress judging unit judges that the retrieval of all retrieval clues in the retrieval clue set has not been completed, takes the nearest uncompleted retrieval clue on the iteration tree as the retrieval clue of the next iterative retrieval and calls the retrieval unit; and an ending unit that, when the iteration progress judging unit judges that the retrieval of all retrieval clues in the retrieval clue set has been completed, ends the iterative video image retrieval process.
Effects of the invention
According to the present invention, a new iterative retrieval engine is adopted in a large-scale video monitoring system, and a method of multiple iterations can be used to progressively cover all possible correct retrieval results. In the iterative retrieval step of each layer, suitable candidate feature vectors are selected and recorded as intermediate results, and these intermediate feature vectors are then used as the starting points of the next layer's iteration for further iterative retrieval. In this way, the negative effects of various kinds of interference in a large-scale video monitoring system on the retrieval results can be effectively reduced, the retrieval precision can be improved, and the support capability of video retrieval for large-scale monitoring services is improved.
Brief description of the drawings
The accompanying drawings are provided for further understanding of the present invention, form a part of the description, and together with the preferred embodiments serve to explain the present invention in detail; they are not to be construed as limiting the invention. In the drawings:
Fig. 1 is a diagram showing one type of existing video monitoring system.
Fig. 2 is a diagram showing another type of existing video monitoring system.
Fig. 3 is a diagram showing a typical structure inside the monitoring center 4.
Fig. 4 is a diagram showing a typical structure inside the monitoring center 8.
Fig. 5 is a diagram showing the problems of existing large-scale video content retrieval systems in the video monitoring field.
Fig. 6 is a diagram showing one typical module structure of the retrieval server 10 and the business server 11 in the monitoring center 4/8 according to the present invention.
Fig. 7 is a diagram showing another typical module structure of the retrieval server 10 and the business server 11 in the monitoring center 4/8 according to the present invention.
Fig. 8 is a diagram showing the main idea of the iterative retrieval method of the present invention.
Fig. 9 is a flowchart showing the main internal flow of the iterative retrieval engine 13 of the present invention.
Fig. 10 is a diagram showing the main system flow of the present invention.
Fig. 11 is a diagram showing a typical application scenario and flow of the present invention.
Embodiments
Embodiments of the present invention will be described in detail below with reference to the accompanying drawings, but the present invention is not limited to these embodiments. In addition, in the following description of the present invention, specific descriptions of known functions and configurations are omitted to avoid obscuring the subject matter of the present invention.
Fig. 1 is a diagram showing one type of existing video monitoring system. Fig. 2 is a diagram showing another type of existing video monitoring system. Fig. 3 is a diagram showing a typical structure inside the monitoring center 4. Fig. 4 is a diagram showing a typical structure inside the monitoring center 8.
As shown in Fig. 1, the system architecture of one current intelligent monitoring system comprises at least a plurality of monitoring cameras 1 and a plurality of front-end servers 2 at the system front end. Each front-end server 2 can support a plurality of monitoring cameras 1 simultaneously, and the number of monitoring cameras 1 supported by each front-end server 2 differs depending on the intelligent video analysis functions supported. The front-end servers 2 interact with the monitoring center 4 via a dedicated network or the Internet 3. The monitoring center 4 manages the entire system and has a storage center (video library) 5 at its back end, in which all monitoring videos are stored. The monitoring terminal 7 can perform monitoring work at the monitoring center 4, or the monitoring terminal 6 can remotely log into the monitoring center 4 via the network 3 to perform monitoring work.
As shown in Fig. 2, the system architecture of another current intelligent monitoring system comprises at least a plurality of monitoring cameras 1 at the system front end, and multi-channel video is sent to the monitoring center 8 via a dedicated network or the Internet 3. Whether to perform intelligent video analysis can be selected at the monitoring center 8. It likewise has a storage center (video library) 5 at its back end, in which all monitoring videos are stored. The monitoring terminal 7 can perform monitoring work at the monitoring center 8, or the monitoring terminal 6 can remotely log into the monitoring center 8 via the network 3 to perform monitoring work. Unlike the system architecture shown in Fig. 1, this architecture does not include front-end servers 2. Although this approach saves many front-end servers, if the monitoring center 8 is to perform intelligent video analysis and retrieval, the resources consumed and the retrieval delay will be larger than in the system architecture shown in Fig. 1.
Fig. 3 describes, for the video monitoring system of Fig. 1, the typical architecture inside the monitoring center 4 that supports the content-based video image retrieval function. It includes the business server 11 that performs basic video monitoring functions and the retrieval server 10 that performs video and image retrieval functions. In a large-scale video monitoring system, these two classes of servers generally appear in the form of clusters, and the servers in a cluster are deployed in a distributed or centralized manner.
Fig. 4 describes, for the video monitoring system of Fig. 2, the typical architecture inside the monitoring center 8 that supports the content-based video image retrieval function. It includes the business server 11 that performs basic video monitoring functions and the retrieval server 10 that performs video and image retrieval functions. Since there is no front-end server to perform video analysis, the monitoring center 8 needs to support an analysis server 9 internally. In a large-scale video monitoring system, these three classes of servers generally appear in the form of clusters, and the servers in a cluster are deployed in a distributed or centralized manner.
Fig. 5 is a diagram showing the problems of existing large-scale video content retrieval systems in the video monitoring field. As shown in Fig. 5, an existing large-scale video content retrieval system in the video monitoring field analyzes video content in real time or online, and stores descriptions (generally feature vectors) of the corresponding sensitive features (such as faces or vehicles) obtained from the analysis in a database. When a user needs to investigate a certain sensitive feature, the feature can be input into the database and compared, so that the occurrences of the sensitive feature or target object in the massive video content can be quickly located. In content-based video retrieval, the feature vector describing the target object is generally compared with the feature vectors in the feature database corresponding to the massive video content, the similarity between feature vectors (generally the difference between them) is calculated, and the retrieval result set is finally presented to the user terminal in order of similarity from high to low (i.e., difference between feature vectors from small to large). However, in large-scale video surveillance application systems, simply adopting such a retrieval method reduces retrieval precision. The main reason is that, in a large amount of video content, when the same target object or sensitive feature appears at different cameras, different locations, and different times, the feature vectors it exhibits differ slightly. Due to factors such as illumination, angle, and environment, the feature vectors of other, non-target objects may show better similarity (interference type two). At the same time, a target object may appear for a long time concentrated at a certain time and place, so that a single retrieval presents these concentrated occurrences on the user interface while the object's appearances at other times and places are not effectively presented (interference type one).
Fig. 6 is a diagram showing one typical module structure of the retrieval server 10 and the business server 11 in the monitoring center 4/8 according to the present invention. Fig. 7 is a diagram showing another typical module structure of the retrieval server 10 and the business server 11 in the monitoring center 4/8 according to the present invention.
As shown in Fig. 6, in the architecture described in Fig. 3 or Fig. 4, the business server 11 internally supports the iterative retrieval engine 13 that supports the iterative retrieval method proposed by the present invention, and the retrieval server supports the iteration record database 12 that supports the iterative retrieval method proposed by the present invention.
As shown in Fig. 7, in the architecture described in Fig. 3 or Fig. 4, in order to support the iterative retrieval method proposed by the present invention, both the iterative retrieval engine 13 and the iteration record database 12 are supported within the business server 11.
Fig. 8 is a diagram showing the main idea of the iterative retrieval method of the present invention. As shown in Fig. 8, the method of the present invention adopts a new iterative retrieval engine, and a method of multiple iterations can be used to progressively cover all possible correct retrieval results. In the iterative retrieval step of each layer, suitable candidate feature vectors are selected and recorded as intermediate results, and these intermediate feature vectors are then used as the starting points of the next layer's iteration for further iterative retrieval. In this way, the negative effects of various kinds of interference in a large-scale video monitoring system on the retrieval results can be effectively reduced, the retrieval precision can be improved, and the support capability of video retrieval for large-scale monitoring services is improved. In the iterative retrieval process, the selection of intermediate results can rely on the subjective choice of the external user, or can be made automatically based on the reference information in the iteration record database. In many retrieval flows, this reference information can reuse the relations between object vectors established from user evaluation information in retrievals prior to this one, so as to optimize the selection; alternatively, a big data analysis tool (such as Hadoop) can be used to analyze the reference historical feedback vectors between object vectors in different environments, and these feedback vectors can further guide the optimal selection of each intermediate result.
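As an illustration of the automatic selection of intermediate results mentioned above, the following Python sketch picks the next layer's retrieval clues from the current results and optionally re-weights candidates with reference feedback; the feedback format and the scoring rule are assumptions made only to illustrate the idea, not a format defined by the patent.

    def select_intermediate_clues(results, clues_per_layer, reference_feedback=None):
        """Pick the next layer's retrieval clues from the current results.

        results: list of (feature_vector, distance) pairs from the last retrieval.
        reference_feedback: optional dict mapping a vector key to a positive weight
            derived from user evaluations or offline big-data analysis
            (a hypothetical format, not one fixed by the patent).
        Returns the clues_per_layer most promising feature vectors.
        """
        def score(item):
            vec, dist = item
            weight = 1.0
            if reference_feedback is not None:
                # Reuse relations established from earlier user evaluations.
                weight = max(reference_feedback.get(tuple(vec), 1.0), 1e-9)
            return dist / weight  # smaller score = better candidate
        return [vec for vec, _ in sorted(results, key=score)[:clues_per_layer]]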
Fig. 9 is a flowchart showing the main internal flow of the iterative retrieval engine 13 of the present invention. As shown in Fig. 9, after the start 901 and before a retrieval flow is initiated, the iterative retrieval engine 13 first presets the iterative retrieval strategy 902, which at least includes, but is not limited to, key configuration parameters such as the iteration depth and the number of candidate clues per iteration layer. After the user side initiates a retrieval flow, the iterative retrieval engine 13 first receives the feature vector of the initial retrieval clue from the user side 903. It then records the retrieval starting point and sends the feature vector of the retrieval clue to the corresponding retrieval server 10 (904). After the retrieval server 10 completes this retrieval, the engine receives the feature vector set of the retrieval result from the retrieval server 10 and can optionally push the result to the user side 905. Next, it is judged whether the current retrieval step has reached the maximum iterative retrieval depth 906. If the maximum iterative retrieval depth has been reached, then in step 910 the marks given by the user after the retrieval result is completed are recorded into the iteration record database 12, and it is then judged whether all candidate clues of each previous iteration step have been completed 911; if they have all been completed, the current iterative retrieval ends. If they have not all been completed, the flow falls back to the nearest uncompleted candidate clue on the iteration tree, which becomes the clue of the next iterative retrieval 912, and returns to step 903 to continue the iterative retrieval. If the judgment of step 906 is negative, the iteration record database 12 is queried and a recommended set of clue vectors for the next iterative retrieval is selected and determined 907. Then, the recommended set of clue vectors for the next iterative retrieval can optionally be marked on the user page 908, the user can select the clue vector set for the next iterative retrieval, retrieval is initiated from the nearest feature vector node on the iteration tree 909, and the flow returns to step 903 to continue the iterative retrieval.
Here, the nearest retrieval clue on the iteration tree can refer to the next retrieval clue in pre-order, in-order, or post-order on the iteration tree.
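A minimal Python sketch of such a traversal is given below; the binary node layout (clue, completed, left, right) is an illustrative assumption, since the patent does not fix the shape of the iteration tree.

    def next_uncompleted_clue(node, order="pre"):
        """Find the next uncompleted retrieval clue on the iteration tree.

        node: a tree node with attributes clue, completed, left and right
            (an assumed binary layout; the patent does not fix a node format).
        order: "pre", "in" or "post", the traversal orders named in the text.
        Returns the clue of the first uncompleted node in that order, or None.
        """
        if node is None:
            return None
        here = node.clue if not node.completed else None
        left = next_uncompleted_clue(node.left, order)
        right = next_uncompleted_clue(node.right, order)
        if order == "pre":
            candidates = (here, left, right)
        elif order == "in":
            candidates = (left, here, right)
        else:  # post-order
            candidates = (left, right, here)
        return next((c for c in candidates if c is not None), None)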
Fig. 10 is a diagram showing the main system flow of the present invention. First, the retrieval strategy is established in the iterative retrieval engine 13 (1001), which at least includes, but is not limited to, key configuration parameters such as the iteration depth and the number of candidate clues per iteration layer. After the retrieval flow is initiated, the monitoring terminal submits a retrieval image to the business server 11 (1002) and obtains possible feature vectors 1003; these two steps are optional in the actual business flow. The monitoring terminal then initiates the retrieval flow 1004, selects and submits the retrieval feature vector, and completes the initial retrieval; this retrieval signaling is forwarded through the business server 11 and the iterative retrieval engine 13 to the retrieval server 10. After this retrieval is completed, the monitoring terminal 7 receives the retrieval result 1005 through the business server 11 and the iterative retrieval engine 13. Then, based on feedback on the results, the monitoring terminal selects candidate pictures or feature vectors for iterative retrieval 1006 and continues to initiate iterative retrieval 1007. After the iterative retrieval engine determines the next retrieval candidates 1008, it updates the iteration record 1009 and unrolls the iterative retrieval process to the retrieval server layer by layer, repeatedly executing 1010 to 1012. If it is configured that the user can submit immediate feedback, this is done in step 1011. After the iterative retrieval is completed, the iterative retrieval engine 13 can query the historical records in the iteration record database 12 (1013 and 1014) and re-order the retrieval results. The retrieval results are then fed back 1015 to the monitoring terminal 7. The user can judge and mark the correctness of the retrieval results at the monitoring terminal 7 and submit the marked results 1016 to the iterative retrieval engine. Finally, these results are recorded into the iteration record database 12 (1017).
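The re-ordering of results against the iteration record history (steps 1013 to 1015) could, for example, look like the following Python sketch; the history format and the tie-breaking rule are assumptions made only to illustrate the idea.

    def reorder_results(results, history):
        """Re-rank the final retrieval results using the iteration record history.

        results: list of (item, distance) pairs accumulated over all iterations.
        history: dict mapping an item key to how many times users marked it as
            correct in earlier iterations (a hypothetical record format).
        Items confirmed more often are promoted; ties fall back to distance.
        """
        return sorted(results, key=lambda r: (-history.get(r[0], 0), r[1]))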
Fig. 11 is a diagram showing a typical application scenario and flow of the present invention. In this example, the typical architecture used in Fig. 2 is adopted, and the front-end camera maintenance and the analysis server 9 are both placed in the front-end server 2. The iterative retrieval engine 13 is supported in the business server 11, and the iteration record database 12 is supported in the retrieval server. The video content from a large number of cameras 1 is first processed by the analysis server 9, and the feature vectors of the results are then registered in the retrieval server 10. The iterative retrieval process and the candidate lists are presented to the user through the GUI provided by the business server 11 or the GUI of the monitoring terminal 7. In a typical iterative retrieval example, suppose the iteration depth is 2. The iterative retrieval depth and the number of candidates at each step then together determine the number of iterative retrievals, and the system progressively completes the iterative retrieval flow with or without the supervision of the user. The entire flow and the marking of the results are recorded in the iteration record database 12.
The embodiments and specific examples of the present invention have been described above, but the present invention is not limited to the above embodiments. In addition, various modifications that can be conceived by those skilled in the art without departing from the purport of the present invention and the scope of the meaning indicated by the wording of the claims are also included in the present invention.

Claims (8)

1. An iterative video image retrieval method, characterized in that a retrieval result is obtained through multiple iterations, comprising:
an iteration strategy setting step of setting the iteration depth and the number of retrieval clues per iteration layer;
a retrieval clue input step of inputting, from the user side, the feature vectors describing the features of the target object of the initial retrieval as a retrieval clue set, according to the number of retrieval clues set in said iteration strategy setting step;
a retrieval step of recording the retrieval starting point and sending the feature vector of each retrieval clue in the retrieval clue set to a retrieval server for retrieval;
an output step of receiving the feature vector set of the retrieval result from said retrieval server and outputting it to the user side;
an iteration depth determination step of judging whether the current retrieval step has reached the iteration depth set in said iteration strategy setting step;
an intermediate result determination step of, when said iteration depth determination step judges that the set iteration depth has not been reached, querying an iteration record database according to the number of retrieval clues set in said iteration strategy setting step, selecting and determining the retrieval clue set for the next iterative retrieval, and executing said retrieval step;
an iteration record database updating step of, when said iteration depth determination step judges that the set iteration depth has been reached, recording into the iteration record database the marks given by the user after the retrieval result is completed;
an iteration progress determination step of judging whether the retrieval of all retrieval clues in the retrieval clue set has been completed in the retrieval step of each iteration;
a retrieval clue updating step of, when said iteration progress determination step judges that the retrieval of all retrieval clues in the retrieval clue set has not been completed, taking the nearest uncompleted retrieval clue on the iteration tree as the retrieval clue of the next iterative retrieval and executing said retrieval step; and
an ending step of, when said iteration progress determination step judges that the retrieval of all retrieval clues in the retrieval clue set has been completed, ending the iterative video image retrieval process.
2. The iterative video image retrieval method according to claim 1, characterized in that,
in said intermediate result determination step, after the iteration record database is queried to select and determine the retrieval clue set for the next iterative retrieval, the selected retrieval clue set for the next iterative retrieval is output to the user side, and said retrieval step is executed using the retrieval clue set for the next iterative retrieval selected by the user.
3. The iterative video image retrieval method according to claim 2, characterized in that,
for the retrieval clues in the retrieval clue set, said retrieval step is executed starting from the next retrieval clue in pre-order, in-order, or post-order on the iteration tree.
4. The iterative video image retrieval method according to claim 1, characterized in that,
the feature vector of the target object is compared with the feature vectors in the feature database corresponding to the massive video content, the similarity between the feature vectors is calculated, and the feature vector set of the retrieval result is presented at the user side in order of similarity from high to low.
5. The iterative video image retrieval method according to claim 1, characterized in that,
in said intermediate result determination step, the retrieval clue set for the next iterative retrieval is selected and determined by the user, or is automatically selected and determined according to reference information in the iteration record database.
6. The iterative video image retrieval method according to claim 5, characterized in that,
said reference information is determined by reusing, for this retrieval, the relations between object vectors established on the basis of user evaluation information in previous retrievals.
7. The iterative video image retrieval method according to claim 5, characterized in that,
said reference information is determined from reference historical feedback vectors between object vectors in different environments obtained by using a big data analysis tool.
8. An iterative video image retrieval device, characterized in that a retrieval result is obtained through multiple iterations, comprising:
an iteration strategy setting unit that sets the iteration depth and the number of retrieval clues per iteration layer;
a retrieval clue input unit that inputs, from the user side, the feature vectors describing the features of the target object of the initial retrieval as a retrieval clue set, according to the number of retrieval clues set by said iteration strategy setting unit;
a retrieval unit that records the retrieval starting point and sends the feature vector of each retrieval clue in the retrieval clue set to a retrieval server for retrieval;
an output unit that receives the feature vector set of the retrieval result from said retrieval server and outputs it to the user side;
an iteration depth judging unit that judges whether the current retrieval unit has reached the iteration depth set by said iteration strategy setting unit;
an intermediate result determining unit that, when said iteration depth judging unit judges that the set iteration depth has not been reached, queries an iteration record database according to the number of retrieval clues set by said iteration strategy setting unit, selects and determines the retrieval clue set for the next iterative retrieval, and calls said retrieval unit;
an iteration record database updating unit that, when said iteration depth judging unit judges that the set iteration depth has been reached, records into the iteration record database the marks given by the user after the retrieval result is completed;
an iteration progress judging unit that judges whether the retrieval unit of each iteration has completed the retrieval of all retrieval clues in the retrieval clue set;
a retrieval clue updating unit that, when said iteration progress judging unit judges that the retrieval of all retrieval clues in the retrieval clue set has not been completed, takes the nearest uncompleted retrieval clue on the iteration tree as the retrieval clue of the next iterative retrieval and calls said retrieval unit; and
an ending unit that, when said iteration progress judging unit judges that the retrieval of all retrieval clues in the retrieval clue set has been completed, ends the iterative video image retrieval process.

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201410520736.1A 2014-09-30 2014-09-30 Iterative video image retrieval method and device

Publications (1)

Publication Number Publication Date
CN105528364A true CN105528364A (en) 2016-04-27

Family

ID=55770593

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201410520736.1A Pending CN105528364A (en) 2014-09-30 2014-09-30 Iterative video image retrieval method and device

Country Status (1)

Country Link
CN (1) CN105528364A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2004095254A1 (en) * 2003-03-21 2004-11-04 Seagate Technology Llc Semantically-aware, dynamic, window-based disc scheduling method and apparatus for better fulfilling application requirements
US20110302193A1 (en) * 2010-06-07 2011-12-08 Microsoft Corporation Approximation framework for direct optimization of information retrieval measures
US20130132377A1 (en) * 2010-08-26 2013-05-23 Zhe Lin Systems and Methods for Localized Bag-of-Features Retrieval
CN104036229A (en) * 2013-03-10 2014-09-10 江南大学 Regression-based active appearance model initialization method

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106023084A (en) * 2016-05-27 2016-10-12 浙江工业大学 Initial iteration control signal extraction method based on time scale transform
CN106023084B (en) * 2016-05-27 2019-07-26 浙江工业大学 First iterating control signal extracting method based on time scale transformation
CN111506751A (en) * 2020-04-20 2020-08-07 创景未来(北京)科技有限公司 Method and device for searching mechanical drawing

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication

Application publication date: 20160427