CN106326844A - Recording system for intelligently monitoring image, and application of recording system - Google Patents
- Publication number
- CN106326844A CN106326844A CN201610670642.1A CN201610670642A CN106326844A CN 106326844 A CN106326844 A CN 106326844A CN 201610670642 A CN201610670642 A CN 201610670642A CN 106326844 A CN106326844 A CN 106326844A
- Authority
- CN
- China
- Prior art keywords
- video
- shot
- camera
- represent
- saliency
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/41—Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
- G06V20/46—Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
Abstract
The invention provides a recording system for intelligent image monitoring. A server that records the movement tracks of positioners is equipped with a signal-receiving device and stores a floor plan of the rooms, which are divided by longitude and latitude lines. A camera group composed of multiple cameras is aimed at the grid-divided rooms, and the recorded footage is stored. Each of the positioners used to determine position carries a distinct identification number. The beneficial effect of the invention is that, from a positioner's movement track, the system quickly generates the footage showing that positioner, improving work efficiency.
Description
Technical field
The present invention relates to intelligent monitoring image recording, and specifically to a recording system that can quickly generate footage of a specified person, and to an application of the system.
Background technology
Existing kindergartens and nursery schools are fitted with cameras, but finding footage of a particular person over a given period usually requires manual visual inspection and judgment, which is extremely time-consuming.
Video summarization can remove segments that contain no salient semantic events and condense the segments of interest to the user, so that the main content of a long video is represented by a small set of segments. This is valuable for fast browsing and retrieval and for reducing storage demands. Most existing video summarization techniques automatically analyze the content of a single camera's video. However, when faced with the massive video produced by multiple cameras with overlapping fields of view, applying the prior art to each camera's video separately ignores the temporal and spatial correlations between the cameras' content and its repetition, so existing methods cannot effectively remove the redundant content across cameras.
Summary of the invention
To address the problems above, the present invention provides an intelligent monitoring image recording system that can quickly generate footage of a specified person, and an application thereof.
The object of the invention is achieved by the following scheme: an intelligent monitoring image recording system comprising several positioners for determining position, a server for recording positioner movement tracks, rooms divided by a grid of longitude and latitude lines, and a camera unit composed of multiple cameras. Each positioner is provided with a signal transmitter; the server is provided with a signal receiver and stores a floor plan of the grid-divided rooms; the camera unit composed of multiple cameras is aimed at the grid-divided rooms and its footage is saved; and the several positioners each carry a distinct identifier.
Further, the camera unit composed of multiple cameras covers the grid-divided rooms with no blind spots.
Further, the camera unit composed of multiple cameras comprises at least two groups of cameras installed facing each other, with the adjacent cameras of each group interleaved.
Further, each positioner is provided with a WIFI positioning module.
In an application of the intelligent monitoring image recording system, a positioner's identifier is entered into the server for recording positioner movement tracks; according to the movement track of the positioner with that identifier, the server selects the corresponding footage from the footage of the grid-divided rooms captured by the camera unit composed of multiple cameras.
Further, the movement track of a positioner through the grid-divided rooms is located and searched quickly by means of the grid cells into which the rooms are divided.
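As a concrete illustration of this grid-based lookup, the sketch below maps a positioner's recorded coordinates to a grid cell and then to the camera aimed at that cell. The grid geometry, camera assignment, and all names are illustrative assumptions, not part of the patent text.

```python
# Hypothetical sketch: positioner coordinates -> grid cell -> camera.

def cell_of(x: float, y: float, cell_w: float = 1.0, cell_h: float = 1.0):
    """Return the (column, row) grid cell containing point (x, y)."""
    return (int(x // cell_w), int(y // cell_h))

def footage_for_track(track, cell_to_camera, cell_w=1.0, cell_h=1.0):
    """For each (timestamp, x, y) fix of a positioner, look up the camera
    covering the corresponding grid cell, yielding (timestamp, camera_id)."""
    for t, x, y in track:
        cam = cell_to_camera.get(cell_of(x, y, cell_w, cell_h))
        if cam is not None:
            yield (t, cam)

# Example: a 2x2 grid of cells, one camera per cell.
cameras = {(0, 0): "cam-A", (1, 0): "cam-B", (0, 1): "cam-C", (1, 1): "cam-D"}
track = [(0, 0.2, 0.3), (1, 1.5, 0.4), (2, 1.7, 1.9)]
print(list(footage_for_track(track, cameras)))
# -> [(0, 'cam-A'), (1, 'cam-B'), (2, 'cam-D')]
```

Because each lookup is a constant-time dictionary access on the cell index, the search over a track is linear in the number of position fixes, which is what makes the grid-based search fast.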
Further, the application comprises: (1) Preprocessing: shot segmentation, key-frame extraction, and saliency representation. Using visual image features and unsupervised clustering, the video captured by each camera is segmented into shots and key frames are extracted. Background modeling and motion-region extraction measure the motion intensity of each shot, and shots with sufficiently pronounced motion are retained as candidates for the video summary. The color, texture, and shape features of the video key frames are extracted and mathematical models are built to compute the importance of each individual feature of a candidate shot; the feature importances are finally fused linearly to form the shot-saliency representation;
(2) Cross-camera shot network construction and analysis: the cross-camera shot network characterizes the temporal and spatial associations among the candidate shots extracted from each video. Each node Sij represents the j-th shot of the video captured by the i-th camera; the value of a node represents its shot saliency; an edge between two nodes indicates that they are associated, with the association strength given by the similarity between the two shots. On the basis of the shot segmentation and saliency representation above, the nodes of the network and their importances are obtained, and the key problem of cross-camera shot network construction and analysis reduces to the discovery of groups of similar shots, realized in the following two steps:
Step 1: computing the association strength between nodes. Two classes of similarity between shots are computed and fused to measure their overall similarity: a. temporal correlation: shots that are temporally close in different videos are more likely to contain visually similar content, so their similarity is higher; b. visual similarity: shots with similar low-level visual descriptions are more similar;
Step 2: decomposing the cross-camera shot network. A top-down network-community analysis procedure is used to decompose the cross-camera shot network;
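The two steps above can be sketched as follows. Nodes Sij are linked with a strength that fuses temporal proximity and visual similarity, and groups of similar shots are then recovered. Connected components over thresholded edges stand in for the patent's top-down community analysis, whose exact procedure is not given; the weights, time scale, 1-D visual descriptor, and threshold are all assumed values.

```python
# Hypothetical sketch of step (2): fused shot similarity + grouping.
from itertools import combinations

def association(a, b, w_time=0.5, w_visual=0.5, time_scale=10.0):
    """Fused similarity of two shots a, b = (camera, start_time, feature)."""
    temporal = max(0.0, 1.0 - abs(a[1] - b[1]) / time_scale)
    visual = 1.0 - min(1.0, abs(a[2] - b[2]))   # toy 1-D visual descriptor
    return w_time * temporal + w_visual * visual

def shot_groups(shots, threshold=0.7):
    """Group shots whose pairwise association exceeds the threshold
    (union-find over thresholded edges)."""
    parent = list(range(len(shots)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in combinations(range(len(shots)), 2):
        if association(shots[i], shots[j]) > threshold:
            parent[find(i)] = find(j)
    groups = {}
    for i in range(len(shots)):
        groups.setdefault(find(i), []).append(i)
    return sorted(sorted(g) for g in groups.values())

# The first two shots are near-simultaneous and visually alike
# (likely the same event seen by two cameras); the third is unrelated.
shots = [("cam1", 0.0, 0.10), ("cam2", 1.0, 0.15), ("cam1", 50.0, 0.90)]
print(shot_groups(shots))   # -> [[0, 1], [2]]
```

Within each recovered group, one representative shot can later stand in for all the redundant cross-camera copies, which is how the redundancy removal described in the background is achieved.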
(3) User-driven cross-camera summary generation: the video summary is extracted as follows. For a shot group Cs containing n shots, whether each shot appears in the final summary is encoded by a label vector x = {x1, ..., xi, ..., xn}, where xi = 1 means shot i is retained and xi = 0 means it is removed. A multi-objective optimization objective is then defined that trades off the total summary length against the video-content saliency, where Fi denotes the frame count of shot i and Si its saliency; Fmax and Smin are the maximum-length and minimum-saliency limits imposed when the summary is generated; N(·) denotes a normalization operation, here linear vector normalization; and the coefficients αi, freely specified by the user, adapt the dynamic summary to different demands. This multi-objective optimization is a typical integer programming problem and is solved with a dynamic programming algorithm.
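A sketch of the 0/1 selection in step (3) follows. The patent's exact objective function is not reproduced in the source, so this assumes one concrete instantiation: maximize the total saliency sum(xi*Si) subject to the length cap sum(xi*Fi) <= Fmax. That is a 0/1 knapsack, consistent with the integer-programming framing and solvable by dynamic programming; the shot data are toy values.

```python
# Assumed instantiation of step (3): knapsack-style shot selection by DP.

def select_shots(frames, saliency, f_max):
    """Return the 0/1 label vector x maximizing total saliency
    subject to a budget of at most f_max frames."""
    n = len(frames)
    # best[c] = (total saliency, chosen shot indices) using at most c frames
    best = [(0.0, [])] * (f_max + 1)
    for i in range(n):
        for c in range(f_max, frames[i] - 1, -1):   # descending: 0/1 semantics
            cand_val = best[c - frames[i]][0] + saliency[i]
            if cand_val > best[c][0]:
                best[c] = (cand_val, best[c - frames[i]][1] + [i])
    chosen = set(best[f_max][1])
    return [1 if i in chosen else 0 for i in range(n)]

frames = [30, 20, 50, 10]         # F_i: frame count of each shot
saliency = [0.9, 0.6, 0.8, 0.3]   # S_i: saliency of each shot
x = select_shots(frames, saliency, f_max=60)
print(x)                          # -> [1, 1, 0, 1]: shots 0, 1, 3 fit in 60 frames
```

A minimum-saliency constraint Smin or user weights αi would enter the same DP as an extra feasibility check or a reweighting of the saliency terms; they are omitted here to keep the sketch short.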
The beneficial effect of the invention is that, from a positioner's movement track, the footage showing that positioner is generated quickly, improving work efficiency.
Brief description of the drawings
Fig. 1 is a structural schematic diagram of the present invention.
Detailed description of the invention
The invention is further described below with reference to specific embodiments and the accompanying drawings:
Referring to Fig. 1, the present invention provides an intelligent monitoring image recording system comprising several positioners 2 for determining position, a server 1 for recording positioner movement tracks, rooms 4 divided by a grid of longitude and latitude lines, and a camera unit 3 composed of multiple cameras. Each positioner 2 is provided with a signal transmitter; the server 1 is provided with a signal receiver and stores a floor plan of the grid-divided rooms 4; the camera unit 3 composed of multiple cameras is aimed at the grid-divided rooms 4 and its footage is saved; and the several positioners 2 each carry a distinct identifier.
In the intelligent monitoring image recording system, the several positioners 2 for determining position are placed in the grid-divided rooms 4.
In the system, the camera unit 3 composed of multiple cameras covers the grid-divided rooms 4 with no blind spots.
In the system, the camera unit 3 composed of multiple cameras comprises at least two groups of cameras installed facing each other, with the adjacent cameras of each group interleaved.
In the system, each of the several positioners 2 for determining position is provided with a WIFI positioning module.
In an application of the system, a positioner's identifier is entered into the server 1 for recording positioner movement tracks; according to the movement track of the positioner 2 with that identifier, the server 1 selects the corresponding footage from the footage of the grid-divided rooms 4 captured by the camera unit 3 composed of multiple cameras.
In the application, the movement tracks of the several positioners 2 through the grid-divided rooms 4 are located and searched quickly by means of the grid cells into which the rooms are divided.
In the application: (1) Preprocessing: shot segmentation, key-frame extraction, and saliency representation. Using visual image features and unsupervised clustering, the video captured by each camera is segmented into shots and key frames are extracted. Background modeling and motion-region extraction measure the motion intensity of each shot, and shots with sufficiently pronounced motion are retained as candidates for the video summary. The color, texture, and shape features of the video key frames are extracted and mathematical models are built to compute the importance of each individual feature of a candidate shot; the feature importances are finally fused linearly to form the shot-saliency representation;
(2) Cross-camera shot network construction and analysis: the cross-camera shot network characterizes the temporal and spatial associations among the candidate shots extracted from each video. Each node Sij represents the j-th shot of the video captured by the i-th camera; the value of a node represents its shot saliency; an edge between two nodes indicates that they are associated, with the association strength given by the similarity between the two shots. On the basis of the shot segmentation and saliency representation above, the nodes of the network and their importances are obtained, and the key problem of cross-camera shot network construction and analysis reduces to the discovery of groups of similar shots, realized in the following two steps:
Step 1: computing the association strength between nodes. Two classes of similarity between shots are computed and fused to measure their overall similarity: a. temporal correlation: shots that are temporally close in different videos are more likely to contain visually similar content, so their similarity is higher; b. visual similarity: shots with similar low-level visual descriptions are more similar;
Step 2: decomposing the cross-camera shot network. A top-down network-community analysis procedure is used to decompose the cross-camera shot network;
(3) User-driven cross-camera summary generation: the video summary is extracted as follows. For a shot group Cs containing n shots, whether each shot appears in the final summary is encoded by a label vector x = {x1, ..., xi, ..., xn}, where xi = 1 means shot i is retained and xi = 0 means it is removed. A multi-objective optimization objective is then defined that trades off the total summary length against the video-content saliency, where Fi denotes the frame count of shot i and Si its saliency; Fmax and Smin are the maximum-length and minimum-saliency limits imposed when the summary is generated; N(·) denotes a normalization operation, here linear vector normalization; and the coefficients αi, freely specified by the user, adapt the dynamic summary to different demands. This multi-objective optimization is a typical integer programming problem and is solved with a dynamic programming algorithm.
Although the present invention has been shown and described with reference to preferred embodiments, those of ordinary skill in the art will appreciate that it is not limited to the embodiments described above, and that various changes in form and detail may be made within the scope of the claims.
Claims (7)
1. An intelligent monitoring image recording system, characterized by comprising: several positioners for determining position; a server for recording positioner movement tracks; rooms divided by a grid of longitude and latitude lines; and a camera unit composed of multiple cameras; wherein each positioner for determining position is provided with a signal transmitter; the server for recording positioner movement tracks is provided with a signal receiver and stores a floor plan of the grid-divided rooms; the camera unit composed of multiple cameras is aimed at the grid-divided rooms and its footage is saved; and the several positioners for determining position each carry a distinct identifier.
2. The intelligent monitoring image recording system according to claim 1, characterized in that: the camera unit composed of multiple cameras covers the grid-divided rooms with no blind spots.
3. The intelligent monitoring image recording system according to claim 1, characterized in that: the camera unit composed of multiple cameras comprises at least two groups of cameras installed facing each other, with the adjacent cameras of each group interleaved.
4. The intelligent monitoring image recording system according to claim 1, characterized in that: each positioner is provided with a WIFI positioning module.
5. An application of the intelligent monitoring image recording system according to claim 1, characterized in that: a positioner's identifier is entered into the server for recording positioner movement tracks; according to the movement track of the positioner with that identifier, the server selects the corresponding footage from the footage of the grid-divided rooms captured by the camera unit composed of multiple cameras.
6. The application of the intelligent monitoring image recording system according to claim 5, characterized in that: the movement track of a positioner through the grid-divided rooms is located and searched quickly by means of the grid cells into which the rooms are divided.
7. The application of the intelligent monitoring image recording system according to claim 6, characterized in that:
(1) preprocessing: shot segmentation, key-frame extraction, and saliency representation: using visual image features and unsupervised clustering, the video captured by each camera is segmented into shots and key frames are extracted; background modeling and motion-region extraction measure the motion intensity of each shot, and shots with sufficiently pronounced motion are retained as candidates for the video summary; the color, texture, and shape features of the video key frames are extracted and mathematical models are built to compute the importance of each individual feature of a candidate shot, and the feature importances are finally fused linearly to form the shot-saliency representation;
(2) cross-camera shot network construction and analysis: the cross-camera shot network characterizes the temporal and spatial associations among the candidate shots extracted from each video, wherein each node Sij represents the j-th shot of the video captured by the i-th camera, the value of a node represents its shot saliency, an edge between two nodes indicates that they are associated, and the association strength is measured by the similarity between the two shots; on the basis of the shot segmentation and saliency representation above, the nodes of the network and their importances are obtained, and the key problem of cross-camera shot network construction and analysis reduces to the discovery of groups of similar shots, realized in the following two steps:
step 1: computing the association strength between nodes: two classes of similarity between shots are computed and fused to measure their overall similarity: a. temporal correlation: shots that are temporally close in different videos are more likely to contain visually similar content, so their similarity is higher; b. visual similarity: shots with similar low-level visual descriptions are more similar;
step 2: decomposing the cross-camera shot network: a top-down network-community analysis procedure is used to decompose the cross-camera shot network;
(3) user-driven cross-camera summary generation: for a shot group Cs containing n shots, whether each shot appears in the final summary is encoded by a label vector x = {x1, ..., xi, ..., xn}, where xi = 1 means shot i is retained and xi = 0 means it is removed; a multi-objective optimization objective is defined that trades off the total summary length against the video-content saliency, where Fi denotes the frame count of shot i and Si its saliency, Fmax and Smin are the maximum-length and minimum-saliency limits imposed when the summary is generated, N(·) denotes a normalization operation, here linear vector normalization, and the coefficients αi, freely specified by the user, adapt the dynamic summary to different demands; this multi-objective optimization is a typical integer programming problem and is solved with a dynamic programming algorithm.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610670642.1A CN106326844A (en) | 2016-08-15 | 2016-08-15 | Recording system for intelligently monitoring image, and application of recording system |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201610670642.1A CN106326844A (en) | 2016-08-15 | 2016-08-15 | Recording system for intelligently monitoring image, and application of recording system |
Publications (1)
Publication Number | Publication Date |
---|---|
CN106326844A true CN106326844A (en) | 2017-01-11 |
Family
ID=57740976
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201610670642.1A Pending CN106326844A (en) | 2016-08-15 | 2016-08-15 | Recording system for intelligently monitoring image, and application of recording system |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN106326844A (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110366050A (en) * | 2018-04-10 | 2019-10-22 | 北京搜狗科技发展有限公司 | Processing method, device, electronic equipment and the storage medium of video data |
Citations (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101651784A (en) * | 2009-09-24 | 2010-02-17 | 上海交通大学 | Video tracking system of panoramic pan-tilt-zoom camera |
CN102184242A (en) * | 2011-05-16 | 2011-09-14 | 天津大学 | Cross-camera video abstract extracting method |
CN102819528A (en) * | 2011-06-10 | 2012-12-12 | 中国电信股份有限公司 | Method and device for generating video abstraction |
CN102868875A (en) * | 2012-09-24 | 2013-01-09 | 天津市亚安科技股份有限公司 | Multidirectional early-warning positioning and automatic tracking and monitoring device for monitoring area |
CN104210500A (en) * | 2014-09-03 | 2014-12-17 | 中国铁道科学研究院 | Overhead lines suspension state detecting and monitoring device and working method thereof |
US20160105617A1 (en) * | 2014-07-07 | 2016-04-14 | Google Inc. | Method and System for Performing Client-Side Zooming of a Remote Video Feed |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| C10 | Entry into substantive examination | |
| SE01 | Entry into force of request for substantive examination | |
| RJ01 | Rejection of invention patent application after publication | Application publication date: 20170111 |