WO2016119528A1 - Intelligent processing method and system for video data - Google Patents

Intelligent processing method and system for video data

Info

Publication number
WO2016119528A1
Authority
WO
WIPO (PCT)
Prior art keywords
data
video data
time
intelligent
retrieval
Prior art date
Application number
PCT/CN2015/096817
Other languages
English (en)
French (fr)
Inventor
王伟
林起芊
汪渭春
Original Assignee
杭州海康威视数字技术股份有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 杭州海康威视数字技术股份有限公司 filed Critical 杭州海康威视数字技术股份有限公司
Priority to US15/537,462 priority Critical patent/US10178430B2/en
Priority to EP15879726.6A priority patent/EP3253042B1/en
Publication of WO2016119528A1 publication Critical patent/WO2016119528A1/zh

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/44008Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics in the video stream
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/21Server components or server architectures
    • H04N21/218Source of audio or video content, e.g. local disk arrays
    • H04N21/2187Live feed
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/234Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs
    • H04N21/23418Processing of video elementary streams, e.g. splicing of video streams or manipulating encoded video stream scene graphs involving operations for analysing video streams, e.g. detecting features or characteristics
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/235Processing of additional data, e.g. scrambling of additional data or processing content descriptors
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/20Servers specifically adapted for the distribution of content, e.g. VOD servers; Operations thereof
    • H04N21/23Processing of content or additional data; Elementary server operations; Server middleware
    • H04N21/236Assembling of a multiplex stream, e.g. transport stream, by combining a video stream with other content or additional data, e.g. inserting a URL [Uniform Resource Locator] into a video stream, multiplexing software data into a video stream; Remultiplexing of multiplex streams; Insertion of stuffing bits into the multiplex stream, e.g. to obtain a constant bit-rate; Assembling of a packetised elementary stream
    • H04N21/23614Multiplexing of additional data and video streams
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N21/00Selective content distribution, e.g. interactive television or video on demand [VOD]
    • H04N21/40Client devices specifically adapted for the reception of or interaction with content, e.g. set-top-box [STB]; Operations thereof
    • H04N21/43Processing of content or additional data, e.g. demultiplexing additional data from a digital video stream; Elementary client operations, e.g. monitoring of home network or synchronising decoder's clock; Client middleware
    • H04N21/44Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs
    • H04N21/4408Processing of video elementary streams, e.g. splicing a video clip retrieved from local storage with an incoming video stream or rendering scenes according to encoded video stream scene graphs involving video stream encryption, e.g. re-encrypting a decrypted video stream for redistribution in a home network
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/60Control of cameras or camera modules
    • H04N23/68Control of cameras or camera modules for stable pick-up of the scene, e.g. compensating for camera body vibrations
    • H04N23/681Motion detection
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/765Interface circuits between an apparatus for recording and another apparatus
    • H04N5/77Interface circuits between an apparatus for recording and another apparatus between a recording apparatus and a television camera
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/76Television signal recording
    • H04N5/91Television signal processing therefor
    • H04N5/92Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N5/9201Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving the multiplexing of an additional signal and the video signal
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/181Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a plurality of remote sources
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N7/183Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast for receiving images from a single remote source
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/292Multi-camera tracking
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00Burglar, theft or intruder alarms
    • G08B13/18Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • GPHYSICS
    • G08SIGNALLING
    • G08BSIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B21/00Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B21/18Status alarms
    • G08B21/24Reminder alarms, e.g. anti-loss alarms

Definitions

  • the present application relates to video processing technologies, and in particular, to an intelligent processing method and system for video data.
  • each network camera collects the video data, and sends the video stream to the network storage server for storage.
  • By setting feature requirements, the intelligent information can be combined to extract the video data that meets those requirements for playback, thereby improving video search efficiency and implementing intelligent processing.
  • Such feature requirements concern feature information of moving objects in the image, for example, all video data in which a moving vehicle appears.
  • The existing intelligent processing scheme for video data is implemented in the following manner:
  • A separate video analysis server is configured, on which feature rules are set, such as the presence of moving vehicles in the video images.
  • After each IPC stores its video stream to the storage server, the video analysis server periodically reads the video data from the storage server for analysis, and generates and stores smart information when a feature rule is satisfied; the smart information records the parameter of the corresponding video data that satisfies the feature rule. Then, when video data needs to be played back in combination with a feature rule, the video data that satisfies the requirement can be determined from the smart information and played.
  • The existing solution adopts a post-analysis mode: only after the IPC stores the video stream to the storage server is the intelligent data analyzed, and only periodically, so only the historical stream can be processed.
  • IPC data storage also has a certain periodicity rather than being real-time, so video data that has been collected by an IPC but not yet stored to the storage server cannot be analyzed. Moreover, the analysis of all IPC video streams is done by a single independent video analysis server, whose workload is huge and time-consuming, which increases the technical difficulty of the video analysis server. It can be seen that existing solutions for storing video data and intelligent data suffer from poor timeliness.
  • the present application provides an intelligent processing method for video data, which can perform intelligent processing on the collected video data in real time.
  • the application provides an intelligent processing system for video data, which can intelligently process the collected video data in real time.
  • the smart camera performs video data collection, and analyzes the collected video data in real time. If the alarm rule is met, the smart data is generated, and the smart data includes the encoder identifier and the motion track information;
  • the smart camera encapsulates the video data and the intelligent data into a data stream and sends the data to the frame analysis component in the cloud storage system;
  • the frame analysis component decapsulates the received data stream, obtains video data and intelligent data, and stores the video data and the smart data in the storage component respectively;
  • the storage component sends the storage address information of the video data and the intelligent data to the index server for recording separately.
  • An intelligent processing system for video data comprising a smart camera, a frame analysis component, an index server and a plurality of storage components;
  • The smart camera sets an alarm rule; the smart camera performs video data collection and analyzes the collected video data in real time. If the alarm rule is met, smart data is generated, including the encoder identifier and the motion track information; the video data and the smart data are encapsulated together into a data stream and sent to the frame analysis component;
  • the frame analysis component decapsulates the received data stream, obtains video data and intelligent data, and stores video data and smart data in the storage component respectively;
  • the storage component sends the storage address information of the video data and the smart data to the index server after storing the video data and the smart data;
  • the index server separately records the received storage address information about the video data and the smart data.
  • the smart camera sets an alarm rule
  • the smart camera performs video data collection, and analyzes the collected video data in real time, and if the alarm rule is met, generates smart data.
  • The intelligent data includes the encoder identifier and the motion track information; the smart camera encapsulates the video data and the smart data into a data stream and sends it to the frame analysis component in the cloud storage system; the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data, and stores the video data and the smart data in the storage component respectively; the storage component sends the storage address information of the video data and the intelligent data to the index server for recording separately.
  • the application analyzes the collected video data in real time by the smart camera, and uses the cloud storage method to send the video data together with the analyzed intelligent data to the cloud storage system for storage separately; thereby realizing intelligent processing of the collected video data.
  • The intelligent data processing done by an independent video analysis server in the prior art is distributed across the smart cameras, which is fast and greatly reduces implementation difficulty.
  • FIG. 1 is a schematic flowchart of an intelligent processing method for video data of the present application
  • FIG. 2 is a schematic diagram of an example of video data and intelligent data stored in a storage component of the present application
  • FIG. 3 is a schematic diagram of an example of an intelligent storage method for video data of the present application.
  • FIG. 4 is a flowchart of an example of a smart playback method for video data of the present application
  • FIG. 5 is a schematic diagram of target cross-line retrieval in intelligent playback of the present application.
  • FIG. 6 is a schematic diagram of target intrusion area retrieval in intelligent playback of the present application.
  • FIG. 7 is a schematic diagram of an example of target-item-left-behind retrieval in smart playback of the present application.
  • FIG. 8 is a schematic structural diagram of an intelligent processing system for video data of the present application.
  • The smart camera analyzes the collected video data in real time and, using cloud storage, sends the video data together with the analyzed intelligent data to the cloud storage system for separate storage, thereby realizing intelligent processing of the collected video data.
  • FIG. 1 is a schematic flowchart of an intelligent processing method for video data of the present application.
  • In this application, the cameras performing video data collection are smart cameras, whose functions can be extended according to their characteristics.
  • According to the alarm rules, the smart camera analyzes the collected video data in real time to generate intelligent data.
  • the process of Figure 1 can include the following steps:
  • Step 101 The smart camera performs video data collection, and analyzes the collected video data in real time. If the alarm rule is met, the smart data is generated, and the smart data includes the encoder identifier and the motion track information.
  • the alarm rule includes at least one of target motion information and target feature information, and the target motion information includes position range information and motion change information.
  • The target motion information includes location range information and motion change information; the location range information is, specifically, a circular area, a square area, or the like determined within the video monitoring range, and the motion change information is, specifically, a target crossing a line, a target intrusion, a target item left behind, and so on.
  • the alarm rules may also include target feature information such as a person's gender, age range, eye spacing, wearing glasses, etc., the color and inclination angle of the license plate, and the color of the vehicle body, as needed.
  • the encoder identifier is the smart camera identifier
  • the motion track information is the motion track information that satisfies the target of the alarm rule, that is, the motion track information in the position range.
  • the position range information in the target motion information is a square area, denoted by A
  • the motion change information is that the moving target enters A from outside A
  • the motion trajectory information is the motion coordinate information within position range A.
  • the content included in the alarm rule can be set as required.
  • The generated smart data may also include a target feature parameter; for example, if the target feature information is the person's gender, the generated intelligent data further includes a target feature parameter: a value of 0 indicates the intruding target is female, and otherwise the intruding target is male.
  • Analyzing video data to identify motion information and feature information in the video image is existing technology; for example, the motion trajectory of a target and the feature information of a moving target can be identified, including a person's gender, age, eye spacing, and whether glasses are worn, as well as a license plate's color and inclination angle and the color of the vehicle body.
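As an illustrative sketch (not part of the patent text), the alarm rule and the generated smart data described above might be modeled as follows; every field name here is an assumption:

```python
from dataclasses import dataclass
from typing import List, Optional, Tuple

# Hypothetical model of the smart data described above; field names are
# illustrative, not taken from the patent.
@dataclass
class SmartData:
    encoder_id: str                       # identifies the smart camera
    track: List[Tuple[float, float]]      # motion coordinates inside the position range
    feature_param: Optional[int] = None   # e.g. 0 = female, otherwise male intruding target

# An alarm rule combining target motion information and target feature information.
alarm_rule = {
    "position_range": {"shape": "square", "corners": [(0, 0), (100, 100)]},
    "motion_change": "target_enters_area",  # e.g. cross-line, intrusion, item left behind
    "target_features": {"gender": "any"},
}
```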
  • Step 102 The smart camera encapsulates the video data and the smart data into a data stream and sends the data to the frame analysis component in the cloud storage system.
  • the video data collected in real time and the intelligent data generated in real time are encapsulated together to obtain a data stream (PS, Program Stream).
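The multiplexing idea can be illustrated with a hypothetical tag-length-value framing; the patent actually uses a PS (Program Stream) container, so this format is only a stand-in for the concept of carrying video frames and smart data in one stream:

```python
import struct

# Illustrative framing: each payload is tagged so the frame analysis component
# can later demultiplex video frames from smart data. Purely hypothetical;
# the real system encapsulates both into a Program Stream.
VIDEO, SMART = 0x01, 0x02

def pack_frame(kind: int, payload: bytes) -> bytes:
    # 1-byte type tag + 4-byte big-endian payload length + payload
    return struct.pack(">BI", kind, len(payload)) + payload

def unpack_stream(stream: bytes):
    frames, off = [], 0
    while off < len(stream):
        kind, length = struct.unpack_from(">BI", stream, off)
        off += 5
        frames.append((kind, stream[off:off + length]))
        off += length
    return frames
```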
  • Step 103 The frame analysis component decapsulates the received data stream to obtain video data and intelligent data, and stores the video data and the smart data in the storage component.
  • the video data and the intelligent data are distributed and stored in each storage component of the cloud storage system in the form of data blocks.
  • the usage status of each data block in all storage components is stored in the index server.
  • When there is data to be written, the frame analysis component first requests an idle data block from the index server; the index server selects a storage component according to a distributed storage strategy, then selects a suitable free data block on that storage component, and feeds the selected block's address information back to the frame analysis component.
  • the frame analysis component writes the video data and the smart data into the corresponding data blocks according to the data block address information.
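A minimal sketch of this block-allocation handshake; note the "component with the most free blocks" strategy is an assumption, since the patent only says "a certain distributed storage strategy":

```python
# The index server tracks free data blocks per storage component and hands
# one out on request; the returned (component, block) address is what the
# frame analysis component writes into.
class IndexServer:
    def __init__(self, free_blocks):
        # free_blocks: {component_id: set of free block ids}
        self.free = free_blocks

    def allocate(self):
        # Assumed strategy: pick the component with the most free blocks.
        comp = max(self.free, key=lambda c: len(self.free[c]))
        block = self.free[comp].pop()
        return comp, block  # fed back to the frame analysis component
```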
  • FIG. 2 is a schematic diagram of video data and intelligent data stored in a storage component of the present application.
  • the dotted block is a video data block
  • the solid line block is an intelligent data block.
  • Step 104 The storage component sends the storage address information of the video data and the smart data to the index server for recording separately.
  • In the index server, the storage address information about video data and the storage address information about smart data are stored separately.
  • The storage address information about the video data includes address information, an encoder identifier (ID), a time point, and the like; the encoder identifier (ID) represents the corresponding smart camera, and the time point indicates the time corresponding to the video data.
  • the storage address information about the smart data also includes address information, an encoder identifier (ID), a time point, and the like.
  • the address information includes storage component information and data block information.
  • The time point represents the time when the intelligent data is stored; specifically, when the storage component stores the smart data, it also tags the stored smart data with a local timestamp at storage time.
  • FIG. 3 is a schematic diagram of an example of an intelligent storage method for video data of the present application.
  • An example of performing video data and intelligent data storage is provided.
  • A user accesses the smart IPC webpage and sets alarm rules for the smart IPC through the intelligent IPC platform; the smart IPC stores the alarm rules, and then performs collection and analysis according to them for data storage.
  • the application sets an alarm rule on the smart camera, specifically: the smart camera performs video data collection, analyzes the collected video data in real time, and if the alarm rule is met, generates intelligent data, where the intelligent data includes the encoder identifier and the motion track information.
  • The smart camera encapsulates the video data and the intelligent data into a data stream and sends it to the frame analysis component in the cloud storage system; the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data, and stores them respectively in the storage component; the storage component sends the storage address information of the video data and the intelligent data to the index server for recording separately.
  • The application analyzes the collected video data in real time on the smart camera and, using cloud storage, sends the video data together with the analyzed intelligent data to the cloud storage system for separate storage, thereby realizing intelligent processing of the collected video data.
  • The intelligent data processing done by an independent video analysis server in the prior art is distributed across the smart cameras, which is fast and greatly reduces implementation difficulty.
  • the video data and the smart data are stored, the video data can be played back as needed; and during the playback process, the video data that meets the condition requirements can be quickly extracted and played based on the intelligent data.
  • FIG. 4 is an example of a flowchart of an intelligent playback method for video data of the present application, which may include the following steps:
  • Step 401 The platform server receives the playback request and sends the request to the index server, where the playback request includes an encoder identifier and a playback time range.
  • the platform server provides external services such as indexing and playback of video data.
  • The user accesses the platform server to enter a playback request.
  • Step 402 The index server queries the storage address information of the corresponding video data according to the playback request, and sends an extraction request to the corresponding storage component according to the storage address information.
  • the storage address information that satisfies the condition can be queried.
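A minimal sketch of this lookup; the record layout is an assumption based on the fields listed earlier (address information, encoder ID, time point):

```python
# Hypothetical index lookup for step 402: select records whose encoder ID
# matches and whose time point falls within the requested playback range.
video_index = {
    ("cam-01", 100): {"component": "store-3", "block": 42},
    ("cam-01", 160): {"component": "store-3", "block": 43},
    ("cam-02", 120): {"component": "store-7", "block": 9},
}

def query_index(index, encoder_id, t_start, t_end):
    return [addr for (enc, t), addr in sorted(index.items())
            if enc == encoder_id and t_start <= t <= t_end]
```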
  • Step 403 The storage component reads the corresponding video data according to the extraction request, and sends the corresponding video data to the platform server.
  • the extraction request includes address information, and after receiving the extraction request, the storage component reads the corresponding video data according to the address information.
  • Step 404 The platform server plays the video data, receives the retrieval task during the playing process, and sends the retrieval task to the index server, where the retrieval task includes an encoder identifier, a retrieval time range, and a retrieval rule, where the retrieval rule includes target motion information.
  • the target motion information includes location range information and motion change information.
  • The user can input the retrieval task as needed, and there are various input manners: for example, entering the rule setting interface and inputting the contents of the retrieval task according to the prompts. The retrieval task can also be input by combining on-screen drawing with interface settings; specifically, the position range information can be drawn on the screen with a finger, a stylus, or a mouse. Two implementation modes follow:
  • Method 1: during the playing process, the location range information drawn by the user on the pause screen is received, and the motion change information input at the rule setting interface is received.
  • Method 2: during the playing process, the location range information drawn by the user on the pause screen is received, along with the motion change information and target feature information input on the rule setting interface; the motion change information includes target cross-line information, target intrusion information, target-item-left-behind information, or the like.
  • Step 405 The index server queries the storage address information of the corresponding smart data according to the encoder identifier and the retrieval time range in the retrieval task, and sends a retrieval request to the corresponding storage component according to the storage address information.
  • Step 406 The storage component receives the retrieval request, reads the corresponding intelligent data, and determines a time point that satisfies the retrieval rule according to the read intelligent data.
  • the search request includes address information, and the corresponding smart data can be read according to the address information.
  • the intelligent data includes corresponding time points, and the intelligent data satisfying the search rule is determined, and then the time point at which the search rule is satisfied can be determined.
  • the point in time at which the requirement is met can be obtained directly from the local timestamp of the intelligent data.
  • In that case the time point is simply the local timestamp. It is also possible to determine the time point by combining the relative time in the intelligent data with the local timestamp, specifically:
  • the generated intelligent data also includes relative time.
  • The acquisition of the relative time includes: denoting the time at which the intelligent data is generated as T1 and the time at which the corresponding video data is collected as T2, and taking the difference between T1 and T2 as the relative time; generally, for a given smart camera, the relative time is a fixed value.
  • When storing intelligent data, the storage component also tags the stored smart data with a local timestamp at storage time. Determining, according to the read intelligent data, a time point that satisfies the retrieval rule includes:
  • Determining, from the read intelligent data, the intelligent data satisfying the retrieval rule; extracting the relative time and the local timestamp from that intelligent data; and adding the relative time to the local timestamp to obtain an absolute time point, which is the determined time point that satisfies the retrieval rule.
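This time reconstruction can be sketched directly from the definitions above (T1, T2, and the storage-side local timestamp):

```python
# The relative time is the camera-side generation time T1 of the smart data
# minus the collection time T2 of the corresponding video; the absolute time
# point is the storage-side local timestamp plus that relative time.
def relative_time(t1_generated: float, t2_collected: float) -> float:
    return t1_generated - t2_collected

def absolute_time(local_timestamp: float, rel: float) -> float:
    return local_timestamp + rel
```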
  • the point in time at which the retrieval rule is satisfied is determined based on the read intelligent data, which may be specifically performed by an intelligent computing component in the storage component.
  • Example 1 the target cross-line:
  • the thick line is the user's drawing on the playback screen.
  • The search rule: the moving target moves from the left side of A1A2 to the right side and passes through B1B2; the thin line in the figure is the alarm rule: the moving target moves from the left side of M1M2 to the right side.
  • The intelligent computing component performs geometric calculation combined with the motion trajectory information contained in the intelligent data, and determines whether a target that satisfies the alarm rule (whose motion coordinates can be learned from the motion trajectory information) also satisfies the retrieval rule; if so, the time point in the corresponding intelligent data is extracted and step 407 is performed; otherwise, no time point is extracted.
  • When the user draws on the playback screen, only the line for the search rule is displayed; in FIG. 5, the lines of both the search rule and the alarm rule are shown for ease of understanding. The figure shows an example of a target U1 crossing the line.
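A minimal sketch of the geometric calculation in Example 1, under the simplifying assumption of an infinite line: a track crosses B1B2 if two consecutive track points lie on opposite sides of the line through B1 and B2. A full implementation would also confine the crossing to the segment and check direction (left-to-right):

```python
def side(p, a, b):
    # sign of the cross product (b - a) x (p - a): which side of line ab point p is on
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def crosses_line(track, b1, b2):
    # True if any pair of consecutive track points straddles the line b1-b2
    signs = [side(p, b1, b2) for p in track]
    return any(s1 * s2 < 0 for s1, s2 in zip(signs, signs[1:]))
```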
  • the dotted line is the user's drawing on the playback screen.
  • The search rule: the moving target enters the dotted-line frame from outside the dotted-line frame; the solid line in the figure is the alarm rule: the moving target enters the solid-line frame from outside the solid-line frame. The intelligent computing component performs geometric calculation to determine whether a target meeting the alarm rule also satisfies the retrieval rule; if yes, the time point in the corresponding intelligent data is extracted and step 407 is performed; otherwise, no time point is extracted.
  • the motion trajectory is in the shadow portion of the target while satisfying the retrieval rules.
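The region-intrusion check can be sketched in the same spirit. This is a minimal, hypothetical version that approximates the drawn box as an axis-aligned rectangle; the function names are assumptions, not part of the application.

```python
def inside(p, box):
    # box = (x1, y1, x2, y2) with x1 < x2 and y1 < y2.
    x1, y1, x2, y2 = box
    return x1 <= p[0] <= x2 and y1 <= p[1] <= y2

def enters_region(trajectory, box):
    """True if any step of the trajectory moves from outside
    the box to inside it (a simplified intrusion test)."""
    return any(not inside(p, box) and inside(q, box)
               for p, q in zip(trajectory, trajectory[1:]))
```

A user-drawn region of arbitrary shape would need a general point-in-polygon test instead of the rectangle check.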
  • Example 3: target item left behind.
  • In Fig. 7, the dotted lines are drawn by the user on the playback screen. The retrieval rule: the moving target enters the dotted-line box from outside the box and leaves an item behind. The solid lines in the figure represent the alarm rule: the moving target enters the solid-line box from outside the box and leaves an item behind. The intelligent computing component performs geometric calculation to determine whether a target satisfying the alarm rule also satisfies the retrieval rule; if so, the time point in the corresponding intelligent data is extracted and step 407 is performed; otherwise, no time point is extracted.
  • As in the example of Fig. 7, a target whose motion trajectory lies in the shaded portion also satisfies the retrieval rule; in the figure, moving target A carries item B into the region and leaves item B behind.
  • Detecting that a moving target has left an item behind can be realized with existing image-recognition techniques; for example, image feature recognition is performed, and if the image features change greatly after the target leaves the position range, and the changed features are matched within that range, it is determined that an item has been left behind.
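As a rough illustration of the feature-change idea (the application explicitly leaves the recognition method to existing technology), one could compare coarse grayscale histograms of the monitored region before and after the target leaves; `item_left_behind` and its threshold are assumptions made for this sketch only.

```python
def histogram(pixels, bins=8):
    """Coarse grayscale histogram of a pixel region (values 0-255)."""
    h = [0] * bins
    for v in pixels:
        h[min(v * bins // 256, bins - 1)] += 1
    return h

def item_left_behind(before, after, threshold=0.25):
    """Declare an item left behind when a large fraction of histogram
    mass moved between the 'before' and 'after' snapshots of the
    region (a toy stand-in for real image-feature matching)."""
    hb, ha = histogram(before), histogram(after)
    total = max(sum(hb), 1)
    moved = sum(abs(x - y) for x, y in zip(hb, ha)) / (2 * total)
    return moved > threshold
```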
  • Step 407: The storage component converts each extracted time point into a time segment containing that time point and feeds the time segments back to the platform server.
  • According to the configured rule, the time point is extended forward and backward by a period of time (the lead and lag around the alarm event) and converted into a time segment; for example, the time point is extended by 5 seconds in each direction.
  • local_time: the local timestamp of the intelligent data;
  • relative_time: the relative time;
  • absolute_time: the absolute time point;
  • pre_time: the intelligent-alarm lead time, the period by which the alarm moment is extended forward;
  • delay_time: the intelligent-alarm delay time, the period by which the alarm moment is extended backward.
  • The absolute time point of the alarm: absolute_time = local_time + relative_time. The absolute time segment of the alarm: [absolute_time - pre_time, absolute_time + delay_time].
  • Further, before the time segments are fed back to the platform server, they may be merged in advance; specifically, if adjacent time segments overlap, the corresponding time segments are merged.
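The point-to-segment conversion and the optional merging of overlapping segments can be sketched as follows, using the 5-second lead and lag from the example; the function names are illustrative, not from the application.

```python
def to_segment(t, pre_time=5, delay_time=5):
    """Expand an alarm time point into [t - pre_time, t + delay_time]."""
    return (t - pre_time, t + delay_time)

def merge_segments(segments):
    """Merge overlapping adjacent time segments before feedback."""
    merged = []
    for start, end in sorted(segments):
        if merged and start <= merged[-1][1]:
            # Overlaps the previous segment: extend it.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```

For alarm points at 100 s and 107 s, the segments (95, 105) and (102, 112) overlap and merge into (95, 112).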
  • Step 408: The platform server plays the video data corresponding to the time segments.
  • Specifically, only the video data of the retrieved time segments may be played, or playback may be condensed (sped up or slowed down) according to the retrieval results.
  • In the present application, because intelligent-data storage is dispersed and the intelligent computing components (which are deployed together with the storage components) are likewise dispersed, intelligent data can be extracted concurrently from multiple storage components at high speed, and the efficiency of intelligent computing is greatly improved by parallel computation; a stand-alone server, limited by disk I/O and the hardware resources of a single device, cannot match cloud storage in performance and efficiency.
  • The present application embeds the intelligent computing component directly in the storage components of the cloud storage, which saves the disk overhead and network-bandwidth pressure of a video analysis server and supports real-time intelligent retrieval and intelligent playback, thereby remedying the defects of the prior art.
  • In addition, the clustering characteristics of cloud storage and the advantages of dispersed data storage minimize the interference of intelligent-data writes with normal video writes and resolve the single-point-of-failure problem in the intelligent retrieval process; more importantly, the efficiency of intelligent-data extraction and intelligent computing is unmatched by any stand-alone server (in the prior art, a separate video analysis server assists with intelligent-data extraction and intelligent computing).
  • Further, in the present application, the smart camera also periodically gathers traffic statistics to obtain traffic parameters.
  • In step 102 of Fig. 1, when the smart camera encapsulates the video data and the intelligent data, it also encapsulates the traffic parameters into the data stream and sends it to the frame analysis component of the cloud storage system.
  • Step 103: when the frame analysis component decapsulates the received data stream, it obtains the traffic parameters in addition to the video data and the intelligent data, and stores the traffic parameters in the storage component.
  • The storage component also sends the storage address information of the traffic parameters to the index server for recording.
  • Traffic information includes, for example: lane speed, number of small vehicles, number of medium vehicles, number of heavy vehicles, lane status, congestion length, and so on.
  • Traffic information is obtained by recognizing and analyzing the captured video images; the smart camera periodically analyzes the captured images to obtain the traffic parameters corresponding to each time point.
  • The storage address information recorded by the index server includes the encoder identifier, the address information, and the time point; smart cameras corresponding to different encoder identifiers are associated with different lanes.
  • The traffic parameters stored in the storage components can then be retrieved as required, and the corresponding video data browsed. Specifically:
  • the platform server receives the traffic parameter request and sends the request to the index server, where the traffic parameter request includes an encoder identifier, a time range, and a traffic retrieval rule;
  • the index server queries the storage address information of the corresponding traffic parameter according to the encoder identifier and the time range in the traffic parameter request, and sends a traffic parameter request to the corresponding storage component according to the storage address information;
  • the storage component receives the traffic parameter request, reads the corresponding traffic parameter, and invokes the computing component to determine a time point that satisfies the traffic retrieval rule according to the read traffic parameter;
  • the storage component converts the extracted time point into a time segment containing the time point, and feeds the time segment to the platform server;
  • the platform server plays the video data corresponding to the time segment.
  • When performing retrieval, the platform server calls the traffic-parameter retrieval interface of the cloud storage and inputs the lane of the traffic checkpoint (encoder ID), the start time, the end time, and the retrieval rule (for example, 1000 small vehicles, 300 medium vehicles, 50 heavy vehicles, lane status blocked, congestion length 15 meters, and so on).
  • The retrieval rule consists of specific values for the corresponding traffic items.
  • The cloud storage returns the retrieved time segments to the platform server, which plays back video according to the retrieval results; the corresponding real traffic parameters can be superimposed on the playback screen.
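The threshold-style matching implied by such a retrieval rule could look like the following sketch, where numeric rule values are treated as minimums and string values (such as lane status) must match exactly; the record and rule keys are invented for illustration.

```python
def matches(record, rule):
    """True if one periodic traffic record meets every item in the rule.
    Numeric rule values are minimum thresholds; strings must match."""
    for key, want in rule.items():
        have = record.get(key)
        if isinstance(want, str):
            if have != want:
                return False
        elif have is None or have < want:
            return False
    return True

def retrieve_time_points(records, rule):
    """Time points of all stored traffic records satisfying the rule."""
    return [r["time"] for r in records if matches(r, rule)]
```

The returned time points would then be expanded into time segments and fed back to the platform server, as in step 407.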
  • In this application's scheme for intelligent processing of traffic parameters, the traffic information is periodically tallied, compressed by the front end, embedded in the code stream, and dispersed into the storage components of the cloud storage system.
  • This eliminates the video analysis server and reduces the storage servers' disk overhead and network-bandwidth pressure.
  • Moreover, the clustering characteristics of cloud storage and its dispersed data storage solve the single point of failure of the retrieval service; most importantly, the retrieval efficiency for traffic parameters is unmatched by any stand-alone server. Municipal traffic planners thus no longer need to watch digital statistics continuously or keep their eyes on video screens; they need only select the traffic checkpoint they care about and enter the query conditions of interest.
  • The video images that meet the retrieval criteria are immediately available, presented with both pictures and data. This is of great reference value for widening congested roads, adjusting traffic-light durations, and restricting various vehicle types by time period.
  • FIG. 8 is a schematic structural diagram of an intelligent processing system for video data of the present application.
  • As shown, the system may include a smart camera (smart IPC), a frame analysis component, an index server, and N storage components, namely storage component 1, storage component 2, …, storage component N; the frame analysis component, the index server, and the N storage components are located in a cloud storage system (cloud storage for short).
  • An alarm rule is set on the smart camera; the smart camera captures video data and analyzes it in real time. If the alarm rule is met, intelligent data is generated, the intelligent data including the encoder identifier and motion trajectory information; the video data and the intelligent data are encapsulated together into a data stream and sent to the frame analysis component.
  • the frame analysis component decapsulates the received data stream, obtains video data and intelligent data, and stores video data and smart data in the storage component respectively;
  • the storage component sends the storage address information of the video data and the smart data to the index server after storing the video data and the smart data;
  • the index server separately records the received storage address information about the video data and the smart data.
  • Preferably, the system may further include a platform server, which receives a playback request and sends it to the index server, the playback request including an encoder identifier and a playback time range; it also receives video data fed back by the storage components, plays the video data, receives a retrieval task during playback and sends it to the index server, the retrieval task including an encoder identifier, a retrieval time range, and a retrieval rule; and it receives time segments fed back by the storage components and plays the video data corresponding to those segments.
  • The index server queries the storage address information of the corresponding video data according to the received playback request and sends an extraction request to the corresponding storage component based on that address information; it also receives retrieval tasks from the platform server, queries the storage address information of the corresponding intelligent data according to the encoder identifier and retrieval time range in the retrieval task, and sends a retrieval request to the corresponding storage component based on that address information.
  • The storage component receives a playback request from the index server, reads the corresponding video data according to the request, and sends it to the platform server; it also receives retrieval tasks from the index server, reads the intelligent data corresponding to the encoder identifier and retrieval time range in the task, determines the time points that satisfy the retrieval rule from the read intelligent data, converts the extracted time points into time segments containing them, and feeds the time segments back to the platform server.
  • Preferably, during playback the platform server receives the position-range information drawn by the user on the paused screen and receives the motion-change rule entered on the rule-setting interface.
  • Preferably, the smart camera also acquires a relative time and includes it in the intelligent data; the relative time is the difference between the time the intelligent data is generated and the time the corresponding video data is captured.
  • When storing intelligent data, the storage component also stamps it with a local timestamp of the storage time. Determining the time points that satisfy the retrieval rule from the read intelligent data then comprises: determining, from the read intelligent data, the intelligent data that satisfies the retrieval rule; extracting the relative time and the local timestamp from that data; and adding the relative time to the local timestamp to obtain an absolute time point, which is the determined time point satisfying the retrieval rule.
  • the smart camera periodically collects traffic information to obtain traffic parameters; the smart camera also encapsulates traffic parameters into the data stream;
  • the frame analysis component obtains traffic parameters and stores traffic parameters in the storage component when decapsulating the received data stream;
  • the storage component further sends storage address information of the traffic parameter to the index server;
  • the index server records the received storage address information about the traffic parameters.
  • the storage address information recorded by the index server includes an encoder identifier, address information, and a time point;
  • The system further includes a platform server, which receives a traffic-parameter request and sends it to the index server, the request including an encoder identifier, a time range, and a traffic retrieval rule; it also receives time segments fed back by the storage components and plays the video data corresponding to those segments.
  • The index server queries the storage address information of the corresponding traffic parameters according to the encoder identifier and time range in the traffic-parameter request, and sends a traffic-parameter request to the corresponding storage component based on that address information.
  • The storage component receives the traffic-parameter request and reads the corresponding traffic parameters; it invokes the computing component to determine, from the read parameters, the time points satisfying the traffic retrieval rule, converts the extracted time points into time segments containing them, and feeds the time segments back to the platform server.


Abstract

The present application discloses an intelligent processing method and system for video data, in which an alarm rule is set on a smart camera. The method includes: the smart camera captures video data and analyzes it in real time; if the alarm rule is met, intelligent data is generated, the intelligent data including an encoder identifier and motion trajectory information; the smart camera encapsulates the video data and the intelligent data together into a data stream and sends it to a frame analysis component in a cloud storage system; the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data, and stores the video data and the intelligent data separately in storage components; the storage components send the storage address information of the video data and the intelligent data to an index server for separate recording. The scheme of the present application enables intelligent processing of captured video data in real time.

Description

Intelligent processing method and system for video data
This application claims priority to Chinese Patent Application No. 201510037009.4, filed with the Chinese Patent Office on January 26, 2015 and entitled "Intelligent processing method and system for video data", the entire contents of which are incorporated herein by reference.
Technical Field
The present application relates to video processing technology, and in particular to an intelligent processing method and system for video data.
Background
In existing intelligent processing schemes for video data, each network camera (IPC, IP camera) captures video data and sends the video stream to a network storage server for storage.
A user can then read the video data from the storage server and play it. During playback, feature requirements can be set so that, with the help of intelligence information, the video data meeting the requirements is extracted and played, improving video search efficiency and achieving intelligent processing. A feature requirement describes feature information of moving targets in the images, for example all video data containing moving vehicles.
Existing intelligent processing schemes for video data are implemented as follows:
An independent video analysis server is deployed, on which feature rules are configured, for example all video images containing moving vehicles. After each IPC stores its video stream on the storage server, the video analysis server periodically reads the video data from the storage server and analyzes it; when a feature rule is met, it generates and stores intelligence information, which records the parameters by which the corresponding video data satisfies the feature rule. Later, when video playback combined with feature rules is required, the video data meeting the requirements can be identified from the intelligence information and played.
The existing scheme is a post-analysis approach: only after the IPC has stored the video stream on the storage server is the intelligent data analyzed, periodically, so only historical streams can be processed. IPC storage itself is also periodic rather than real-time, so video data that an IPC has already captured but not yet stored on the storage server cannot be analyzed. Moreover, the analysis of all IPC video streams is performed by a single independent video analysis server, which entails an enormous workload, takes a long time, and increases the technical difficulty of the video analysis server. The existing scheme for storing video data and intelligent data therefore suffers from poor timeliness.
Summary
The present application provides an intelligent processing method for video data that can intelligently process captured video data in real time.
The present application provides an intelligent processing system for video data that can intelligently process captured video data in real time.
An intelligent processing method for video data, where an alarm rule is set on a smart camera, the method comprising:
the smart camera captures video data and analyzes the captured video data in real time; if the alarm rule is met, intelligent data is generated, the intelligent data including an encoder identifier and motion trajectory information;
the smart camera encapsulates the video data and the intelligent data together into a data stream and sends it to a frame analysis component in a cloud storage system;
the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data, and stores the video data and the intelligent data separately in storage components;
the storage components send the storage address information of the video data and the intelligent data to an index server for separate recording.
An intelligent processing system for video data, the system comprising a smart camera, a frame analysis component, an index server, and a plurality of storage components;
the smart camera, on which an alarm rule is set, captures video data and analyzes it in real time; if the alarm rule is met, intelligent data is generated, the intelligent data including an encoder identifier and motion trajectory information; the video data and the intelligent data are encapsulated together into a data stream and sent to the frame analysis component;
the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data, and stores them separately in the storage components;
the storage components, after storing the video data and the intelligent data, send their storage address information to the index server;
the index server separately records the received storage address information of the video data and the intelligent data.
It can be seen from the above scheme that, with an alarm rule set on the smart camera, the present application specifically works as follows: the smart camera captures video data and analyzes it in real time; if the alarm rule is met, intelligent data is generated, including an encoder identifier and motion trajectory information; the smart camera encapsulates the video data and the intelligent data together into a data stream and sends it to the frame analysis component in the cloud storage system; the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data and stores them separately in storage components; the storage components send the storage address information of the video data and the intelligent data to the index server for separate recording.
In the present application, the smart camera analyzes the captured video data in real time, and cloud storage is adopted so that the video data and the intelligent data obtained from analysis are sent together to the cloud storage system for separate storage; intelligent processing of captured video data is thereby achieved. Moreover, the intelligent-data processing performed in the prior art by an independent video analysis server is distributed among the smart cameras, which is fast and greatly reduces implementation difficulty.
Brief Description of the Drawings
To describe the technical solutions of the embodiments of the present application and of the prior art more clearly, the drawings required for the embodiments and the prior art are briefly introduced below. Obviously, the drawings described below are only some embodiments of the present application, and a person of ordinary skill in the art may derive other drawings from them without creative effort.
Fig. 1 is a schematic flowchart of the intelligent processing method for video data of the present application;
Fig. 2 is a schematic example of video data and intelligent data stored in a storage component of the present application;
Fig. 3 is a schematic example of the intelligent storage method for video data of the present application;
Fig. 4 is an example flowchart of the intelligent playback method for video data of the present application;
Fig. 5 is a schematic example of target line-crossing retrieval in intelligent playback of the present application;
Fig. 6 is a schematic example of target region-intrusion retrieval in intelligent playback of the present application;
Fig. 7 is a schematic example of target item-left-behind retrieval in intelligent playback of the present application;
Fig. 8 is an example structural diagram of the intelligent processing system for video data of the present application.
Detailed Description
To make the objectives, technical solutions, and advantages of the present application clearer, the present application is described in further detail below with reference to the embodiments and the drawings.
In the present application, with an alarm rule set on the smart camera, the smart camera analyzes the captured video data in real time, and cloud storage is adopted so that the video data and the intelligent data obtained from analysis are sent together to the cloud storage system for separate storage, thereby achieving intelligent processing of the captured video data.
Referring to Fig. 1, a schematic flowchart of the intelligent processing method for video data of the present application: cameras currently used for video capture are smart cameras, whose functions can be extended according to their characteristics. The present application sets an alarm rule on the smart camera, and the smart camera analyzes the captured video data in real time according to the alarm rule to generate intelligent data.
The flow of Fig. 1 may include the following steps:
Step 101: the smart camera captures video data and analyzes it in real time; if the alarm rule is met, intelligent data is generated, the intelligent data including an encoder identifier and motion trajectory information.
The alarm rule includes at least one of target motion information and target feature information, the target motion information including position-range information and motion-change information.
The target motion information includes position-range information and motion-change information. Position-range information is, for example, a circular or square region determined within the video surveillance range; motion-change information is, for example, target line crossing, target intrusion, or target item left behind.
As needed, the alarm rule may also include target feature information, such as a person's gender, age range, interpupillary distance, or whether glasses are worn; or the color and tilt angle of a license plate, the color of a vehicle body, and so on.
The encoder identifier is the smart camera identifier; the motion trajectory information is the trajectory of a target that satisfies the alarm rule, that is, the trajectory within the position range. For example, if the position-range information in the target motion information is a square region denoted A, and the motion-change information is that a moving target enters A from outside A, then in the generated intelligent data the motion trajectory information is the motion coordinate information within position range A. The content of the alarm rule can be set as required; for example, if the alarm rule also includes target feature information, the generated intelligent data also includes target feature parameters. As an illustration, if the target feature information is a person's gender, the generated intelligent data also includes a target feature parameter: a value of 0 indicates that the target intruding into the region is female, and 1 indicates male.
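The intelligent-data record described above (encoder identifier, trajectory within the position range, relative time, and optional feature parameters such as gender coded as 0/1) could be modeled, purely as a sketch, with a small data class; the field names are assumptions for illustration, not the application's wire format.

```python
from dataclasses import dataclass, field

@dataclass
class SmartData:
    encoder_id: str                # identifies the generating smart camera
    trajectory: list               # motion coordinates within the rule's region
    relative_time: float = 0.0     # generation time minus capture time, seconds
    features: dict = field(default_factory=dict)  # e.g. {"gender": 1}
```

In practice such a record would be serialized into the same data stream as the video frames (step 102).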
Analyzing video data to recognize motion information and feature information in video images is existing technology; for example, a target's motion trajectory can be recognized, as well as feature information of the moving target, including a person's gender, age range, interpupillary distance, and whether glasses are worn, or a license plate's color and tilt angle and the vehicle body's color.
Step 102: the smart camera encapsulates the video data and the intelligent data together into a data stream and sends it to the frame analysis component in the cloud storage system.
The video data captured in real time and the intelligent data generated in real time are encapsulated together to obtain a data stream (PS, Program Stream).
Step 103: the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data, and stores them separately in storage components.
Having obtained the video data and the intelligent data, the two are stored separately.
Specifically, the video data and the intelligent data are dispersed, in the form of data blocks, across the storage components of the cloud storage system. The index server stores the usage state of every data block in all storage components. When data needs to be written, the frame analysis component first requests free data blocks from the index server; the index server selects storage components according to a dispersal storage policy, then selects suitable free data blocks on the corresponding storage components and feeds the selected block address information back to the frame analysis component. The frame analysis component writes the video data and the intelligent data into the corresponding data blocks according to the block address information.
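The block-allocation exchange described above can be sketched as follows. The least-loaded policy is only one conceivable dispersal storage strategy, and the class and method names are invented for this sketch.

```python
class IndexServer:
    """Toy sketch of the free-block allocation described above: the index
    server tracks free data blocks per storage component and hands out a
    block from the least-loaded component (one possible dispersal policy)."""

    def __init__(self, free_blocks):
        # free_blocks: {component_id: [block_id, ...]}
        self.free = free_blocks

    def allocate(self):
        # Pick the component with the most free blocks to spread writes.
        comp = max(self.free, key=lambda c: len(self.free[c]))
        if not self.free[comp]:
            raise RuntimeError("no free blocks in any storage component")
        return comp, self.free[comp].pop(0)
```

The frame analysis component would call `allocate()` before each write and then address the returned component and block directly.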
Referring to Fig. 2, a schematic example of video data and intelligent data stored in a storage component of the present application: the dashed blocks are video data blocks and the solid blocks are intelligent data blocks.
Step 104: the storage components send the storage address information of the video data and the intelligent data to the index server for separate recording.
In the index server, the storage address information of the video data and the storage address information of the intelligent data are stored separately.
The storage address information of the video data includes the address information, the encoder identifier (ID), the time point, and so on; the encoder identifier (ID) indicates the corresponding smart camera, and the time point indicates the time corresponding to the video data.
The storage address information of the intelligent data likewise includes address information, an encoder identifier (ID), a time point, and so on. The address information includes storage component information and data block information. The time point indicates the time at which the intelligent data was stored; specifically, when storing intelligent data, the storage component also stamps it with a local timestamp of the storage time.
Fig. 3 is a schematic example of the intelligent storage method for video data of the present application, showing video data and intelligent data being stored. In this example, a user accesses the smart IPC web page and sets an alarm rule for the smart IPC through the smart IPC platform; the smart IPC stores the alarm rule and then performs data storage in combination with the alarm rule.
In the present application, an alarm rule is set on the smart camera. Specifically: the smart camera captures video data and analyzes it in real time; if the alarm rule is met, intelligent data is generated, including an encoder identifier and motion trajectory information; the smart camera encapsulates the video data and the intelligent data together into a data stream and sends it to the frame analysis component in the cloud storage system; the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data and stores them separately in storage components; the storage components send the storage address information of the video data and the intelligent data to the index server for separate recording. The smart camera analyzes the captured video data in real time, and cloud storage is adopted so that the video data and the intelligent data are sent together to the cloud storage system for separate storage, achieving intelligent processing of the captured video data; the intelligent-data processing performed in the prior art by an independent video analysis server is distributed among the smart cameras, which is fast and greatly reduces implementation difficulty.
After the video data and the intelligent data are stored, the video data can be played back as needed; during playback, video data meeting specified conditions can be quickly extracted and played based on the intelligent data. An example is described below with reference to Fig. 4, an example flowchart of the intelligent playback method for video data of the present application, which may include the following steps:
Step 401: the platform server receives a playback request and sends it to the index server, the playback request including an encoder identifier and a playback time range.
The platform server provides external services such as indexing and playback of video data; when needed, the user accesses the platform server and enters a playback request.
Step 402: the index server queries the storage address information of the corresponding video data according to the playback request and sends an extraction request to the corresponding storage component based on the address information.
The storage address information meeting the conditions can be queried from the encoder identifier and playback time range contained in the playback request.
Step 403: the storage component reads the corresponding video data according to the extraction request and sends it to the platform server.
The extraction request contains address information; after receiving it, the storage component reads the corresponding video data according to the address information.
Step 404: the platform server plays the video data and receives a retrieval task during playback, sending the task to the index server; the retrieval task includes an encoder identifier, a retrieval time range, and a retrieval rule, the retrieval rule including target motion information, which in turn includes position-range information and motion-change information.
During playback, the user may enter a retrieval task as needed, in various ways: for example, entering a rule-setting interface and entering each item of the retrieval task as prompted; or combining on-screen drawing with interface settings, drawing the position-range information on the screen with a finger, a stylus, or a mouse. Two implementations are given below:
Implementation 1:
During playback, the position-range information drawn by the user on the paused screen is received, together with the motion-change information entered on the rule-setting interface.
Implementation 2: during playback, the position-range information drawn by the user on the paused screen is received, together with the motion-change information and target feature information entered on the rule-setting interface; the motion-change information includes target line-crossing information, target intrusion information, target item-left-behind information, and the like.
Step 405: the index server queries the storage address information of the corresponding intelligent data according to the encoder identifier and retrieval time range in the retrieval task, and sends a retrieval request to the corresponding storage component based on the address information.
Step 406: the storage component receives the retrieval request and reads the corresponding intelligent data; it determines the time points satisfying the retrieval rule from the read intelligent data.
The retrieval request contains address information, from which the corresponding intelligent data can be read. The intelligent data contains the corresponding time points; once the intelligent data satisfying the retrieval rule is determined, the time points satisfying the retrieval rule can be determined.
A qualifying time point can be obtained directly from the local timestamp of the intelligent data, in which case the time point is simply the local timestamp. It can also be determined by combining the relative time in the intelligent data with the local timestamp, specifically:
The generated intelligent data also contains a relative time, obtained as follows: denote the time the intelligent data is generated as T1 and the time the corresponding video data is captured as T2; the difference between T1 and T2 is the relative time. Typically, for a given smart camera, the relative time is a fixed value. When storing intelligent data, the storage component also stamps it with a local timestamp of the storage time. Determining the time points satisfying the retrieval rule from the read intelligent data then includes:
determining, from the read intelligent data, the intelligent data satisfying the retrieval rule; extracting the relative time and the local timestamp from the determined intelligent data; and adding the relative time to the local timestamp to obtain an absolute time point, which is the determined time point satisfying the retrieval rule.
Determining the time points satisfying the retrieval rule from the read intelligent data may specifically be done by the intelligent computing component in the storage component.
Retrieval rules are illustrated by the examples of Figs. 5-7, described in turn below:
Example 1: target line crossing:
In Fig. 5, the thick lines are drawn by the user on the playback screen. The retrieval rule: the moving target moves from the left side of A1A2 to the right side and passes through B1B2. The thin lines in the figure represent the alarm rule: the moving target moves from the left side of M1M2 to the right side and passes through N1N2. The intelligent computing component performs geometric calculation on the motion trajectory information contained in the intelligent data to determine whether a target satisfying the alarm rule (its motion coordinates can be obtained from the trajectory information) also satisfies the retrieval rule; if so, the time point in the corresponding intelligent data is extracted and step 407 is performed; otherwise, no time point is extracted. When the user draws on the playback screen, only the lines of the retrieval rule are displayed; in Fig. 5 the lines of both the retrieval rule and the alarm rule are shown together for ease of understanding. The figure shows an example of target U1 crossing the line.
Example 2: target intrusion into a region:
In Fig. 6, the dotted lines are drawn by the user on the playback screen. The retrieval rule: the moving target enters the dotted-line box from outside the box. The solid lines in the figure represent the alarm rule: the moving target enters the solid-line box from outside the box. The intelligent computing component performs geometric calculation to determine whether a target satisfying the alarm rule also satisfies the retrieval rule; if so, the time point in the corresponding intelligent data is extracted and step 407 is performed; otherwise, no time point is extracted. As in the example of Fig. 6, a target whose motion trajectory lies in the shaded portion also satisfies the retrieval rule.
Example 3: target item left behind:
In Fig. 7, the dotted lines are drawn by the user on the playback screen. The retrieval rule: the moving target enters the dotted-line box from outside the box and leaves an item behind. The solid lines in the figure represent the alarm rule: the moving target enters the solid-line box from outside the box and leaves an item behind. The intelligent computing component performs geometric calculation to determine whether a target satisfying the alarm rule also satisfies the retrieval rule; if so, the time point in the corresponding intelligent data is extracted and step 407 is performed; otherwise, no time point is extracted. As in the example of Fig. 7, a target whose motion trajectory lies in the shaded portion also satisfies the retrieval rule; in the figure, moving target A carries item B into the region and leaves item B behind. Detecting that a moving target has left an item behind can be realized with existing image-recognition techniques; for example, image feature recognition is performed, and if the image features change greatly after the target leaves the position range, and the changed features are matched within that range, it is determined that an item has been left behind.
Step 407: the storage component converts each extracted time point into a time segment containing that time point and feeds the time segments back to the platform server.
According to the configured rule, the time point is extended forward and backward by a period of time (the lead and lag around the alarm event) and converted into a time segment; for example, the time point is extended by 5 seconds in each direction.
local_time: the local timestamp of the intelligent data;
relative_time: the relative time;
absolute_time: the absolute time point;
pre_time: the intelligent-alarm lead time, the period by which the alarm moment is extended forward;
delay_time: the intelligent-alarm delay time, the period by which the alarm moment is extended backward;
The absolute time point of the alarm: absolute_time = local_time + relative_time
The absolute time segment of the alarm:
[absolute_time - pre_time, absolute_time + delay_time].
Further, before the time segments are fed back to the platform server, they may be merged in advance; specifically, if adjacent time segments overlap, the corresponding time segments are merged.
Step 408: the platform server plays the video data corresponding to the time segments.
Specifically, only the video data of the retrieved time segments may be played, or playback may be condensed (sped up or slowed down) according to the retrieval results.
In the present application, because intelligent-data storage is dispersed and the intelligent computing components (deployed together with the storage components) are likewise dispersed, intelligent data can be extracted concurrently from multiple storage components at high speed, and the efficiency of intelligent computing is greatly improved by parallel computation; a stand-alone server, limited by disk I/O and the hardware resources of a single device, cannot match cloud storage in performance and efficiency.
The present application embeds the intelligent computing component directly in the storage components of the cloud storage, which saves the disk overhead and network-bandwidth pressure of a video analysis server and supports real-time intelligent retrieval and intelligent playback, thereby remedying the defects of the prior art. In addition, the clustering characteristics of cloud storage and the advantages of dispersed data storage minimize the interference of intelligent-data writes with normal video writes and resolve the single-point-of-failure problem in the intelligent retrieval process; more importantly, the efficiency of intelligent-data extraction and intelligent computing is unmatched by any stand-alone server (in the prior art, a separate video analysis server assists with intelligent-data extraction and intelligent computing).
Further, in the present application, the smart camera also periodically gathers traffic statistics to obtain traffic parameters; in step 102 of the flow of Fig. 1, when the smart camera encapsulates the video data and the intelligent data, it also encapsulates the traffic parameters into the data stream and sends it to the frame analysis component in the cloud storage system;
in step 103, when the frame analysis component decapsulates the received data stream, it obtains the traffic parameters in addition to the video data and the intelligent data, and stores the traffic parameters in a storage component;
the storage component also sends the storage address information of the traffic parameters to the index server for recording.
Traffic information includes, for example: lane speed, number of small vehicles, number of medium vehicles, number of heavy vehicles, lane status, congestion length, and so on. Traffic information is obtained by recognizing and analyzing the captured video images; the smart camera periodically analyzes the captured images to obtain the traffic parameters corresponding to each time point. The storage address information recorded by the index server includes the encoder identifier, the address information, and the time point; smart cameras corresponding to different encoder identifiers are associated with different lanes.
The traffic parameters stored in the storage components can then be retrieved as required and the corresponding video data browsed. Specifically:
the platform server receives a traffic-parameter request and sends it to the index server, the request including an encoder identifier, a time range, and a traffic retrieval rule;
the index server queries the storage address information of the corresponding traffic parameters according to the encoder identifier and time range in the request, and sends a traffic-parameter request to the corresponding storage component based on the address information;
the storage component receives the traffic-parameter request and reads the corresponding traffic parameters; it invokes the computing component to determine, from the read traffic parameters, the time points satisfying the traffic retrieval rule;
the storage component converts the extracted time points into time segments containing them and feeds the time segments back to the platform server;
the platform server plays the video data corresponding to the time segments.
When performing retrieval, the platform server calls the traffic-parameter retrieval interface of the cloud storage and inputs the lane of the traffic checkpoint (encoder ID), the start time, the end time, and the retrieval rule (for example, 1000 small vehicles, 300 medium vehicles, 50 heavy vehicles, lane status blocked, congestion length 15 meters, and so on). The retrieval rule consists of specific values for the corresponding traffic items.
The cloud storage returns the retrieved time segments to the platform server, which plays back video according to the retrieval results; the corresponding real traffic parameters can be superimposed on the playback screen.
In this application's scheme for intelligent processing of traffic parameters, the traffic information is periodically tallied, compressed by the front end, embedded in the code stream, and dispersed into the storage components of the cloud storage system. This eliminates the video analysis server and reduces the storage servers' disk overhead and network-bandwidth pressure; moreover, the clustering characteristics of cloud storage and its dispersed data storage solve the single point of failure of the retrieval service; most importantly, the retrieval efficiency for traffic parameters is unmatched by any stand-alone server. Municipal traffic planners thus no longer need to watch digital statistics continuously or stare at video screens; they need only select the traffic checkpoint they care about and enter the query conditions of interest to immediately obtain the video images meeting the retrieval criteria, presented with both pictures and data. This is of great reference value for widening congested roads, adjusting traffic-light durations, and restricting various vehicle types by time period.
Referring to Fig. 8, an example structural diagram of the intelligent processing system for video data of the present application: the system may include a smart camera (smart IPC), a frame analysis component, an index server, and N storage components, namely storage component 1, storage component 2, …, storage component N; the frame analysis component, the index server, and the N storage components are located in a cloud storage system (cloud storage for short);
the smart camera, on which an alarm rule is set, captures video data and analyzes it in real time; if the alarm rule is met, intelligent data is generated, the intelligent data including an encoder identifier and motion trajectory information; the video data and the intelligent data are encapsulated together into a data stream and sent to the frame analysis component;
the frame analysis component decapsulates the received data stream to obtain the video data and the intelligent data, and stores them separately in the storage components;
the storage components, after storing the video data and the intelligent data, send their storage address information to the index server;
the index server separately records the received storage address information of the video data and the intelligent data.
Preferably, the system may further include a platform server, which receives a playback request and sends it to the index server, the playback request including an encoder identifier and a playback time range; it also receives video data fed back by the storage components, plays the video data, receives a retrieval task during playback and sends it to the index server, the retrieval task including an encoder identifier, a retrieval time range, and a retrieval rule; and it receives time segments fed back by the storage components and plays the video data corresponding to those segments;
the index server queries the storage address information of the corresponding video data according to the received playback request and sends an extraction request to the corresponding storage component based on that address information; it also receives retrieval tasks from the platform server, queries the storage address information of the corresponding intelligent data according to the encoder identifier and retrieval time range in the retrieval task, and sends a retrieval request to the corresponding storage component based on that address information;
the storage component receives the playback request from the index server, reads the corresponding video data according to the request, and sends it to the platform server; it also receives retrieval tasks from the index server, reads the intelligent data corresponding to the encoder identifier and retrieval time range in the task, determines the time points satisfying the retrieval rule from the read intelligent data, converts the extracted time points into time segments containing them, and feeds the segments back to the platform server.
Preferably, during playback the platform server receives the position-range information drawn by the user on the paused screen and receives the motion-change rule entered on the rule-setting interface.
Preferably, the smart camera also acquires a relative time and includes it in the intelligent data, the relative time being the difference between the time the intelligent data is generated and the time the corresponding video data is captured;
when storing intelligent data, the storage component also stamps it with a local timestamp of the storage time; when determining the time points satisfying the retrieval rule from the read intelligent data, it determines the intelligent data satisfying the retrieval rule from the read intelligent data, extracts the relative time and the local timestamp from the determined intelligent data, and adds the relative time to the local timestamp to obtain an absolute time point, which is the determined time point satisfying the retrieval rule.
Preferably, the smart camera also periodically gathers traffic statistics to obtain traffic parameters and encapsulates the traffic parameters into the data stream;
the frame analysis component, when decapsulating the received data stream, also obtains the traffic parameters and stores them in a storage component;
the storage component also sends the storage address information of the traffic parameters to the index server;
the index server records the received storage address information of the traffic parameters.
Preferably, the storage address information recorded by the index server includes an encoder identifier, address information, and a time point;
the system further includes a platform server, which receives a traffic-parameter request and sends it to the index server, the request including an encoder identifier, a time range, and a traffic retrieval rule; it also receives time segments fed back by the storage components and plays the corresponding video data;
the index server queries the storage address information of the corresponding traffic parameters according to the encoder identifier and time range in the request, and sends a traffic-parameter request to the corresponding storage component based on that address information;
the storage component receives the traffic-parameter request, reads the corresponding traffic parameters, invokes the computing component to determine from them the time points satisfying the traffic retrieval rule, converts the extracted time points into time segments containing them, and feeds the segments back to the platform server.
The above are only specific embodiments of the present application, but the protection scope of the present application is not limited thereto; any variation or replacement readily conceivable by a person skilled in the art within the technical scope disclosed in the present application shall fall within the protection scope of the present application. Accordingly, the protection scope of the present application shall be that of the claims.

Claims (14)

  1. An intelligent processing method for video data, wherein, with an alarm rule set on a smart camera, the method comprises:
    the smart camera capturing video data and analyzing the captured video data in real time, and, if the alarm rule is met, generating intelligent data, the intelligent data comprising an encoder identifier and motion trajectory information;
    the smart camera encapsulating the video data and the intelligent data together into a data stream and sending it to a frame analysis component in a cloud storage system;
    the frame analysis component decapsulating the received data stream to obtain the video data and the intelligent data, and storing the video data and the intelligent data separately in storage components;
    the storage components sending the storage address information of the video data and the intelligent data to an index server for separate recording.
  2. The method of claim 1, wherein the alarm rule comprises at least one of target motion information and target feature information, the target motion information comprising position-range information and motion-change information.
  3. The method of claim 1, wherein the storage address information recorded by the index server comprises an encoder identifier, address information, and a time point, the method further comprising:
    a platform server receiving a playback request and sending it to the index server, the playback request comprising an encoder identifier and a playback time range;
    the index server querying the storage address information of the corresponding video data according to the playback request and sending an extraction request to the corresponding storage component according to the storage address information;
    the storage component reading the corresponding video data according to the extraction request and sending it to the platform server;
    the platform server playing the video data, receiving a retrieval task during playback, and sending the retrieval task to the index server, the retrieval task comprising an encoder identifier, a retrieval time range, and a retrieval rule;
    the index server querying the storage address information of the corresponding intelligent data according to the encoder identifier and retrieval time range in the retrieval task and sending a retrieval request to the corresponding storage component according to the storage address information;
    the storage component receiving the retrieval request and reading the corresponding intelligent data, and invoking a computing component to determine, from the read intelligent data, the time points that satisfy the retrieval rule;
    the storage component converting the extracted time points into time segments containing them and feeding the time segments back to the platform server;
    the platform server playing the video data corresponding to the time segments.
  4. The method of claim 3, wherein the retrieval rule comprises target motion information, the target motion information comprising position-range information and motion-change information; and the platform server receiving a retrieval task during playback comprises: during playback, receiving position-range information drawn by the user on the paused screen, and receiving a motion-change rule entered on a rule-setting interface.
  5. The method of claim 3, wherein the retrieval task further comprises target feature information;
    and the platform server receiving a retrieval task during playback comprises: during playback, receiving target feature information entered by the user on the rule-setting interface.
  6. The method of claim 3, wherein the generated intelligent data further contains a relative time, the relative time being the difference between the time the intelligent data is generated and the time the corresponding video data is captured; and, when storing the intelligent data, the storage component also stamps it with a local timestamp of the storage time;
    wherein determining, from the read intelligent data, the time points that satisfy the retrieval rule comprises:
    determining, from the read intelligent data, the intelligent data that satisfies the retrieval rule; extracting the relative time and the local timestamp from the determined intelligent data; and adding the relative time to the local timestamp to obtain an absolute time point, the absolute time point being the determined time point that satisfies the retrieval rule.
  7. The method of claim 1, further comprising:
    the smart camera periodically gathering traffic statistics to obtain traffic parameters, and encapsulating the traffic parameters into the data stream;
    the frame analysis component, when decapsulating the received data stream, also obtaining the traffic parameters and storing them in a storage component;
    the storage component also sending the storage address information of the traffic parameters to the index server for recording.
  8. The method of claim 7, wherein the storage address information recorded by the index server comprises an encoder identifier, address information, and a time point, the method further comprising:
    the platform server receiving a traffic-parameter request and sending it to the index server, the request comprising an encoder identifier, a time range, and a traffic retrieval rule;
    the index server querying the storage address information of the corresponding traffic parameters according to the encoder identifier and time range in the request and sending a traffic-parameter request to the corresponding storage component according to the storage address information;
    the storage component receiving the traffic-parameter request and reading the corresponding traffic parameters, and invoking the computing component to determine, from the read traffic parameters, the time points that satisfy the traffic retrieval rule;
    the storage component converting the extracted time points into time segments containing them and feeding the time segments back to the platform server;
    the platform server playing the video data corresponding to the time segments.
  9. An intelligent processing system for video data, the system comprising a smart camera, a frame analysis component, an index server, and a plurality of storage components;
    the smart camera, on which an alarm rule is set, capturing video data and analyzing the captured video data in real time, and, if the alarm rule is met, generating intelligent data, the intelligent data comprising an encoder identifier and motion trajectory information, and encapsulating the video data and the intelligent data together into a data stream sent to the frame analysis component;
    the frame analysis component decapsulating the received data stream to obtain the video data and the intelligent data, and storing the video data and the intelligent data separately in the storage components;
    the storage components, after storing the video data and the intelligent data, sending their storage address information to the index server;
    the index server separately recording the received storage address information of the video data and the intelligent data.
  10. The system of claim 9, wherein the storage address information recorded by the index server comprises an encoder identifier, address information, and a time point;
    the system further comprises a platform server that receives a playback request and sends it to the index server, the playback request comprising an encoder identifier and a playback time range; the platform server also receives video data fed back by the storage components, plays the video data, receives a retrieval task during playback and sends it to the index server, the retrieval task comprising an encoder identifier, a retrieval time range, and a retrieval rule; and it receives time segments fed back by the storage components and plays the video data corresponding to the time segments;
    the index server queries the storage address information of the corresponding video data according to the received playback request and sends an extraction request to the corresponding storage component according to the storage address information; it also receives the retrieval task from the platform server, queries the storage address information of the corresponding intelligent data according to the encoder identifier and retrieval time range in the retrieval task, and sends a retrieval request to the corresponding storage component according to the storage address information;
    the storage component receives the playback request from the index server, reads the corresponding video data according to the playback request, and sends it to the platform server; it also receives the retrieval task from the index server, reads the intelligent data corresponding to the encoder identifier and retrieval time range in the task, determines the time points satisfying the retrieval rule from the read intelligent data, converts the extracted time points into time segments containing them, and feeds the time segments back to the platform server.
  11. The system of claim 10, wherein, during playback, the platform server receives position-range information drawn by the user on the paused screen and receives a motion-change rule entered on the rule-setting interface.
  12. The system of claim 10, wherein the smart camera also acquires a relative time and includes it in the intelligent data, the relative time being the difference between the time the intelligent data is generated and the time the corresponding video data is captured;
    when storing the intelligent data, the storage component also stamps it with a local timestamp of the storage time; when determining the time points satisfying the retrieval rule from the read intelligent data, it determines the intelligent data satisfying the retrieval rule from the read intelligent data, extracts the relative time and the local timestamp from the determined intelligent data, and adds the relative time to the local timestamp to obtain an absolute time point, which is the determined time point satisfying the retrieval rule.
  13. The system of claim 9, wherein
    the smart camera also periodically gathers traffic statistics to obtain traffic parameters and encapsulates the traffic parameters into the data stream;
    the frame analysis component, when decapsulating the received data stream, also obtains the traffic parameters and stores them in a storage component;
    the storage component also sends the storage address information of the traffic parameters to the index server;
    the index server records the received storage address information of the traffic parameters.
  14. The system of claim 13, wherein the storage address information recorded by the index server comprises an encoder identifier, address information, and a time point;
    the system further comprises a platform server that receives a traffic-parameter request and sends it to the index server, the request comprising an encoder identifier, a time range, and a traffic retrieval rule; it also receives time segments fed back by the storage components and plays the video data corresponding to the time segments;
    the index server queries the storage address information of the corresponding traffic parameters according to the encoder identifier and time range in the request and sends a traffic-parameter request to the corresponding storage component according to the storage address information;
    the storage component receives the traffic-parameter request, reads the corresponding traffic parameters, invokes the computing component to determine from the read traffic parameters the time points satisfying the traffic retrieval rule, converts the extracted time points into time segments containing them, and feeds the time segments back to the platform server.
PCT/CN2015/096817 2015-01-26 2015-12-09 Intelligent processing method and system for video data WO2016119528A1 (zh)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US15/537,462 US10178430B2 (en) 2015-01-26 2015-12-09 Intelligent processing method and system for video data
EP15879726.6A EP3253042B1 (en) 2015-01-26 2015-12-09 Intelligent processing method and system for video data

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201510037009.4A 2015-01-26 2015-01-26 Intelligent processing method and system for video data
CN201510037009.4 2015-01-26

Publications (1)

Publication Number Publication Date
WO2016119528A1 true WO2016119528A1 (zh) 2016-08-04

Family

ID=56542354

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2015/096817 WO2016119528A1 (zh) 2015-01-26 2015-12-09 视频数据的智能处理方法及系统

Country Status (4)

Country Link
US (1) US10178430B2 (zh)
EP (1) EP3253042B1 (zh)
CN (1) CN105898207B (zh)
WO (1) WO2016119528A1 (zh)

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20180035071A * 2016-09-28 2018-04-05 한화테크윈 주식회사 Method and system for distributed data storage
US11032584B2 (en) 2016-11-23 2021-06-08 Hangzhou Hikvision Digital Technology Co., Ltd. Picture storage method, apparatus and video monitoring system
CN113361332A (zh) 2021-05-17 2021-09-07 北京中海前沿材料技术有限公司 Video data collection and processing method and device

Families Citing this family (22)

Publication number Priority date Publication date Assignee Title
CN105898207B (zh) * 2015-01-26 2019-05-10 杭州海康威视数字技术股份有限公司 Intelligent processing method and system for video data
CN106850710B (zh) * 2015-12-03 2020-02-28 杭州海康威视数字技术股份有限公司 Data cloud storage system, client terminal, storage server and application method
CN106534151B (zh) * 2016-11-29 2019-12-03 北京旷视科技有限公司 Method and device for playing video streams
WO2019076076A1 (zh) * 2017-10-20 2019-04-25 杭州海康威视数字技术股份有限公司 Analog camera, server, monitoring system, and data transmission and processing methods
CN109698895A (zh) * 2017-10-20 2019-04-30 杭州海康威视数字技术股份有限公司 Analog camera, monitoring system and data sending method
US11094191B2 (en) * 2018-04-27 2021-08-17 Iurii V. Iuzifovich Distributed safety infrastructure for autonomous vehicles and methods of use
CN110866427A (zh) * 2018-08-28 2020-03-06 杭州海康威视数字技术股份有限公司 Vehicle behavior detection method and device
CN110876090B (zh) * 2018-09-04 2021-12-24 杭州海康威视数字技术股份有限公司 Video summary playback method and device, electronic device and readable storage medium
CN111064984B (zh) * 2018-10-16 2022-02-08 杭州海康威视数字技术股份有限公司 Intelligent information overlay display method and device for video frames, and digital video recorder
CN112307830A (zh) * 2019-07-31 2021-02-02 北京博雅慧视智能技术研究院有限公司 Digital-retina massive target retrieval and surveillance deployment method
CN110881141B (zh) * 2019-11-19 2022-10-18 浙江大华技术股份有限公司 Video display method and device, storage medium and electronic device
CN111405222B (zh) * 2019-12-12 2022-06-03 杭州海康威视系统技术有限公司 Video alarm method, video alarm system and alarm picture acquisition method
CN111090565B (zh) * 2019-12-20 2021-09-28 上海有个机器人有限公司 Robot historical behavior playback method and system
CN111104549A (zh) * 2019-12-30 2020-05-05 普联技术有限公司 Video retrieval method and device
CN111444219A (zh) * 2020-03-30 2020-07-24 深圳天岳创新科技有限公司 Distributed data processing method and device, and electronic device
CN111860307A (zh) * 2020-07-17 2020-10-30 苏州企智信息科技有限公司 Intelligent kitchen-violation judgment method based on video behavior recognition
CN112798979B (zh) * 2020-12-09 2024-05-14 国网辽宁省电力有限公司锦州供电公司 Substation grounding-wire state detection system and method based on deep learning
CN113473166A (zh) * 2021-06-30 2021-10-01 杭州海康威视系统技术有限公司 Data storage system and method
CN113949719B (zh) * 2021-10-13 2023-07-28 政浩软件(上海)有限公司 Vehicle-mounted inspection method and system based on 5G communication
CN113989839A (zh) * 2021-10-26 2022-01-28 浙江大学 Timestamp-based synchronous analysis system and method for animal neural signals and behavior video
CN115798136A (zh) * 2023-01-30 2023-03-14 北京蓝色星际科技股份有限公司 Security device alarm information processing method and device
CN117714814B (zh) * 2023-12-16 2024-05-17 浙江鼎世科技有限公司 Video storage access system based on intelligent policies

Citations (5)

Publication number Priority date Publication date Assignee Title
JP2007221581A (ja) * 2006-02-17 2007-08-30 Toshiba Corp. Monitoring system and image processing apparatus
CN101448145A (zh) * 2008-12-26 2009-06-03 Vimicro Corp. IP camera, video surveillance system, and signal processing method for the IP camera
CN102194320A (zh) * 2011-04-25 2011-09-21 Hangzhou Hikvision Digital Technology Co., Ltd. High-definition intelligent network camera and high-definition intelligent network snapshot method
CN202634594U (zh) * 2011-12-09 2012-12-26 ZTE Intelligent Transportation (Wuxi) Co., Ltd. 3G network camera
CN103379266A (zh) * 2013-07-05 2013-10-30 Wuhan Fiberhome Digital Technology Co., Ltd. High-definition network camera with video semantic analysis

Family Cites Families (20)

Publication number Priority date Publication date Assignee Title
US5969755A (en) 1996-02-05 1999-10-19 Texas Instruments Incorporated Motion based event detection system and method
US20030025599A1 (en) * 2001-05-11 2003-02-06 Monroe David A. Method and apparatus for collecting, sending, archiving and retrieving motion video and still images and notification of detected events
US20050104958A1 (en) * 2003-11-13 2005-05-19 Geoffrey Egnal Active camera video-based surveillance systems and methods
US7746378B2 (en) 2004-10-12 2010-06-29 International Business Machines Corporation Video analysis, archiving and alerting methods and apparatus for a distributed, modular and extensible video surveillance system
US8471910B2 (en) * 2005-08-11 2013-06-25 Sightlogix, Inc. Methods and apparatus for providing fault tolerance in a surveillance system
US20090256910A1 (en) * 2006-01-27 2009-10-15 Ram Ganeshan Camera System to detect, monitor and report low visibility
US8013738B2 (en) * 2007-10-04 2011-09-06 Kd Secure, Llc Hierarchical storage manager (HSM) for intelligent storage of large volumes of data
CN101854516B (zh) * 2009-04-02 2014-03-05 Vimicro Corp. Video surveillance system, video surveillance server, and video surveillance method
KR101586699B1 (ko) * 2010-01-28 2016-01-19 Hanwha Techwin Co., Ltd. Network camera and system and method for operating a network camera
US8503539B2 (en) * 2010-02-26 2013-08-06 Bao Tran High definition personal computer (PC) cam
US10645344B2 (en) * 2010-09-10 2020-05-05 Avigilon Analytics Corporation Video system with intelligent visual display
US20120173577A1 (en) * 2010-12-30 2012-07-05 Pelco Inc. Searching recorded video
US8743204B2 (en) * 2011-01-07 2014-06-03 International Business Machines Corporation Detecting and monitoring event occurrences using fiber optic sensors
CN107707929A (zh) * 2011-05-12 2018-02-16 Solink Corp. Video analytics system
US9111147B2 (en) * 2011-11-14 2015-08-18 Massachusetts Institute of Technology Assisted video surveillance of persons-of-interest
US10769913B2 (en) * 2011-12-22 2020-09-08 Pelco, Inc. Cloud-based video surveillance management system
CN102857741A (zh) * 2012-09-24 2013-01-02 Tianjin Yaan Technology Co., Ltd. Multi-direction monitoring-area early-warning, positioning and monitoring device
US10384642B2 (en) * 2013-07-17 2019-08-20 Conduent Business Services, Llc Methods and systems for vehicle theft detection and prevention using a smartphone and video-based parking technology
CN104301680A (zh) * 2014-10-22 2015-01-21 Chongqing Xuannu Biotechnology Co., Ltd. Cloud-video agricultural monitoring and detection method
CN105898207B (zh) * 2015-01-26 2019-05-10 Hangzhou Hikvision Digital Technology Co., Ltd. Intelligent processing method and system for video data

Non-Patent Citations (1)

Title
See also references of EP3253042A4 *

Cited By (6)

Publication number Priority date Publication date Assignee Title
KR20180035071A (ko) * 2016-09-28 2018-04-05 Hanwha Techwin Co., Ltd. Data distributed storage method and system
EP3522577A4 (en) * 2016-09-28 2019-08-07 Hanwha Techwin Co., Ltd. Method and system for storing data distribution
US10379965B2 (en) 2016-09-28 2019-08-13 Hanwha Techwin Co., Ltd. Data distribution storing method and system thereof
KR102104417B1 (ко) * 2016-09-28 2020-04-24 Hanwha Techwin Co., Ltd. Data distributed storage method and system
US11032584B2 (en) 2016-11-23 2021-06-08 Hangzhou Hikvision Digital Technology Co., Ltd. Picture storage method, apparatus and video monitoring system
CN113361332A (zh) * 2021-05-17 2021-09-07 Beijing Zhonghai Frontier Material Technology Co., Ltd. Video data acquisition and processing method and apparatus

Also Published As

Publication number Publication date
US20180007429A1 (en) 2018-01-04
EP3253042A4 (en) 2018-07-18
EP3253042B1 (en) 2021-03-17
CN105898207A (zh) 2016-08-24
CN105898207B (zh) 2019-05-10
US10178430B2 (en) 2019-01-08
EP3253042A1 (en) 2017-12-06

Similar Documents

Publication Publication Date Title
WO2016119528A1 (zh) Intelligent processing method and system for video data
US11562020B2 (en) Short-term and long-term memory on an edge device
US20200195835A1 (en) Bandwidth efficient video surveillance system
RU2632473C1 (ru) Method for data exchange between an IP video camera and a server (variants)
CN105574506B (zh) Intelligent face pursuit system and method based on deep learning and large-scale clusters
US20190332897A1 (en) Systems and methods for object detection
CN101631237B (zh) Video surveillance data storage management system
CN105164695A (zh) System and method for detecting high-interest events in video data
US9436692B1 (en) Large scale video analytics architecture
EP3373549A1 (en) A subsumption architecture for processing fragments of a video stream
CN102819528A (zh) Method and apparatus for generating a video summary
KR20080075091A (ко) Storage of video analysis data for real-time alerting and forensic analysis
CN103347167A (зh) Segment-based surveillance video content description method
US20160171283A1 (en) Data-Enhanced Video Viewing System and Methods for Computer Vision Processing
US20170034483A1 (en) Smart shift selection in a cloud video service
WO2015099675A1 (en) Smart view selection in a cloud video service
US20230412769A1 (en) Scalable Visual Computing System
CN103870574A (зh) Tag creation and indexing method for H.264 ciphertext cloud video storage
CN110851473A (зh) Data processing method, apparatus, and system
Xu et al. Video analytics with zero-streaming cameras
CN102665064A (зh) Traffic video surveillance system based on standard tagging and fast retrieval
WO2013107146A1 (зh) Method and system for providing value-added services based on video recognition technology
WO2016201992A1 (зh) Video storage and retrieval method for a cloud storage server, and video cloud storage system
Xu et al. Automated pedestrian safety analysis using data from traffic monitoring cameras
KR101475037B1 (ко) Distributed traffic data management system

Legal Events

Date Code Title Description
121 EP: The EPO has been informed by WIPO that EP was designated in this application

Ref document number: 15879726

Country of ref document: EP

Kind code of ref document: A1

WWE Wipo information: entry into national phase

Ref document number: 15537462

Country of ref document: US

REEP Request for entry into the european phase

Ref document number: 2015879726

Country of ref document: EP

NENP Non-entry into the national phase

Ref country code: DE