WO2022194147A1 - Target object monitoring method and monitoring device - Google Patents

Target object monitoring method and monitoring device

Info

Publication number
WO2022194147A1
Authority
WO
WIPO (PCT)
Prior art keywords
video frame
frame
monitoring video
monitoring
target
Prior art date
Application number
PCT/CN2022/080927
Other languages
English (en)
French (fr)
Inventor
李源
Original Assignee
中科智云科技有限公司
成都点泽智能科技有限公司
上海点泽科技有限公司
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 中科智云科技有限公司, 成都点泽智能科技有限公司 and 上海点泽科技有限公司
Publication of WO2022194147A1 publication Critical patent/WO2022194147A1/zh

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N7/00Television systems
    • H04N7/18Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • the present application relates to the technical field of monitoring, and in particular, to a target object monitoring method and monitoring device
  • a target object monitoring method and monitoring device are provided.
  • a target object monitoring method comprising:
  • judging whether the target object corresponding to the object trajectory information belongs to the monitoring object, and judging whether the trajectory label information corresponding to the object trajectory information belongs to first label information, wherein the first label information represents that among the at least one target object there is a target object that does not belong to the monitoring object;
  • if the target object belongs to the monitoring object, and the track label information corresponding to the object track information does not belong to the first label information, performing a preset warning operation on the target object.
  • the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, and obtaining at least one object trajectory information includes:
  • if there is at least one target object in the target monitoring video frame, then when each target object belongs to the monitoring object, determining whether at least one piece of object trajectory information has been created based on the historical monitoring video frames, wherein the historical monitoring video frames belong to the surveillance video;
  • if at least one piece of object track information has not been created based on the historical monitoring video frames, creating corresponding object track information for each of the target objects.
  • the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, and obtaining at least one piece of object trajectory information, further includes:
  • if at least one piece of object track information has not been created based on the historical monitoring video frames, creating corresponding object track information for each of the target objects, and configuring the track label information corresponding to each obtained piece of object track information as the first label information.
  • the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, and obtaining at least one object trajectory information further includes:
  • if at least one piece of object trajectory information has been created based on the historical monitoring video frames, performing object matching processing on the at least one piece of object trajectory information and the at least one target object;
  • if there is a target object matching one piece of object trajectory information among the at least one piece of object trajectory information, adding the target object to the matched object trajectory information.
  • the step of acquiring target monitoring video frames includes:
  • acquiring continuous multi-frame monitoring video frames formed by shooting a target monitoring scene, and screening the multi-frame monitoring video frames to obtain at least one target monitoring video frame.
  • the step of screening the multi-frame monitoring video frames to obtain at least one target monitoring video frame includes:
  • among the multi-frame candidate monitoring video frames, calculating the inter-frame difference value between every two candidate monitoring video frames, and performing association processing on the multi-frame candidate monitoring video frames based on a preset inter-frame difference threshold and the inter-frame difference values, to form a corresponding video frame association network;
  • the step of screening the multi-frame monitoring video frames to obtain at least one target monitoring video frame includes:
  • determining, according to the frame start time of the candidate sampled monitoring video frame, the preset time correction unit length and the preset time correction maximum length, multiple frame start correction times corresponding to the candidate sampled monitoring video frame, and determining, according to the frame end time of the candidate sampled monitoring video frame, the preset time correction unit length and the preset time correction maximum length, multiple frame end correction times corresponding to the candidate sampled monitoring video frame;
  • selecting multiple target frame start correction times from the multiple frame start correction times of the candidate sampled monitoring video frame, and selecting, from the multiple frame end correction times of the candidate sampled monitoring video frame, a target frame end correction time corresponding to each target frame start correction time, to obtain multiple target frame correction time groups;
  • determining, among the multi-frame monitoring video frames, a monitoring video frame set corresponding to each target frame correction time group, to obtain multiple monitoring video frame sets;
  • for each monitoring video frame set, performing inter-frame difference processing on the monitoring video frames included in that set to obtain a corresponding difference processing result, and selecting a target monitoring video frame set from the multiple monitoring video frame sets based on the difference processing result corresponding to each monitoring video frame set;
  • taking the monitoring video frames in the target monitoring video frame set corresponding to each candidate sampled monitoring video frame as target monitoring video frames.
  • the step of judging whether the target object corresponding to the object trajectory information belongs to the monitoring object, and judging whether the trajectory label information corresponding to the object trajectory information belongs to the first label information, includes:
  • the embodiment of the present application also provides a monitoring device, and the monitoring device includes:
  • a memory for storing a computer program; and a processor connected to the memory, for executing the computer program stored in the memory, so as to implement the above target object monitoring method.
  • in the target object monitoring method and monitoring device provided by the present application, on the basis of judging whether the target object belongs to the monitoring object, it is also judged whether the trajectory label information corresponding to the object trajectory information of the target object belongs to the first label information, so that the warning operation is performed on the target object only when the target object belongs to the monitoring object and the track label information does not belong to the first label information. Based on this, since the content represented by the first label information is that among the at least one target object in the monitoring video there is a target object that does not belong to the monitoring object, the monitoring object is warned only when it is present alone, which reduces false warnings (for example, when a non-monitoring object appears together with the monitoring object, the non-monitoring object itself can watch over the monitoring object and no warning is necessary).
  • FIG. 1 is a structural block diagram of a monitoring device provided by an embodiment of the present application.
  • FIG. 2 is a schematic flowchart of a target object monitoring method provided by an embodiment of the present application.
  • FIG. 3 is a schematic diagram of a sub-flow of step S110 of the target object monitoring method of FIG. 2 .
  • FIG. 4 is a schematic diagram of another sub-flow of step S110 of the target object monitoring method of FIG. 2 .
  • FIG. 5 is a schematic diagram of a sub-flow of step S1101 of the target object monitoring method of FIG. 3 .
  • FIG. 6 is a schematic diagram of a sub-flow of step S1101B of the target object monitoring method of FIG. 5 .
  • FIG. 7 is a schematic diagram of another sub-flow of step S1101B of the target object monitoring method of FIG. 5 .
  • FIG. 8 is a schematic diagram of another sub-flow of step S120 of the target object monitoring method of FIG. 2 .
  • FIG. 9 is a structural block diagram of a target object monitoring apparatus provided by an embodiment of the present application.
  • Reference numerals: 10 - monitoring device; 12 - memory; 14 - processor; 100 - target object monitoring apparatus; 110 - trajectory information creation module; 120 - object information judgment module; 130 - warning operation execution module.
  • an embodiment of the present application provides a monitoring device 10 , and the monitoring device 10 may include a memory 12 , a processor 14 and a target object monitoring apparatus 100 .
  • the memory 12 and the processor 14 are directly or indirectly electrically connected to realize data transmission or interaction.
  • the memory 12 and the processor 14 may be electrically connected to each other through one or more communication buses or signal lines.
  • the target object monitoring device 100 includes at least one software function module that can be stored in the memory 12 in the form of software or firmware.
  • the processor 14 is configured to execute the executable computer programs stored in the memory 12, for example, the software function modules and computer programs included in the target object monitoring apparatus 100, so as to implement the target object monitoring method provided by the embodiments of the present application.
  • the memory 12 may be, but is not limited to, a random access memory (Random Access Memory, RAM), a read-only memory (Read Only Memory, ROM), a programmable read-only memory (Programmable Read-Only Memory, PROM), an erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), an electrically erasable programmable read-only memory (Electrically Erasable Programmable Read-Only Memory, EEPROM), etc.
  • the processor 14 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU), a network processor (Network Processor, NP), a system on chip (System on Chip, SoC), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
  • the structure shown in FIG. 1 is only for illustration, and the monitoring device 10 may further include more or fewer components than those shown in FIG. 1, or have a configuration different from that shown in FIG. 1; for example, the monitoring device 10 may also include a communication unit for information interaction with other devices (e.g., other terminal devices).
  • the monitoring device 10 may be either a background server connected to an image capture device, for acquiring surveillance video through the image capture device, or an image capture device with data processing capability, so as to process the surveillance video as it is collected.
  • an embodiment of the present application further provides a target object monitoring method applicable to the above monitoring device 10 .
  • the method steps of the target object monitoring method may be implemented by the monitoring device 10 .
  • the target object monitoring method may include the following steps S110 to S130.
  • Step S110 Create corresponding object trajectory information based on at least one target object in the acquired surveillance video, and obtain at least one piece of object trajectory information.
  • the monitoring device 10 may create corresponding object trajectory information based on at least one target object in the obtained monitoring video. In this way, for at least one target object, at least one piece of object trajectory information can be obtained.
  • Step S120 judging whether the target object corresponding to the object trajectory information belongs to the monitoring object, and judging whether the trajectory label information corresponding to the object trajectory information belongs to the first label information.
  • the monitoring device 10 may determine whether the target object corresponding to the object track information belongs to the monitoring object, and determine whether the track label information corresponding to the object track information belongs to the first label information.
  • the first label information may represent that there is a target object that does not belong to the monitoring object in the at least one target object.
  • Step S130 if it is determined that the target object belongs to the monitoring object, and the track label information corresponding to the object track information does not belong to the first label information, perform a preset warning operation on the target object.
  • if it is determined that the target object belongs to the monitoring object and the track label information corresponding to the object track information does not belong to the first label information, the monitoring device 10 may perform a preset warning operation on the target object.
  • that is, on the basis of judging whether the target object belongs to the monitoring object, it is also judged whether the trajectory label information corresponding to the object trajectory information of the target object belongs to the first label information, so that the warning operation is performed on the target object only when the target object belongs to the monitoring object and the trajectory label information does not belong to the first label information. Based on this, since the content represented by the first label information is that among the at least one target object in the monitoring video there is a target object that does not belong to the monitoring object, the monitoring object is warned only when it is present alone.
  • in step S110, the specific manner of creating object trajectory information based on the acquired surveillance video is not limited, and can be selected according to actual application requirements.
  • step S110 may include the following steps:
  • Step S1101: acquiring a target monitoring video frame, wherein the target monitoring video frame belongs to the surveillance video;
  • Step S1102: judging whether there is at least one target object in the target monitoring video frame (for example, humanoid detection can be performed on the target monitoring video frame to determine whether there is at least one target object, that is, to determine whether there is at least one pedestrian, wherein the humanoid detection method can include, but is not limited to, the PPYOLO algorithm, etc.);
  • Step S1103: if there is at least one target object in the target monitoring video frame, then when each of the target objects belongs to the monitoring object, determining whether at least one piece of object track information has been created based on the historical monitoring video frames, wherein the historical monitoring video frames belong to the surveillance video;
  • Step S1104: if at least one piece of object trajectory information has not been created based on the historical monitoring video frames, creating corresponding object trajectory information for each of the target objects (for example, the target monitoring video frame may be the first monitoring video frame, that is, there is no historical monitoring video frame; or there are historical monitoring video frames, but there is no target object in them; in this way, corresponding object trajectory information can be created for each target object in the target monitoring video frame, wherein the object trajectory information may be created based on the humanoid detection frame obtained by the aforementioned humanoid detection).
  • the specific manner of acquiring the target surveillance video frame in the above step S1101 is not limited, and may be selected according to actual application requirements.
  • each frame of monitoring video frame obtained by shooting the target monitoring scene may be used as the target monitoring video frame, thereby effectively ensuring the reliability of monitoring.
  • the above-mentioned target object monitoring method can be applied to an image acquisition device, that is, the monitoring device 10 is an image acquisition device; then, as shown in FIG. 5, the target surveillance video frame can be obtained based on the following steps:
  • S1101A: acquiring continuous multi-frame monitoring video frames formed by shooting the target monitoring scene; S1101B: screening the multi-frame monitoring video frames to obtain at least one target monitoring video frame.
  • some of the surveillance video frames in the captured multiple surveillance video frames may be used as target video frames for subsequent processing, such as humanoid detection.
  • the embodiments of the present application provide the following three alternative examples for screening surveillance video frames.
  • the monitoring video frames can be screened based on the following steps to obtain at least one target monitoring video frame:
  • S1101B1: taking the first monitoring video frame among the multi-frame monitoring video frames as a first target monitoring video frame, taking the last monitoring video frame as a second target monitoring video frame, and taking the other monitoring video frames as candidate monitoring video frames, to obtain multi-frame candidate monitoring video frames (it can be understood that the first monitoring video frame may refer to the monitoring video frame with the earliest timing among the multi-frame monitoring video frames, such as the monitoring video frame with the earliest shooting time, and the last monitoring video frame may refer to the monitoring video frame with the latest timing, such as the monitoring video frame with the latest shooting time);
  • S1101B2: among the multi-frame candidate monitoring video frames, calculating the inter-frame difference value between every two candidate monitoring video frames (for example, based on the inter-frame difference method, the pixel differences at corresponding positions of two candidate monitoring video frames can be computed and the absolute values of the pixel differences summed, to obtain the inter-frame difference value between the two candidate monitoring video frames), and performing association processing on the multi-frame candidate monitoring video frames based on a preset inter-frame difference threshold and the inter-frame difference values (for example, it can be determined whether the inter-frame difference value between two candidate monitoring video frames is greater than the inter-frame difference threshold, and the two candidate monitoring video frames whose inter-frame difference value is greater than the threshold are associated; the inter-frame difference threshold can be generated based on a configuration operation performed by the user according to the actual application scenario, and in applications with low data processing requirements the inter-frame difference threshold can be larger, so that the formed video frame association network is smaller), to form a corresponding video frame association network (based on this, the inter-frame difference value between any two mutually associated candidate monitoring video frames in the video frame association network is greater than the inter-frame difference threshold);
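  • To make the above concrete, the following is a minimal illustrative sketch (not part of the patent text) of the inter-frame difference computation and the association processing, assuming grayscale frames represented as NumPy arrays; all function names are illustrative:

```python
import numpy as np

def inter_frame_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> int:
    """Sum of absolute pixel differences between two same-sized grayscale frames."""
    return int(np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32)).sum())

def build_association_network(frames: list, threshold: int) -> dict:
    """Associate every pair of candidate frames whose inter-frame difference
    value exceeds the preset threshold; returns an adjacency mapping
    (frame index -> set of associated frame indices)."""
    edges = {i: set() for i in range(len(frames))}
    for i in range(len(frames)):
        for j in range(i + 1, len(frames)):
            if inter_frame_difference(frames[i], frames[j]) > threshold:
                edges[i].add(j)
                edges[j].add(i)
    return edges
```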
  • S1101B3: calculating the inter-frame difference value between the first target monitoring video frame and each candidate monitoring video frame, and the inter-frame difference value between the second target monitoring video frame and each candidate monitoring video frame, and determining, based on the inter-frame difference values, a first candidate monitoring video frame with the greatest degree of association with the first target monitoring video frame and a second candidate monitoring video frame with the greatest degree of association with the second target monitoring video frame (for example, the candidate monitoring video frame with the greatest degree of association with the first target monitoring video frame may refer to the candidate monitoring video frame with the largest inter-frame difference value with respect to the first target monitoring video frame, and the candidate monitoring video frame with the greatest degree of association with the second target monitoring video frame may refer to the candidate monitoring video frame with the largest inter-frame difference value with respect to the second target monitoring video frame);
  • S1101B4: acquiring a video frame link sub-network in the video frame association network that connects the first candidate monitoring video frame and the second candidate monitoring video frame (for example, in the video frame association network, if the first candidate monitoring video frame is associated with a candidate monitoring video frame A and a candidate monitoring video frame B, the candidate monitoring video frame A is associated with a candidate monitoring video frame C, and the candidate monitoring video frame B and the candidate monitoring video frame C are each associated with the second candidate monitoring video frame, then a video frame link sub-network including the candidate monitoring video frames A, B and C can be formed), wherein the video frame link sub-network is used to characterize the association relationship between the first candidate monitoring video frame and the second candidate monitoring video frame;
  • S1101B5: determining, according to the degrees of association of the first candidate monitoring video frame and the second candidate monitoring video frame with respect to the video frame sub-link set corresponding to the video frame link sub-network, a target association degree of the first candidate monitoring video frame and the second candidate monitoring video frame relative to the video frame link sub-network (for example, based on the foregoing example, two video frame sub-links can be formed between the first candidate monitoring video frame and the second candidate monitoring video frame, namely "first candidate monitoring video frame, candidate monitoring video frame A, candidate monitoring video frame C, second candidate monitoring video frame" and "first candidate monitoring video frame, candidate monitoring video frame B, second candidate monitoring video frame"; next, the degree of association of the first and second candidate monitoring video frames with respect to each video frame sub-link is calculated, e.g. for the sub-link "first candidate monitoring video frame, candidate monitoring video frame B, second candidate monitoring video frame" the degree of association may be the sum of the inter-frame difference value between the first candidate monitoring video frame and candidate monitoring video frame B and the inter-frame difference value between the second candidate monitoring video frame and candidate monitoring video frame B; then, the weighted sum of the degrees of association of the video frame sub-links is calculated and used as the target association degree, wherein the weight coefficient of the degree of association of each video frame sub-link may be negatively correlated with the number of candidate monitoring video frames included in that sub-link), wherein the video frame sub-link set includes all video frame sub-links satisfying a preset association degree constraint (for example, in order to reduce the data processing amount, the constraint may be that the number of candidate monitoring video frames included in a video frame sub-link is less than a preset value, and when a small data processing amount is required, the preset value can be smaller);
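  • The weighted target association degree can be sketched as follows (an illustration, not the patent's mandated formula: the per-sub-link degree follows the text's three-frame example, and the 1/(1 + link length) weight is one plausible reading of "negatively correlated with the number of candidate frames on the sub-link"):

```python
def sub_link_degree(first, last, link, diff) -> float:
    """Degree of association of one sub-link: per the text's example, the sum
    of the inter-frame differences between each endpoint and the intermediate
    frame adjacent to it (an assumption for links with several intermediates)."""
    return diff(first, link[0]) + diff(last, link[-1])

def target_association_degree(first, last, sub_links, diff) -> float:
    """Weighted sum over all sub-links in the sub-link set; each link is a
    list of intermediate candidate frames between the two endpoints."""
    total = 0.0
    for link in sub_links:
        weight = 1.0 / (1 + len(link))  # longer links contribute less
        total += weight * sub_link_degree(first, last, link, diff)
    return total
```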
  • S1101B6: when the target association degree is greater than a preset association degree threshold, acquiring, based on the video frame association network, an association degree value range formed by the degrees of association between the second candidate monitoring video frame and each candidate monitoring video frame connected to it (that is, after the target association degree is determined based on the foregoing steps, it can be determined whether the target association degree is greater than the association degree threshold, and the association degree value range is acquired when it is);
  • S1101B7: screening the candidate video frames on each video frame sub-link in the video frame sub-link set based on the association degree value range, to obtain at least one third candidate monitoring video frame (for example, for the sub-link "first candidate monitoring video frame, candidate monitoring video frame B, second candidate monitoring video frame", if the degree of association between candidate monitoring video frame B and the first candidate monitoring video frame falls within the association degree value range, and the degree of association between candidate monitoring video frame B and the second candidate monitoring video frame also falls within the association degree value range, then candidate monitoring video frame B is taken as a third candidate monitoring video frame; that is, for a candidate video frame on a video frame sub-link, if its degrees of association with the two candidate video frames adjacent to it on that sub-link both fall within the association degree value range, the candidate video frame can be used as a third candidate monitoring video frame);
  • S1101B8: taking the first target monitoring video frame, the second target monitoring video frame, the first candidate monitoring video frame, the second candidate monitoring video frame and the third candidate monitoring video frame respectively as target monitoring video frames.
  • the monitoring video frames may be screened based on the following steps to obtain at least one target monitoring video frame:
  • in the first step, the inter-frame difference value between every two monitoring video frames among the multi-frame monitoring video frames is calculated, and based on the inter-frame difference values, a first monitoring video frame with the greatest degree of association with the other monitoring video frames and a second monitoring video frame with the greatest degree of association with the first monitoring video frame are determined (for example, for each monitoring video frame, the sum of the inter-frame difference values between that monitoring video frame and the other monitoring video frames can be calculated first, so that multiple sum values are obtained for the multiple monitoring video frames; then, the maximum among the multiple sum values can be determined, and the monitoring video frame corresponding to the maximum is used as the first monitoring video frame; next, the monitoring video frame with the largest inter-frame difference value with respect to the first monitoring video frame is used as the second monitoring video frame);
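  • A brief sketch of this first step (illustrative only; diff is any pairwise inter-frame difference function, such as the one given earlier):

```python
def pick_anchor_frames(frames: list, diff) -> tuple:
    """Return the indices of the first and second monitoring video frames:
    the frame whose summed difference to all other frames is largest, then
    the frame most different from that first frame."""
    n = len(frames)
    sums = [sum(diff(frames[i], frames[j]) for j in range(n) if j != i)
            for i in range(n)]
    first = max(range(n), key=lambda i: sums[i])
    second = max((j for j in range(n) if j != first),
                 key=lambda j: diff(frames[first], frames[j]))
    return first, second
```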
  • in the second step, association processing is performed on the multi-frame monitoring video frames based on the preset inter-frame difference threshold and the inter-frame difference values, to form a corresponding video frame association network (for example, the inter-frame difference value between every two monitoring video frames can be compared with the inter-frame difference threshold to determine each inter-frame difference value greater than the inter-frame difference threshold, and the two monitoring video frames corresponding to such an inter-frame difference value are then associated; in this way, in the formed video frame association network, any two connected monitoring video frames have undergone this association processing);
  • in the third step, the monitoring video frames that have an association relationship with the first monitoring video frame are acquired according to the video frame association network, to obtain a first associated monitoring video frame set;
  • in the fourth step, the monitoring video frames that have an association relationship with the second monitoring video frame are acquired, to obtain a second associated monitoring video frame set;
  • in the fifth step, the union of the first associated monitoring video frame set and the second associated monitoring video frame set is determined and used as a candidate monitoring video frame set;
  • in the sixth step, for each candidate monitoring video frame in the candidate monitoring video frame set, the video frame association links between that candidate monitoring video frame and the first monitoring video frame in the video frame association network are counted, to obtain a first link association degree characterization value of each candidate monitoring video frame, wherein the first link association degree characterization value is obtained by weighting the link association degrees of the video frame association links corresponding to the candidate monitoring video frame (for example, for candidate monitoring video frame 1 in the candidate monitoring video frame set, candidate monitoring video frame 1 is associated with a candidate monitoring video frame 2, and candidate monitoring video frame 2 is associated with the first monitoring video frame, so one video association link can be formed; candidate monitoring video frame 1 is also associated with a candidate monitoring video frame 3, and candidate monitoring video frame 3 is associated with the first monitoring video frame, so another video association link can be formed; the link association degrees of the two video association links can be calculated separately and then weighted, wherein the link association degree of a video association link may be the average of the inter-frame difference values between every two adjacent candidate monitoring video frames on that link), and the weight coefficient of the link association degree of each video frame association link is determined based on the link length of that video frame association link (for example, the weight coefficient may be negatively correlated with the link length);
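  • A hedged sketch of the first link association degree characterization value (the 1/length weighting is an illustrative choice of negative correlation; requires Python 3.10+ for itertools.pairwise):

```python
from itertools import pairwise  # Python 3.10+

def link_association_degree(link: list, diff) -> float:
    """Average inter-frame difference between adjacent frames on one
    association link (link = ordered frames, candidate first, anchor last)."""
    steps = list(pairwise(link))
    return sum(diff(a, b) for a, b in steps) / len(steps)

def link_characterization_value(links: list, diff) -> float:
    """Weight each link's association degree by 1 / link length and sum over
    all association links between the candidate frame and the anchor frame."""
    return sum(link_association_degree(link, diff) / len(link) for link in links)
```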
  • in the seventh step, for each candidate monitoring video frame in the candidate monitoring video frame set, the video frame association links between that candidate monitoring video frame and the second monitoring video frame in the video frame association network are counted, to obtain a second link association degree characterization value of each candidate monitoring video frame, wherein the second link association degree characterization value is obtained by weighting the link association degrees of the video frame association links corresponding to the candidate monitoring video frame, and the weight coefficient of each link association degree is determined based on the link length of the corresponding video frame association link (as in the previous step, not repeated here);
  • in the eighth step, the link association degree characterization value of each candidate monitoring video frame in the candidate monitoring video frame set is calculated according to the corresponding first link association degree characterization value and second link association degree characterization value (for example, for a candidate monitoring video frame in the candidate monitoring video frame set, the average of the first link association degree characterization value and the second link association degree characterization value corresponding to that candidate monitoring video frame can be calculated, and the average used as the link association degree characterization value of that candidate monitoring video frame);
  • in the ninth step, the candidate monitoring video frames in the candidate monitoring video frame set are screened based on the link association degree characterization values, to obtain at least one third monitoring video frame (for example, the one or more candidate monitoring video frames with the largest link association degree characterization value can be used as third monitoring video frames; alternatively, the candidate monitoring video frames whose link association degree characterization value is greater than a preset characterization value can be used as third monitoring video frames);
  • in the tenth step, the first monitoring video frame, the second monitoring video frame and the at least one third monitoring video frame are respectively used as target monitoring video frames.
  • it can be understood that, in the above steps, the inter-frame difference value between two monitoring video frames may be used as the degree of association between the two monitoring video frames.
  • the monitoring video frames can be screened based on the following steps to obtain at least one target monitoring video frame:
  • S1101B1': sampling the multi-frame monitoring video frames to obtain multi-frame sampled monitoring video frames (for example, the multi-frame monitoring video frames can be sampled at equal intervals);
  • S1101B2': determining each sampled monitoring video frame among the multi-frame sampled monitoring video frames in turn as a candidate sampled monitoring video frame, and acquiring frame length information corresponding to the candidate sampled monitoring video frame, wherein the frame length information includes the frame start time and the frame end time of the candidate sampled monitoring video frame (for example, for one candidate sampled monitoring video frame, the frame start time can be 9:15:0.10 and the frame end time can be 9:15:0.15, so the frame length of the candidate sampled monitoring video frame is 0.05 s);
  • S1101B3': acquiring a preset time correction unit length and a preset time correction maximum length, wherein the preset time correction unit length is less than the preset time correction maximum length, and the preset time correction maximum length is greater than the frame length of the monitoring video frames (the higher the accuracy requirement for screening video frames, the smaller the preset time correction unit length and the larger the preset time correction maximum length can be; conversely, the higher the efficiency requirement for video frame screening, the larger the preset time correction unit length and the smaller the preset time correction maximum length can be; the specific values of the preset time correction unit length and the preset time correction maximum length can be generated based on configuration operations performed by the user according to the actual application scenario, e.g. for monitoring video frames with a frame length of 0.05 s, the preset time correction unit length can be 0.03 s);
  • S1101B4': determining, according to the frame start time of the candidate sampled monitoring video frame, the preset time correction unit length and the preset time correction maximum length, multiple frame start correction times corresponding to the candidate sampled monitoring video frame (for example, for the frame start time "9:15:0.10", the obtained frame start correction times can include 9:15:0.07, 9:15:0.04, 9:15:0.01, 9:15:0.13, etc.), and determining, according to the frame end time of the candidate sampled monitoring video frame, the preset time correction unit length and the preset time correction maximum length, multiple frame end correction times corresponding to the candidate sampled monitoring video frame (for example, for the frame end time "9:15:0.15", the obtained frame end correction times can include 9:15:0.18, 9:15:0.21, 9:15:0.24, 9:15:0.12, etc.);
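  • The enumeration of correction times can be sketched as follows (illustrative; the maximum correction length of 0.09 s is an assumption chosen so that the example values above are produced):

```python
def correction_times(anchor: float, unit: float, max_len: float) -> list:
    """Enumerate correction times around an anchor time (a frame start or
    frame end time, in seconds): offsets of +/-unit, +/-2*unit, ... bounded
    by the preset time correction maximum length."""
    times = []
    k = 1
    while k * unit <= max_len:
        times.extend([round(anchor - k * unit, 6), round(anchor + k * unit, 6)])
        k += 1
    return times

# Example mirroring the text: unit 0.03 s, assumed max length 0.09 s, and a
# frame start time 0.10 s past 9:15 yield 0.07, 0.13, 0.04, 0.16, 0.01, 0.19,
# which covers the 9:15:0.07, 9:15:0.04, 9:15:0.01 and 9:15:0.13 values above.
```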
  • S1101B5': selecting multiple target frame start correction times from the multiple frame start correction times of the candidate sampled monitoring video frame (for example, a part of the frame start correction times may be randomly selected as target frame start correction times, or all frame start correction times may be used as target frame start correction times), and selecting, from the multiple frame end correction times of the candidate sampled monitoring video frame, a target frame end correction time corresponding to each target frame start correction time (for example, for each target frame start correction time, one frame end correction time may be selected from the multiple frame end correction times as the target frame end correction time corresponding to that target frame start correction time, wherein the difference between the target frame end correction time and the target frame start correction time is greater than or equal to the frame length of the monitoring video frames), to obtain multiple target frame correction time groups;
  • S1101B6': determining, among the multi-frame monitoring video frames, a monitoring video frame set corresponding to each target frame correction time group, to obtain multiple monitoring video frame sets (that is, for each target frame correction time group, each monitoring video frame whose frame length information intersects that target frame correction time group is taken as part of the monitoring video frame set corresponding to that group; in this way, multiple monitoring video frame sets can be obtained for the multiple target frame correction time groups);
  • S1101B7': for each monitoring video frame set, performing inter-frame difference processing on the monitoring video frames included in that set to obtain a corresponding difference processing result, and selecting a target monitoring video frame set from the multiple monitoring video frame sets based on the difference processing result corresponding to each set (for example, for a monitoring video frame set, the inter-frame difference value between every two monitoring video frames in the set can be calculated and the average of these values used as the set's difference processing result; a set whose average is greater than a threshold can then be selected as the target monitoring video frame set, where the threshold can be the mean of the multiple averages);
  • S1101B8': taking the monitoring video frames in the target monitoring video frame set corresponding to each candidate sampled monitoring video frame as target monitoring video frames.
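  • The set selection can be sketched as follows (the "average exceeds the mean of all averages" rule is an assumption consistent with the threshold hint in the text):

```python
def select_target_sets(frame_sets: list, diff) -> list:
    """Average pairwise inter-frame difference per monitoring video frame set
    serves as its difference processing result; keep the sets whose average
    exceeds the mean of all the averages."""
    def avg_pairwise(frames):
        pairs = [(i, j) for i in range(len(frames))
                 for j in range(i + 1, len(frames))]
        if not pairs:
            return 0.0
        return sum(diff(frames[i], frames[j]) for i, j in pairs) / len(pairs)

    averages = [avg_pairwise(s) for s in frame_sets]
    threshold = sum(averages) / len(averages)
    return [s for s, a in zip(frame_sets, averages) if a > threshold]
```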
  • step S110 may further include other different steps based on different requirements.
  • in order to improve the accuracy of the warning operation, in an alternative example, after performing the above step S1102, if there is no target object in the target monitoring video frame, then as shown in FIG. 3, step S110 may further include the following steps:
  • S1105: determining whether at least one piece of object trajectory information has been created based on the historical monitoring video frames; S1106: if at least one piece of object trajectory information has been created based on the historical monitoring video frames, updating the track loss frame count corresponding to each piece of object trajectory information, wherein the track loss frame count is used to determine whether to execute the warning operation (the specific function of the track loss frame count is described later).
  • for example, if there is no pedestrian in the target surveillance video frame, it may first be determined whether at least one piece of object track information has been created. Then, when at least one piece of object track information has been created, the track loss frame count corresponding to each piece of object track information can be updated, e.g. incremented by 1, which indicates that the currently monitored pedestrian is not in the target monitoring scene, that is, the pedestrian is confirmed as lost at the current moment.
  • in order to avoid the resource waste caused by unnecessary warning operations, in an alternative example, after performing the above step S1102, if there is at least one target object in the target monitoring video frame, and among the at least one target object there is a target object that does not belong to the monitoring object, then as shown in FIG. 4, step S110 may further include the following steps:
  • S1107: determining whether at least one piece of object track information has been created based on the historical monitoring video frames; S1108: if at least one piece of object track information has been created based on the historical monitoring video frames, configuring the track label information corresponding to each piece of object track information as the first label information; S1109: if at least one piece of object track information has not been created based on the historical monitoring video frames, creating corresponding object track information for each of the target objects, and configuring the track label information corresponding to each obtained piece of object track information as the first label information.
  • in this way, the track label information corresponding to the at least one piece of object track information can be configured as the first label information, which indicates that there are adults (non-monitoring objects) among the pedestrians, so no warning operation is required.
  • it can be understood that configuring the track label information as the first label information may mean maintaining the first label information when the track label information already belongs to the first label information, and changing the track label information to the first label information when it does not.
  • step S110 may further include the following steps:
  • S11010: performing object matching processing on the at least one piece of object track information and the at least one target object;
  • S11011: if there is a piece of object track information that does not match any target object among the at least one target object, updating the track loss frame count corresponding to that piece of object track information, wherein the track loss frame count is used to judge whether to execute the warning operation;
  • S11012: if there is a target object that does not match any piece of object track information among the at least one piece of object track information, creating corresponding object track information based on that target object; and if there is a target object matching one piece of object track information among the at least one piece of object track information, adding the target object to the matched object track information.
  • for example, if two pieces of object track information have been created, the two pieces of object track information are matched with the pedestrians in the target surveillance video frame. If there is a piece of object trajectory information that does not match any pedestrian in the target surveillance video frame, it indicates that the pedestrian corresponding to that trajectory is lost in the target video frame, so the track loss frame count corresponding to that piece of object trajectory information can be updated, e.g. incremented by 1. Or, if there are 3 pedestrians in the target surveillance video frame and one pedestrian does not match any piece of object trajectory information, it indicates that the pedestrian appears for the first time, so corresponding object trajectory information can be created for that pedestrian. And if a pedestrian matches a piece of object trajectory information, the pedestrian can be added to that piece of object trajectory information.
  • in the case that the pedestrian detection method is humanoid detection, the detected humanoid detection frame can be added to the object trajectory information.
  • a piece of object track information may include multiple humanoid detection frames with a sequential relationship.
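  • The track bookkeeping of steps S11010 to S11012 can be sketched as follows (the greedy IoU matcher is an assumption; the patent does not fix a particular matching method):

```python
from dataclasses import dataclass, field

@dataclass
class Track:
    boxes: list = field(default_factory=list)  # humanoid detection frames, in order
    lost_frames: int = 0                       # track loss frame count

def iou(a, b) -> float:
    """Intersection-over-union of two (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def update_tracks(tracks: list, detections: list, iou_threshold: float = 0.3) -> list:
    """Greedy matching: extend matched tracks (S11012, matched branch), start
    tracks for unmatched detections (S11012, new-object branch), and bump the
    lost-frame counter of unmatched tracks (S11011)."""
    unmatched = set(range(len(detections)))
    for track in tracks:
        best, best_iou = None, iou_threshold
        for i in unmatched:
            score = iou(track.boxes[-1], detections[i])
            if score > best_iou:
                best, best_iou = i, score
        if best is None:
            track.lost_frames += 1
        else:
            track.boxes.append(detections[best])
            unmatched.discard(best)
    for i in unmatched:
        tracks.append(Track(boxes=[detections[i]]))
    return tracks
```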
  • in step S120, the specific manner of determining whether the target object belongs to the monitoring object and whether the track label information belongs to the first label information is not limited, and can be selected according to actual application requirements.
  • step S120 may include the following steps:
  • S1201: acquiring the track loss frame count corresponding to each piece of object track information; S1202: judging whether each track loss frame count is greater than a preset frame count threshold; S1203: if there is a track loss frame count greater than the frame count threshold, judging whether the target object corresponding to the object track information corresponding to that track loss frame count belongs to the monitoring object, and whether the track label information corresponding to that object track information belongs to the first label information.
  • for example, the track loss frame count corresponding to the object track information of each pedestrian may be obtained first (as described above, if object trajectory information is created for a pedestrian A in the first acquired monitoring video frame, and pedestrian A is absent from the next 3 monitoring video frames, then the track loss frame count corresponding to pedestrian A is 3; if a specific pedestrian is present in every acquired surveillance video frame, the track loss frame count of that specific pedestrian is 0). Secondly, it can be determined whether each track loss frame count is greater than the preset frame count threshold.
  • then, if there is a track loss frame count greater than the frame count threshold, it is determined whether the corresponding pedestrian is a child and whether the corresponding track label information belongs to the first label information, that is, whether the pedestrian is a child and whether the other pedestrians accompanying the pedestrian are also children. In this way, if a pedestrian is a child, and there are no other pedestrians acting together with the child, or the other pedestrians acting together are also children, it can be determined that a preset warning operation needs to be performed for that pedestrian.
  • the specific manner of judging whether the target object belongs to the monitoring object is not limited, and can be selected according to actual application requirements.
  • for example, the height information of the target object may be calculated first, e.g. the height information of the humanoid detection frame obtained by the aforementioned humanoid detection may be used to determine the height information of the target object; then, the height information is compared with child height threshold information, so as to determine whether the target object is a child.
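  • A minimal sketch of the height-based check (the pixel-to-metre scale and the 1.3 m threshold are assumptions for illustration):

```python
def is_child(det_box, pixels_per_meter: float, child_height_m: float = 1.3) -> bool:
    """Classify a humanoid detection box (x1, y1, x2, y2) as a child if its
    estimated real-world height falls below the child height threshold."""
    x1, y1, x2, y2 = det_box
    estimated_height_m = (y2 - y1) / pixels_per_meter
    return estimated_height_m < child_height_m
```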
  • it should be noted that, in step S130, the specific manner of performing the warning operation is not limited, and can be selected according to actual application requirements.
  • warning information may be output to the terminal device of the monitoring personnel.
  • alert information can be output to the guardian's terminal device.
  • in addition, if it is determined through step S120 that the target object does not belong to the monitoring object and/or the track label information belongs to the first label information, the warning operation may not be performed.
  • in that case, the object track information may also be deleted.
  • similarly, after step S130 is performed, that is, after the warning operation is performed, the object track information may also be deleted in order to save storage resources.
  • an embodiment of the present application further provides a target object monitoring apparatus 100 that can be applied to the above monitoring device 10 .
  • the target object monitoring apparatus 100 may include a trajectory information creation module 110 , an object information determination module 120 and an alert operation execution module 130 .
  • the trajectory information creation module 110 is configured to create corresponding object trajectory information based on at least one target object in the acquired surveillance video, and obtain at least one piece of object trajectory information.
  • the trajectory information creation module 110 may be configured to execute step S110 shown in FIG. 2 , and reference may be made to the foregoing description of step S110 for relevant content executable by the trajectory information creation module 110 .
  • the object information judgment module 120 is configured to judge whether the target object corresponding to the object trajectory information belongs to the monitoring object, and to judge whether the trajectory label information corresponding to the object trajectory information belongs to the first label information, wherein the first label information Indicates that there is a target object that does not belong to the monitoring object in the at least one target object.
  • the object information determination module 120 may be configured to execute step S120 shown in FIG. 2 , and the foregoing description of step S120 may be referred to for relevant content executable by the object information determination module 120 .
  • the warning operation execution module 130 is configured to perform the preset warning operation on the target object if the target object belongs to the monitoring object and the trajectory label information corresponding to the object trajectory information does not belong to the first label information.
  • the warning operation execution module 130 may be configured to execute step S130 shown in FIG. 2, and reference may be made to the foregoing description of step S130 for the relevant content executable by the warning operation execution module 130.
  • in an embodiment of the present application, a computer-readable storage medium is further provided, in which a computer program is stored, and the computer program executes each step of the above target object monitoring method when running.
  • in summary, the target object monitoring method and monitoring device provided by this application judge, on the basis of judging whether the target object belongs to the monitoring object, whether the trajectory label information corresponding to the object trajectory information of the target object belongs to the first label information, so that the warning operation is performed on the target object only when the target object belongs to the monitoring object and the track label information does not belong to the first label information. Based on this, since the content represented by the first label information is that among the at least one target object in the monitoring video there is a target object that does not belong to the monitoring object, the monitoring object is warned only when it is present alone, which reduces false warnings and improves the monitoring effect.
  • each block in the flowchart or block diagrams may represent a module, segment, or portion of code that contains one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures.
  • each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented in dedicated hardware-based systems that perform the specified functions or actions, or can be implemented in a combination of dedicated hardware and computer instructions.
  • each functional module in each embodiment of the present application may be integrated together to form an independent part, or each module may exist independently, or two or more modules may be integrated to form an independent part.
  • if the functions are implemented in the form of software function modules and sold or used as independent products, they can be stored in a computer-readable storage medium.
  • based on this understanding, the technical solution of the present application, in essence, or the part that contributes to the prior art, or a part of the technical solution, can be embodied in the form of a software product.
  • the computer software product is stored in a storage medium and includes several instructions used to cause a computer device (which may be a personal computer, an electronic device, a network device, etc.) to execute all or part of the steps of the methods described in the various embodiments of the present application.
  • the aforementioned storage medium includes: a USB flash drive, a mobile hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk, an optical disk, or other media that can store program code.
  • the terms "comprising", "including" or any other variation thereof are intended to encompass non-exclusive inclusion, such that a process, method, article or device comprising a series of elements includes not only those elements, but also other elements not expressly listed or inherent to such a process, method, article or apparatus. Without further limitation, an element qualified by the phrase "comprising a" does not preclude the presence of additional identical elements in a process, method, article or apparatus that includes the element.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The present application provides a target object monitoring method and monitoring device. In the present application, first, corresponding object trajectory information is created based on target objects in a surveillance video, and at least one piece of object trajectory information is obtained. Secondly, it is judged whether the target object corresponding to the object trajectory information belongs to the monitoring object, and whether the trajectory label information corresponding to the object trajectory information belongs to first label information, wherein the first label information represents that among the at least one target object there is a target object that does not belong to the monitoring object. Then, if the target object belongs to the monitoring object and the trajectory label information corresponding to the object trajectory information does not belong to the first label information, a preset warning operation is performed on the target object.

Description

Target object monitoring method and monitoring device
Cross-Reference to Related Applications
This application claims priority to the Chinese patent application No. 202110274089.0, filed with the China Patent Office on March 15, 2021, the entire content of which is incorporated herein by reference.
Technical Field
The present application relates to the technical field of monitoring, and in particular, to a target object monitoring method and monitoring device.
Background
In the field of monitoring technology, there are application scenarios in which specific monitoring objects, such as children, the elderly and criminals, need to be monitored. However, in existing monitoring technology, the monitoring effect during the monitoring of specific monitoring objects is relatively poor.
Summary
According to various embodiments of the present application, a target object monitoring method and monitoring device are provided.
A target object monitoring method includes:
creating corresponding object trajectory information based on at least one target object in an acquired surveillance video, to obtain at least one piece of object trajectory information;
judging whether the target object corresponding to the object trajectory information belongs to a monitoring object, and judging whether the trajectory label information corresponding to the object trajectory information belongs to first label information, wherein the first label information represents that among the at least one target object there is a target object that does not belong to the monitoring object; and
if the target object belongs to the monitoring object, and the trajectory label information corresponding to the object trajectory information does not belong to the first label information, performing a preset warning operation on the target object.
In one embodiment, in the above target object monitoring method, the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information, includes:
acquiring a target monitoring video frame, wherein the target monitoring video frame belongs to the surveillance video;
judging whether there is at least one target object in the target monitoring video frame;
if there is at least one target object in the target monitoring video frame, then when every target object belongs to the monitoring object, judging whether at least one piece of object trajectory information has been created based on historical monitoring video frames, wherein the historical monitoring video frames belong to the surveillance video; and
if at least one piece of object trajectory information has not been created based on the historical monitoring video frames, creating corresponding object trajectory information for each of the target objects.
In one embodiment, in the above target object monitoring method, the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information, further includes:
if there is no target object in the target monitoring video frame, judging whether at least one piece of object trajectory information has been created based on the historical monitoring video frames; and
if at least one piece of object trajectory information has been created based on the historical monitoring video frames, updating the track loss frame count corresponding to each piece of object trajectory information, wherein the track loss frame count is used to judge whether to execute the warning operation.
In one embodiment, in the above target object monitoring method, the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information, further includes:
if there is at least one target object in the target monitoring video frame, and among the at least one target object there is a target object that does not belong to the monitoring object, judging whether at least one piece of object trajectory information has been created based on the historical monitoring video frames;
if at least one piece of object trajectory information has been created based on the historical monitoring video frames, configuring the trajectory label information corresponding to each piece of object trajectory information as the first label information; and
if at least one piece of object trajectory information has not been created based on the historical monitoring video frames, creating corresponding object trajectory information for each of the target objects, and configuring the trajectory label information corresponding to each obtained piece of object trajectory information as the first label information.
In one embodiment, in the above target object monitoring method, the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information, further includes:
if at least one piece of object trajectory information has been created based on the historical monitoring video frames, performing object matching processing on the at least one piece of object trajectory information and the at least one target object;
if there is a piece of object trajectory information that does not match any target object among the at least one target object, updating the track loss frame count corresponding to that piece of object trajectory information, wherein the track loss frame count is used to judge whether to execute the warning operation;
if there is a target object that does not match any piece of object trajectory information among the at least one piece of object trajectory information, creating corresponding object trajectory information based on that target object; and
if there is a target object matching one piece of object trajectory information among the at least one piece of object trajectory information, adding the target object to the matched object trajectory information.
In one embodiment, in the above target object monitoring method, the step of acquiring a target monitoring video frame includes:
acquiring continuous multi-frame monitoring video frames formed by shooting a target monitoring scene; and
screening the multi-frame monitoring video frames to obtain at least one target monitoring video frame.
In one embodiment, in the above target object monitoring method, the step of screening the multi-frame monitoring video frames to obtain at least one target monitoring video frame includes:
taking the first monitoring video frame among the multi-frame monitoring video frames as a first target monitoring video frame, taking the last monitoring video frame among the multi-frame monitoring video frames as a second target monitoring video frame, and taking the monitoring video frames other than the first and last monitoring video frames as candidate monitoring video frames, to obtain multi-frame candidate monitoring video frames;
among the multi-frame candidate monitoring video frames, calculating the inter-frame difference value between every two candidate monitoring video frames, and performing association processing on the multi-frame candidate monitoring video frames based on a preset inter-frame difference threshold and the inter-frame difference values, to form a corresponding video frame association network;
calculating the inter-frame difference value between the first target monitoring video frame and each candidate monitoring video frame, and the inter-frame difference value between the second target monitoring video frame and each candidate monitoring video frame, and determining, based on the inter-frame difference values, a first candidate monitoring video frame with the greatest degree of association with the first target monitoring video frame and a second candidate monitoring video frame with the greatest degree of association with the second target monitoring video frame;
acquiring a video frame link sub-network in the video frame association network that connects the first candidate monitoring video frame and the second candidate monitoring video frame, wherein the video frame link sub-network is used to characterize the association relationship between the first candidate monitoring video frame and the second candidate monitoring video frame;
determining, according to the degrees of association of the first candidate monitoring video frame and the second candidate monitoring video frame with respect to the video frame sub-link set corresponding to the video frame link sub-network, a target association degree of the first candidate monitoring video frame and the second candidate monitoring video frame relative to the video frame link sub-network, wherein the video frame sub-link set includes all video frame sub-links satisfying a preset association degree constraint;
when the target association degree is greater than a preset association degree threshold, acquiring, based on the video frame association network, an association degree value range formed by the degrees of association between the second candidate monitoring video frame and each candidate monitoring video frame connected to it;
screening the candidate video frames on each video frame sub-link in the video frame sub-link set based on the association degree value range, to obtain at least one third candidate monitoring video frame; and
taking the first target monitoring video frame, the second target monitoring video frame, the first candidate monitoring video frame, the second candidate monitoring video frame and the third candidate monitoring video frame respectively as target monitoring video frames.
In one embodiment, in the above target object monitoring method, the step of filtering the multiple monitoring video frames to obtain at least one target monitoring video frame includes:
sampling the multiple monitoring video frames to obtain multiple sampled monitoring video frames;
determining, in turn, each of the sampled monitoring video frames as a candidate sampled monitoring video frame, and obtaining frame length information corresponding to the candidate sampled monitoring video frame, wherein the frame length information includes a frame start time of the candidate sampled monitoring video frame and a frame end time of the candidate sampled monitoring video frame;
obtaining a preset time-correction unit length and a preset time-correction maximum length, wherein the preset time-correction unit length is less than the preset time-correction maximum length, and the preset time-correction maximum length is greater than the frame length of the monitoring video frames;
determining multiple frame start correction times corresponding to the candidate sampled monitoring video frame based on the frame start time of the candidate sampled monitoring video frame, the preset time-correction unit length and the preset time-correction maximum length, and determining multiple frame end correction times corresponding to the candidate sampled monitoring video frame based on the frame end time of the candidate sampled monitoring video frame, the preset time-correction unit length and the preset time-correction maximum length;
selecting multiple target frame start correction times from the multiple frame start correction times of the candidate sampled monitoring video frame, and selecting, from the multiple frame end correction times of the candidate sampled monitoring video frame, a target frame end correction time corresponding to each target frame start correction time, to obtain multiple target frame correction time groups;
determining, among the multiple monitoring video frames, the monitoring video frame set corresponding to each target frame correction time group, to obtain multiple monitoring video frame sets;
for each monitoring video frame set, performing inter-frame difference processing on the monitoring video frames included in that set to obtain a corresponding difference processing result, and selecting a target monitoring video frame set from the multiple monitoring video frame sets based on the difference processing result corresponding to each monitoring video frame set; and
taking the monitoring video frames in the target monitoring video frame set corresponding to each candidate sampled monitoring video frame as target monitoring video frames.
In one embodiment, in the above target object monitoring method, the step of determining whether the target object corresponding to the object trajectory information belongs to the monitored objects, and determining whether the trajectory label information corresponding to the object trajectory information belongs to the first label information, includes:
obtaining the lost-track frame count corresponding to each piece of object trajectory information;
determining whether each lost-track frame count is greater than a preset frame count threshold; and
if there is a lost-track frame count greater than the frame count threshold, determining whether the target object corresponding to the object trajectory information corresponding to that lost-track frame count belongs to the monitored objects, and determining whether the trajectory label information corresponding to that object trajectory information belongs to the first label information.
An embodiment of the present application further provides a monitoring device, the monitoring device including:
a memory for storing a computer program; and
a processor connected to the memory, configured to execute the computer program stored in the memory, so as to implement the above target object monitoring method.
With the target object monitoring method and monitoring device provided by the present application, in addition to determining whether a target object belongs to the monitored objects, it is also determined whether the trajectory label information corresponding to that target object's object trajectory information belongs to the first label information, so that the warning operation is performed on the target object only when the target object belongs to the monitored objects and the trajectory label information does not belong to the first label information. Since the first label information indicates that, among the at least one target object in the surveillance video, there is a target object that does not belong to the monitored objects, a warning is issued only when a monitored object appears alone. This alleviates the problem in the prior art that false warnings easily occur because a warning operation is performed as soon as a monitored object is detected (for example, when a non-monitored object appears together with a monitored object, the non-monitored object can itself watch over the monitored object, so no warning is needed), thereby improving the relatively poor monitoring effect of existing monitoring technology, which is of high practical value.
To make the above objects, features and advantages of the present application more apparent and understandable, specific embodiments are described in detail below in conjunction with the accompanying drawings.
Brief Description of the Drawings
FIG. 1 is a structural block diagram of a monitoring device provided by an embodiment of the present application.
FIG. 2 is a schematic flowchart of a target object monitoring method provided by an embodiment of the present application.
FIG. 3 is a schematic sub-flowchart of step S110 of the target object monitoring method of FIG. 2.
FIG. 4 is another schematic sub-flowchart of step S110 of the target object monitoring method of FIG. 2.
FIG. 5 is a schematic sub-flowchart of step S1101 of the target object monitoring method of FIG. 3.
FIG. 6 is a schematic sub-flowchart of step S1101B of the target object monitoring method of FIG. 5.
FIG. 7 is another schematic sub-flowchart of step S1101B of the target object monitoring method of FIG. 5.
FIG. 8 is a schematic sub-flowchart of step S120 of the target object monitoring method of FIG. 2.
FIG. 9 is a structural block diagram of a target object monitoring apparatus provided by an embodiment of the present application.
Reference numerals: 10 - monitoring device; 12 - memory; 14 - processor; 100 - target object monitoring apparatus; 110 - trajectory information creation module; 120 - object information determination module; 130 - warning operation execution module.
Detailed Description of the Embodiments
To make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be described clearly and completely below in conjunction with the accompanying drawings. Obviously, the described embodiments are only some, not all, of the embodiments of the present application. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, may be arranged and designed in a variety of different configurations.
Therefore, the following detailed description of the embodiments of the present application provided in the accompanying drawings is not intended to limit the scope of the claimed application, but merely represents some embodiments of the present application. All other embodiments obtained by a person of ordinary skill in the art based on the embodiments of the present application without creative effort shall fall within the scope of protection of the present application.
As shown in FIG. 1, an embodiment of the present application provides a monitoring device 10. The monitoring device 10 may include a memory 12, a processor 14 and a target object monitoring apparatus 100.
The memory 12 and the processor 14 are electrically connected, directly or indirectly, to enable data transmission or interaction. For example, the memory 12 and the processor 14 may be electrically connected to each other through one or more communication buses or signal lines. The target object monitoring apparatus 100 includes at least one software functional module that may be stored in the memory 12 in the form of software or firmware. The processor 14 is configured to execute the executable computer programs stored in the memory 12, such as the software functional modules and computer programs included in the target object monitoring apparatus 100, so as to implement the target object monitoring method provided by the embodiments of the present application.
Optionally, the memory 12 may be, but is not limited to, a random access memory (RAM), a read-only memory (ROM), a programmable read-only memory (PROM), an erasable programmable read-only memory (EPROM), an electrically erasable programmable read-only memory (EEPROM), or the like.
In addition, the processor 14 may be a general-purpose processor, including a central processing unit (CPU), a network processor (NP), a system on chip (SoC), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component.
It can be understood that the structure shown in FIG. 1 is merely illustrative. The monitoring device 10 may include more or fewer components than shown in FIG. 1, or have a configuration different from that shown in FIG. 1; for example, the monitoring device 10 may also include a communication unit for exchanging information with other devices (such as other terminal devices).
The monitoring device 10 may be a back-end server connected to an image acquisition device, configured to obtain surveillance video through the image acquisition device, or it may be an image acquisition device with data processing capability, so as to process the surveillance video when it is captured.
With reference to FIG. 2, an embodiment of the present application further provides a target object monitoring method applicable to the above monitoring device 10. The method steps of the target object monitoring method may be implemented by the monitoring device 10.
The specific flow shown in FIG. 2 is described in detail below. The target object monitoring method may include the following steps S110 to S130.
Step S110: creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information.
In this embodiment, the monitoring device 10 may create corresponding object trajectory information based on at least one target object in the acquired surveillance video. In this way, for the at least one target object, at least one piece of object trajectory information can be obtained.
Step S120: determining whether the target object corresponding to the object trajectory information belongs to the monitored objects, and determining whether the trajectory label information corresponding to the object trajectory information belongs to the first label information.
In this embodiment, after the at least one piece of object trajectory information is obtained in step S110, the monitoring device 10 may determine whether the target object corresponding to the object trajectory information belongs to the monitored objects, and determine whether the trajectory label information corresponding to the object trajectory information belongs to the first label information.
The first label information may indicate that, among the at least one target object, there is a target object that does not belong to the monitored objects.
Step S130: if it is determined that the target object belongs to the monitored objects, and the trajectory label information corresponding to the object trajectory information does not belong to the first label information, performing a preset warning operation on the target object.
In this embodiment, after it is determined in step S120 that the target object belongs to the monitored objects and that the trajectory label information corresponding to the object trajectory information does not belong to the first label information, the monitoring device 10 may perform a preset warning operation on the target object.
Based on the above method, in addition to determining whether a target object belongs to the monitored objects, it is also determined whether the trajectory label information corresponding to that target object's object trajectory information belongs to the first label information, so that the warning operation is performed on the target object only when the target object belongs to the monitored objects and the trajectory label information does not belong to the first label information. Since the first label information indicates that, among the at least one target object in the surveillance video, there is a target object that does not belong to the monitored objects, a warning is issued only when a monitored object appears alone. This alleviates the problem in the prior art that false warnings easily occur because a warning operation is performed as soon as a monitored object is detected (for example, when a non-monitored object appears together with a monitored object, the non-monitored object can itself watch over the monitored object, so no warning is needed), thereby improving the relatively poor monitoring effect of existing monitoring technology.
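The decision rule of steps S110 to S130 can be summarized in a short Python sketch. This is an illustration rather than the claimed implementation; the `Track` structure, the `FIRST_LABEL` constant and all field names are hypothetical stand-ins:

```python
from dataclasses import dataclass

# Hypothetical constant standing in for the "first label information":
# it marks tracks that co-occurred with a non-monitored object.
FIRST_LABEL = "has_non_monitored_companion"

@dataclass
class Track:
    """Minimal stand-in for one piece of object trajectory information."""
    object_id: int
    is_monitored: bool   # e.g. the tracked pedestrian was classified as a child
    label: str = ""      # trajectory label information

def should_warn(track: Track) -> bool:
    """Warn only when the object belongs to the monitored objects AND its
    trajectory label information is not the first label information."""
    return track.is_monitored and track.label != FIRST_LABEL

# A child tracked alone triggers the warning; a child whose track was
# labelled with the first label information (an adult was present) does not.
print(should_warn(Track(1, True)))               # True
print(should_warn(Track(2, True, FIRST_LABEL)))  # False
```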
Moreover, based on the above method, in one application scenario, such as monitoring children traveling alone, children are the monitored objects and adults are not, so the warning operation is performed only when a child travels alone. Conversely, if a child travels together with an adult, no warning needs to be issued. In this way, even if a safety incident occurs, since the child is accompanied by an adult, the supervising party (such as the property management of a residential community) can be exempted from liability, which gives the method high application value.
Regarding step S110, it should be noted that the specific manner of creating the object trajectory information based on the acquired surveillance video is not limited and may be selected according to actual application requirements.
For example, in an alternative example, as shown in FIG. 3, step S110 may include the following steps, a sketch of which is given after this list:
Step S1101: acquiring a target monitoring video frame, wherein the target monitoring video frame belongs to the surveillance video;
Step S1102: determining whether at least one target object exists in the target monitoring video frame (for example, human-shape detection may be performed on the target monitoring video frame to determine whether at least one target object exists, i.e., to determine whether at least one pedestrian exists, where the human-shape detection method may include, but is not limited to, the PP-YOLO algorithm);
Step S1103: if at least one target object exists in the target monitoring video frame, then, when every target object belongs to the monitored objects, determining whether at least one piece of object trajectory information has already been created based on historical monitoring video frames, wherein the historical monitoring video frames belong to the surveillance video; and
Step S1104: if no object trajectory information has been created based on historical monitoring video frames, creating corresponding object trajectory information for each target object (for example, the target monitoring video frame may be the first monitoring video frame, i.e., there is no historical monitoring video; or historical monitoring video frames exist but contain no target object; in either case, corresponding object trajectory information may be created for each target object in the target monitoring video frame, where one specific way of creating the object trajectory information is to build it from the human-shape detection boxes obtained by the aforementioned human-shape detection).
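The following is a minimal sketch of steps S1103 and S1104 (illustrative only; the detection box format and the track dictionary layout are assumptions, and the person detector itself, e.g. PP-YOLO, is treated as an external component that supplies the detections):

```python
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]   # (x, y, w, h) of a human-shape detection box
Track = Dict[str, object]         # {"boxes": [...], "lost_frames": int, "label": str}

def create_tracks_if_needed(detections: List[Box], tracks: List[Track]) -> List[Track]:
    """Steps S1103/S1104: if persons were detected in the target frame but
    no object trajectory information exists from historical frames yet,
    start one track per detection, seeded with its detection box."""
    if detections and not tracks:
        tracks = [{"boxes": [box], "lost_frames": 0, "label": ""}
                  for box in detections]
    return tracks

# Example: two detections in the first frame yield two fresh tracks.
print(len(create_tracks_if_needed([(10, 20, 40, 90), (200, 30, 45, 95)], [])))  # 2
```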
Optionally, in the above example, the specific manner of acquiring the target monitoring video frame in step S1101 is not limited and may be selected according to actual application requirements.
For example, in an alternative example, every monitoring video frame obtained by shooting the target monitoring scene may be taken as a target monitoring video frame, thereby effectively ensuring the reliability of monitoring.
For another example, in another alternative example, in order to reduce the data processing load of the monitoring device 10 so that the above target object monitoring method can be applied to an image acquisition device, i.e., the monitoring device 10 is an image acquisition device, the target monitoring video frame may be acquired based on the following steps, as shown in FIG. 5:
S1101A: acquiring multiple consecutive monitoring video frames formed by shooting the target monitoring scene; S1101B: filtering the multiple monitoring video frames to obtain at least one target monitoring video frame.
That is, some of the captured monitoring video frames may be taken as target video frames for subsequent processing, such as human-shape detection.
It can be understood that, in the above example, in order to ensure that the monitoring decisions based on the target monitoring video frames remain highly reliable while the data processing load is reduced, the embodiments of the present application respectively provide the following three alternative examples for filtering the monitoring video frames.
For example, in the first alternative example, as shown in FIG. 6, the monitoring video frames may be filtered based on the following steps to obtain at least one target monitoring video frame:
S1101B1: taking the first monitoring video frame among the multiple monitoring video frames as a first target monitoring video frame, taking the last monitoring video frame as a second target monitoring video frame, and taking the monitoring video frames other than the first and last as candidate monitoring video frames, to obtain multiple candidate monitoring video frames (it can be understood that the first monitoring video frame may refer to the temporally earliest of the multiple frames, e.g., the frame with the earliest shooting time, and the last monitoring video frame to the temporally latest, e.g., the frame with the latest shooting time);
S1101B2: among the multiple candidate monitoring video frames, computing the inter-frame difference value between every two candidate monitoring video frames (for example, based on the inter-frame difference method, the pixel differences between corresponding pixel positions of two candidate monitoring video frames may be computed, and the absolute values of these pixel differences then summed to give the inter-frame difference value between the two frames), and associating the multiple candidate monitoring video frames based on a preset inter-frame difference threshold and the inter-frame difference values (for example, it may be determined whether the inter-frame difference value between two candidate monitoring video frames is greater than the inter-frame difference threshold, and the two frames are associated when it is; the threshold may be generated based on configuration operations performed by the user according to the actual application scenario, and in applications with low data-processing requirements the threshold may be larger, which makes the resulting video frame association network smaller), to form a corresponding video frame association network (on this basis, the inter-frame difference value between any two mutually associated candidate monitoring video frames in the network is greater than the threshold; a simplified sketch of these two operations follows after this list);
S1101B3: computing the inter-frame difference value between the first target monitoring video frame and each candidate monitoring video frame, and between the second target monitoring video frame and each candidate monitoring video frame, and determining, based on these inter-frame difference values, a first candidate monitoring video frame having the greatest degree of association with the first target monitoring video frame and a second candidate monitoring video frame having the greatest degree of association with the second target monitoring video frame (it can be understood that the candidate frame with the greatest degree of association with the first target monitoring video frame may refer to the candidate frame with the largest inter-frame difference value with respect to it, and likewise for the second target monitoring video frame);
S1101B4: obtaining a video frame link subnetwork in the video frame association network that connects the first candidate monitoring video frame and the second candidate monitoring video frame (for example, in the video frame association network, if the first candidate monitoring video frame is associated with candidate monitoring video frame A and candidate monitoring video frame B, candidate monitoring video frame A is associated with candidate monitoring video frame C, and candidate monitoring video frames B and C are each associated with the second candidate monitoring video frame, a video frame link subnetwork including candidate monitoring video frames A, B and C can be formed), wherein the video frame link subnetwork is used to characterize the association relationship between the first candidate monitoring video frame and the second candidate monitoring video frame;
S1101B5: determining, based on the degree of association of the first candidate monitoring video frame and the second candidate monitoring video frame with respect to the video frame sub-link set corresponding to the video frame link subnetwork, a target degree of association of the first candidate monitoring video frame and the second candidate monitoring video frame with respect to the video frame link subnetwork (for example, following the previous example, two video frame sub-links can be formed between the first and second candidate monitoring video frames, namely "first candidate monitoring video frame, candidate monitoring video frame A, candidate monitoring video frame C, second candidate monitoring video frame" and "first candidate monitoring video frame, candidate monitoring video frame B, second candidate monitoring video frame"; next, the degree of association between the first and second candidate monitoring video frames is computed for each sub-link; for the sub-link "first candidate monitoring video frame, candidate monitoring video frame B, second candidate monitoring video frame", this degree of association may be the sum of the inter-frame difference value between the first candidate monitoring video frame and candidate monitoring video frame B and the inter-frame difference value between the second candidate monitoring video frame and candidate monitoring video frame B; then, a weighted sum of the degrees of association of all the sub-links is computed and taken as the target degree of association, where the weight coefficient of each sub-link's degree of association may be negatively correlated with the number of candidate monitoring video frames included in that sub-link), wherein the video frame sub-link set includes all video frame sub-links satisfying a preset association degree constraint (for example, to reduce the data processing load, the constraint may be that the number of candidate monitoring video frames included in a sub-link is less than a preset value; the smaller the required data processing load, the smaller this preset value may be);
S1101B6: when the target degree of association is greater than a preset association degree threshold, obtaining, based on the video frame association network, an association degree value range formed from the degrees of association between the second candidate monitoring video frame and each connected candidate monitoring video frame (that is, after the target degree of association is determined in the preceding steps, it may first be determined whether it is greater than the association degree threshold; when it is, the association degree value range is obtained based on the video frame association network; for example, the degrees of association between the second candidate monitoring video frame and each connected candidate monitoring video frame may first be determined, and the value range then determined from the maximum and minimum of these degrees of association; the association degree threshold may be generated based on configuration operations performed by the user according to the actual application scenario, and the greater the need to reduce the data processing load, the larger this threshold may be);
S1101B7: filtering the candidate video frames on each video frame sub-link in the video frame sub-link set based on the association degree value range, to obtain at least one third candidate monitoring video frame (for example, for the sub-link "first candidate monitoring video frame, candidate monitoring video frame B, second candidate monitoring video frame", if the degree of association between candidate monitoring video frame B and the first candidate monitoring video frame falls within the association degree value range, and the degree of association between candidate monitoring video frame B and the second candidate monitoring video frame also falls within the range, candidate monitoring video frame B is taken as a third candidate monitoring video frame; that is, for a candidate video frame on a sub-link, if its degrees of association with the two candidate video frames connected to it on that sub-link both fall within the range, it may be taken as a third candidate monitoring video frame);
S1101B8: taking the first target monitoring video frame, the second target monitoring video frame, the first candidate monitoring video frame, the second candidate monitoring video frame and the third candidate monitoring video frame each as a target monitoring video frame.
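To make the inter-frame difference and association operations of step S1101B2 concrete, the following is a minimal Python sketch. It is an illustration only, not the claimed method; the NumPy array representation of frames and the threshold value are assumptions:

```python
import numpy as np

def frame_difference(frame_a: np.ndarray, frame_b: np.ndarray) -> int:
    """Inter-frame difference value: sum of absolute per-pixel differences
    between two frames of identical shape."""
    return int(np.abs(frame_a.astype(np.int32) - frame_b.astype(np.int32)).sum())

def build_association_network(frames, threshold):
    """Associate every pair of candidate frames whose inter-frame difference
    exceeds the preset threshold; the result is an adjacency map
    {frame index: set of associated frame indices}."""
    network = {i: set() for i in range(len(frames))}
    for i in range(len(frames)):
        for j in range(i + 1, len(frames)):
            if frame_difference(frames[i], frames[j]) > threshold:
                network[i].add(j)
                network[j].add(i)
    return network

# Example with tiny random grayscale "frames"; the threshold is arbitrary.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 256, size=(4, 4), dtype=np.uint8) for _ in range(5)]
print(build_association_network(frames, threshold=1200))
```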
For another example, in the second alternative example, the monitoring video frames may be filtered based on the following steps to obtain at least one target monitoring video frame:
Step 1: computing the inter-frame difference value between every two of the multiple monitoring video frames, and determining, based on these inter-frame difference values, a first monitoring video frame having the greatest degree of association with the other monitoring video frames, and a second monitoring video frame having the greatest degree of association with the first monitoring video frame (for example, the inter-frame difference value between every two monitoring video frames may be computed first; then, for each monitoring video frame, the sum of its inter-frame difference values with the other monitoring video frames may be computed, so that multiple sums are obtained for the multiple frames; the frame corresponding to the largest of these sums is taken as the first monitoring video frame, and the frame with the largest inter-frame difference value with respect to the first monitoring video frame is then taken as the second monitoring video frame);
Step 2: associating the multiple monitoring video frames based on a preset inter-frame difference threshold and the inter-frame difference values, to form a corresponding video frame association network (for example, the inter-frame difference value between every two monitoring video frames may be compared with the inter-frame difference threshold to identify every inter-frame difference value greater than the threshold, and the two monitoring video frames corresponding to each such value are then associated; in the resulting video frame association network, any two connected monitoring video frames have thus been associated);
Step 3: obtaining, from the video frame association network, the monitoring video frames associated with the first monitoring video frame, to obtain a first associated monitoring video frame set;
Step 4: obtaining, from the video frame association network, the monitoring video frames associated with the second monitoring video frame, to obtain a second associated monitoring video frame set;
Step 5: determining the union of the first associated monitoring video frame set and the second associated monitoring video frame set, and taking that union as a candidate monitoring video frame set;
Step 6: for each candidate monitoring video frame in the candidate monitoring video frame set, enumerating its video frame association links with the first monitoring video frame in the video frame association network, to obtain a first link association degree characterization value for each candidate monitoring video frame, wherein the first link association degree characterization value is obtained by weighting the link association degree of each video frame association link corresponding to the candidate monitoring video frame (for example, for candidate monitoring video frame 1 in the set, candidate monitoring video frame 1 is associated with candidate monitoring video frame 2, which is associated with the first monitoring video frame, forming one video association link; candidate monitoring video frame 1 is also associated with candidate monitoring video frame 3, which is associated with the first monitoring video frame, forming another video association link; on this basis, the link association degree of each of the two links may first be computed, and the two link association degrees then weighted and combined, where the link association degree of one video association link may be the average of the inter-frame difference values between every two adjacent frames on that link), and the weight coefficient of the link association degree of each video frame association link is determined based on the link length of that link (for example, the weight coefficient may be negatively correlated with the link length);
Step 7: likewise enumerating, for each candidate monitoring video frame in the candidate monitoring video frame set, its video frame association links with the second monitoring video frame in the video frame association network, to obtain a second link association degree characterization value for each candidate monitoring video frame, wherein the second link association degree characterization value is obtained by weighting the link association degree of each corresponding video frame association link, and the weight of each link is determined based on its link length (as in the preceding step, not repeated here);
Step 8: computing a link association degree characterization value for each candidate monitoring video frame in the candidate monitoring video frame set based on the first link association degree characterization value and the second link association degree characterization value (for example, for one candidate monitoring video frame in the set, the average of its first link association degree characterization value and its second link association degree characterization value may be computed and taken as its link association degree characterization value);
Step 9: filtering the candidate monitoring video frames in the candidate monitoring video frame set based on the link association degree characterization values, to obtain at least one third monitoring video frame (for example, the one or more candidate monitoring video frames with the largest link association degree characterization values may be taken as third monitoring video frames; alternatively, the candidate monitoring video frames whose link association degree characterization values are greater than a preset characterization value may be taken as third monitoring video frames);
Step 10: taking the first monitoring video frame, the second monitoring video frame and the at least one third monitoring video frame each as a target monitoring video frame.
It can be understood that, in the above example, the inter-frame difference value between two monitoring video frames may be taken as the degree of association between those two frames.
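Both filtering examples aggregate per-link association degrees with weights that fall as links grow longer. A minimal sketch of that aggregation follows, assuming a simple 1/n weight over the number of intermediate frames, which is only one of many functions satisfying the described negative correlation:

```python
def link_association(diffs):
    """Association degree of one sub-link: the sum of the inter-frame
    difference values between consecutive frames along it."""
    return sum(diffs)

def target_association(sub_links):
    """Weighted sum over all sub-links. The weight shrinks as the number
    of intermediate candidate frames on the link grows; 1/n is just one
    choice with the negative correlation described above."""
    total = 0.0
    for diffs in sub_links:
        intermediates = max(1, len(diffs) - 1)  # candidate frames on the link
        total += link_association(diffs) / intermediates
    return total

# Two hypothetical sub-links: one with two intermediate frames (three
# edges) and one with a single intermediate frame (two edges).
print(target_association([[120.0, 80.0, 90.0], [200.0, 150.0]]))
```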
For still another example, in the third alternative example, as shown in FIG. 7, the monitoring video frames may be filtered based on the following steps to obtain at least one target monitoring video frame:
S1101B1': sampling the multiple monitoring video frames to obtain multiple sampled monitoring video frames (for example, the multiple monitoring video frames may be sampled at equal intervals);
S1101B2': determining, in turn, each of the sampled monitoring video frames as a candidate sampled monitoring video frame, and obtaining frame length information corresponding to the candidate sampled monitoring video frame, wherein the frame length information includes the frame start time and the frame end time of the candidate sampled monitoring video frame (for example, for one candidate sampled monitoring video frame, the frame start time may be 9:15:0.10 and the frame end time 9:15:0.15, so the frame length of this candidate sampled monitoring video frame is 0.05 s);
S1101B3': obtaining a preset time-correction unit length and a preset time-correction maximum length, wherein the preset time-correction unit length is less than the preset time-correction maximum length, and the preset time-correction maximum length is greater than the frame length of the monitoring video frames (the higher the required precision of the frame filtering, the smaller the unit length and the larger the maximum length may be; conversely, the higher the required filtering efficiency, or the greater the need to reduce the data processing load, the larger the unit length and the smaller the maximum length may be; the specific values of the unit length and the maximum length may be generated based on configuration operations performed by the user according to the actual application scenario; for instance, with the frame length of 0.05 s mentioned above, the unit length may be 0.03 s and the maximum length 0.09 s);
S1101B4': determining multiple frame start correction times corresponding to the candidate sampled monitoring video frame based on its frame start time, the preset time-correction unit length and the preset time-correction maximum length (e.g., for the frame start time 9:15:0.10, the obtained frame start correction times may include 9:15:0.07, 9:15:0.04, 9:15:0.01, 9:15:0.13, etc.), and determining multiple frame end correction times corresponding to the candidate sampled monitoring video frame based on its frame end time, the preset time-correction unit length and the preset time-correction maximum length (e.g., for the frame end time 9:15:0.15, the obtained frame end correction times may include 9:15:0.18, 9:15:0.21, 9:15:0.24, 9:15:0.12, etc.);
S1101B5': selecting multiple target frame start correction times from the multiple frame start correction times of the candidate sampled monitoring video frame (for example, some of the frame start correction times may be selected at random as target frame start correction times, or all of them may be used), and selecting, from the multiple frame end correction times of the candidate sampled monitoring video frame, a target frame end correction time corresponding to each target frame start correction time (for example, for each target frame start correction time, one frame end correction time may be selected from the multiple frame end correction times as its corresponding target frame end correction time, where the difference between the target frame end correction time and the target frame start correction time is greater than or equal to the frame length of the monitoring video frames), to obtain multiple target frame correction time groups;
S1101B6': determining, among the multiple monitoring video frames, the monitoring video frame set corresponding to each target frame correction time group, to obtain multiple monitoring video frame sets (that is, for each target frame correction time group, every monitoring video frame whose frame length information intersects that group is taken as part of the monitoring video frame set corresponding to that group; in this way, multiple monitoring video frame sets are obtained for the multiple groups);
S1101B7': for each monitoring video frame set, performing inter-frame difference processing on the monitoring video frames it includes to obtain a corresponding difference processing result, and selecting a target monitoring video frame set from the multiple sets based on the difference processing result of each set (for example, for one set, the inter-frame difference value between every two of its monitoring video frames may be computed, and the average of these values then computed; in this way, multiple averages are obtained for the multiple sets; the set with the largest average may then be taken as the target monitoring video frame set, or the sets whose averages are greater than a threshold may be taken as target monitoring video frame sets, where the threshold may be the mean of the multiple averages; a simplified sketch of this selection follows after this list);
S1101B8': taking the monitoring video frames in the target monitoring video frame set corresponding to each candidate sampled monitoring video frame as target monitoring video frames.
On the basis of the above examples, it should also be noted regarding step S110 that, depending on different requirements, step S110 may further include other different steps.
For example, in an alternative example, in order to improve the precision of the warning operation, after the above step S1102 is performed, if no target object exists in the target monitoring video frame, then, as shown in FIG. 3, step S110 may further include the following steps:
S1105: determining whether at least one piece of object trajectory information has already been created based on historical monitoring video frames; S1106: if at least one piece of object trajectory information has already been created based on historical monitoring video frames, updating the lost-track frame count corresponding to each piece of object trajectory information, wherein the lost-track frame count is used to determine whether to perform the warning operation (the specific role of the lost-track frame count is described later).
For example, in a specific application example, if no pedestrian exists in the target monitoring video frame, it may first be determined whether at least one piece of object trajectory information has already been created. Then, when at least one piece of object trajectory information has been created, the lost-track frame count corresponding to each piece of object trajectory information may be updated, e.g., incremented by 1, which indicates that the currently monitored pedestrian is not in the target monitoring scene, i.e., it is confirmed that the pedestrian is lost at the current moment.
On the basis of the above examples, it should also be noted regarding step S110 that, in order to avoid wasting resources on unnecessary warning operations, in an alternative example, after the above step S1102 is performed, if at least one target object exists in the target monitoring video frame and among the at least one target object there is a target object that does not belong to the monitored objects, then, as shown in FIG. 4, step S110 may further include the following steps:
S1107: determining whether at least one piece of object trajectory information has already been created based on historical monitoring video frames; S1108: if at least one piece of object trajectory information has already been created based on historical monitoring video frames, configuring the trajectory label information corresponding to each piece of object trajectory information as the first label information; S1109: if no object trajectory information has been created based on historical monitoring video frames, creating corresponding object trajectory information for each target object, and configuring the trajectory label information corresponding to each obtained piece of object trajectory information as the first label information.
For example, in a specific application example, if at least one pedestrian exists in the target monitoring video frame, children are the monitored objects, and there is an adult among the at least one pedestrian, it may first be determined whether at least one piece of object trajectory information has already been created. If at least one piece of object trajectory information has been created, then, since there is an adult among the target objects, the trajectory label information corresponding to the at least one piece of object trajectory information may be configured as the first label information, indicating that there is an adult among the pedestrians and that no warning operation is needed.
It can be understood that configuring the trajectory label information as the first label information may mean maintaining the first label information when the trajectory label information already belongs to the first label information, and changing it to the first label information when it does not.
On the basis of the above examples, it should also be noted regarding step S110 that, considering that after the above step S1103 is performed the determination result may be that at least one piece of object trajectory information has already been created based on historical monitoring video frames, then, as shown in FIG. 3, step S110 may further include the following steps:
S11010: performing object matching between the at least one piece of object trajectory information and the at least one target object; S11011: if there is a piece of object trajectory information that matches none of the at least one target object, updating the lost-track frame count corresponding to that object trajectory information, wherein the lost-track frame count is used to determine whether to perform the warning operation; S11012: if there is a target object that matches none of the at least one piece of object trajectory information, creating corresponding object trajectory information based on that target object; and, if there is a target object that matches one piece of the at least one piece of object trajectory information, adding that target object to the matched object trajectory information.
For example, in a specific application example, if 2 pieces of object trajectory information have already been created, the 2 pieces of object trajectory information are matched against the pedestrians in the target monitoring video frame. Then, if there is 1 pedestrian in the target monitoring video frame, there is a piece of object trajectory information that matches no pedestrian, indicating that the pedestrian of that object trajectory information is lost in the target video frame; therefore, the lost-track frame count corresponding to that object trajectory information may be updated, e.g., incremented by 1. Alternatively, if there are 3 pedestrians in the target monitoring video frame, there is a pedestrian that matches no object trajectory information, indicating that this pedestrian appears for the first time; therefore, corresponding object trajectory information may be created for this pedestrian. Alternatively, if there is a pedestrian that matches a piece of object trajectory information, that pedestrian may be added to the matched object trajectory information; for example, if the pedestrian detection method is human-shape detection, the detected human-shape detection box may be added to that object trajectory information. In this way, over multiple monitoring video frames, a piece of object trajectory information may include multiple human-shape detection boxes in temporal order.
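The patent does not fix a particular matching metric for steps S11010 to S11012; the sketch below uses greedy intersection-over-union matching of detection boxes as one plausible choice. The IoU metric, the 0.3 threshold and the track dictionary layout are assumptions:

```python
from typing import Dict, List, Tuple

Box = Tuple[int, int, int, int]  # (x, y, w, h)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two detection boxes."""
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    iw = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    ih = max(0, min(ay + ah, by + bh) - max(ay, by))
    inter = iw * ih
    union = aw * ah + bw * bh - inter
    return inter / union if union else 0.0

def match_tracks(tracks: List[Dict], detections: List[Box], thr: float = 0.3) -> List[Dict]:
    """Greedy matching for steps S11010-S11012: matched detections extend
    their track, unmatched tracks get their lost-track frame count
    incremented, and unmatched detections start new tracks."""
    unmatched = list(range(len(detections)))
    for track in tracks:
        best, best_iou = None, thr
        for idx in unmatched:
            score = iou(track["boxes"][-1], detections[idx])
            if score > best_iou:
                best, best_iou = idx, score
        if best is None:
            track["lost_frames"] += 1                   # track lost this frame
        else:
            track["boxes"].append(detections[best])     # extend matched track
            unmatched.remove(best)
    for idx in unmatched:                               # first appearance
        tracks.append({"boxes": [detections[idx]], "lost_frames": 0, "label": ""})
    return tracks
```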
Regarding step S120, it should be noted that the specific manner of determining whether the target object belongs to the monitored objects and whether the trajectory label information belongs to the first label information is not limited and may be selected according to actual application requirements.
For example, in an alternative example, as shown in FIG. 8, step S120 may include the following steps:
S1201: obtaining the lost-track frame count corresponding to each piece of object trajectory information; S1202: determining whether each lost-track frame count is greater than a preset frame count threshold; S1203: if there is a lost-track frame count greater than the frame count threshold, determining whether the target object corresponding to the object trajectory information corresponding to that lost-track frame count belongs to the monitored objects, and determining whether the trajectory label information corresponding to that object trajectory information belongs to the first label information.
For example, in a specific application example, the lost-track frame count corresponding to each pedestrian's object trajectory information may first be obtained (as described above, for example, if object trajectory information is created for pedestrian A in the first acquired monitoring video frame, and pedestrian A is absent from the following 3 monitoring video frames, then the lost-track frame count corresponding to pedestrian A is 3). In this way, for at least one pedestrian, at least one lost-track frame count can be obtained (if a particular pedestrian is present in every acquired monitoring video frame, that pedestrian's lost-track frame count is 0). Second, it may be determined whether each lost-track frame count is greater than the preset frame count threshold. Then, for a lost-track frame count greater than the preset frame count threshold, it may be determined whether the corresponding pedestrian is a child, and whether the corresponding trajectory label information belongs to the first label information, i.e., whether the corresponding pedestrian is a child and whether the other pedestrians accompanying that pedestrian are children. In this way, if a pedestrian is a child, and there are either no other accompanying pedestrians or the other accompanying pedestrians are also children, it can be determined that the preset warning operation needs to be performed for that pedestrian.
Optionally, in the above example, the specific manner of determining whether the target object belongs to the monitored objects is not limited and may be selected according to actual application requirements.
For example, in an alternative example, if the monitored objects are children, in order to reliably determine whether a target object is a child, the height information of the target object may first be computed, e.g., determined based on the height of the human-shape detection box obtained by the human-shape detection method; the height information is then compared with child height threshold information, so as to determine whether the target object is a child.
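A minimal sketch of this height-based check follows, assuming (x, y, w, h) detection boxes and an arbitrary placeholder pixel threshold; a real deployment would calibrate the threshold to the camera geometry or estimate real-world height instead:

```python
def is_child(box, child_height_threshold_px: int = 120) -> bool:
    """Classify a pedestrian as a child when the height of its human-shape
    detection box is below a threshold. The pixel value used here is an
    arbitrary placeholder for illustration only."""
    _, _, _, height = box  # (x, y, w, h)
    return height < child_height_threshold_px

print(is_child((10, 20, 40, 90)))    # True with the placeholder threshold
print(is_child((200, 30, 60, 170)))  # False
```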
Regarding step S130, it should be noted that the specific manner of performing the warning operation is not limited and may be selected according to actual application requirements.
For example, in an alternative example, warning information may be output to the terminal device of monitoring personnel. For another example, in another alternative example, if the guardian of the target object corresponding to the warning operation can be determined, warning information may be output to the guardian's terminal device.
On the basis of the above examples, if it is determined in step S120 that the target object does not belong to the monitored objects and/or that the trajectory label information belongs to the first label information, the warning operation may not be performed. Moreover, in order to save storage resources, the object trajectory information may also be deleted.
On the basis of the above examples, after step S130 is performed, i.e., after the warning operation is performed, the object trajectory information may also be deleted in order to save storage resources.
With reference to FIG. 9, an embodiment of the present application further provides a target object monitoring apparatus 100 applicable to the above monitoring device 10. The target object monitoring apparatus 100 may include a trajectory information creation module 110, an object information determination module 120 and a warning operation execution module 130.
The trajectory information creation module 110 is configured to create corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information. In this embodiment, the trajectory information creation module 110 may be configured to perform step S110 shown in FIG. 2; for details of what the trajectory information creation module 110 can perform, reference may be made to the foregoing description of step S110.
The object information determination module 120 is configured to determine whether the target object corresponding to the object trajectory information belongs to the monitored objects, and to determine whether the trajectory label information corresponding to the object trajectory information belongs to the first label information, wherein the first label information indicates that, among the at least one target object, there is a target object that does not belong to the monitored objects. In this embodiment, the object information determination module 120 may be configured to perform step S120 shown in FIG. 2; for details of what the object information determination module 120 can perform, reference may be made to the foregoing description of step S120.
The warning operation execution module 130 is configured to perform a preset warning operation on the target object if the target object belongs to the monitored objects and the trajectory label information corresponding to the object trajectory information does not belong to the first label information. In this embodiment, the warning operation execution module 130 may be configured to perform step S130 shown in FIG. 2; for details of what the warning operation execution module 130 can perform, reference may be made to the foregoing description of step S130.
In an embodiment of the present application, corresponding to the above target object monitoring method, a computer-readable storage medium is further provided. A computer program is stored in the computer-readable storage medium, and the steps of the target object monitoring method are executed when the computer program runs.
The individual steps executed when the aforementioned computer program runs are not repeated here one by one; reference may be made to the foregoing explanation of the target object monitoring method.
In summary, with the target object monitoring method and monitoring device provided by the present application, in addition to determining whether a target object belongs to the monitored objects, it is also determined whether the trajectory label information corresponding to that target object's object trajectory information belongs to the first label information, so that the warning operation is performed on the target object only when the target object belongs to the monitored objects and the trajectory label information does not belong to the first label information. Since the first label information indicates that, among the at least one target object in the surveillance video, there is a target object that does not belong to the monitored objects, a warning is issued only when a monitored object appears alone. This alleviates the problem in the prior art that false warnings easily occur because a warning operation is performed as soon as a monitored object is detected (for example, when a non-monitored object appears together with a monitored object, the non-monitored object can itself watch over the monitored object, so no warning is needed), thereby improving the relatively poor monitoring effect of existing monitoring technology, which is of high practical value.
In the several embodiments provided in the embodiments of the present application, it should be understood that the disclosed apparatus and method may also be implemented in other manners. The apparatus and method embodiments described above are merely illustrative. For example, the flowcharts and block diagrams in the accompanying drawings show the possible architectures, functions and operations of the apparatus, methods and computer program products according to multiple embodiments of the present application. In this regard, each block in the flowcharts or block diagrams may represent a module, program segment or part of code, which contains one or more executable instructions for implementing the specified logical function. It should also be noted that, in some alternative implementations, the functions marked in the blocks may occur in an order different from that marked in the drawings. For example, two consecutive blocks may actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved. It should also be noted that each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, may be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
In addition, the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist separately, or two or more modules may be integrated to form an independent part.
If the functions are implemented in the form of software functional modules and sold or used as independent products, they may be stored in a computer-readable storage medium. Based on this understanding, the technical solution of the present application, in essence, or the part contributing to the prior art, or part of the technical solution, may be embodied in the form of a software product. The computer software product is stored in a storage medium and includes several instructions for causing a computer device (which may be a personal computer, an electronic device, a network device, etc.) to perform all or some of the steps of the methods described in the embodiments of the present application. The aforementioned storage medium includes various media capable of storing program code, such as a USB flash drive, a removable hard disk, a read-only memory (ROM), a random access memory (RAM), a magnetic disk or an optical disc. It should be noted that, in this document, the terms "comprise", "include" or any other variants thereof are intended to cover non-exclusive inclusion, so that a process, method, article or device including a series of elements includes not only those elements, but also other elements not explicitly listed, or elements inherent to such a process, method, article or device. Without further limitation, an element defined by the phrase "comprising a ..." does not exclude the presence of additional identical elements in the process, method, article or device that includes the element.
The above are merely preferred embodiments of the present application and are not intended to limit the present application. For those skilled in the art, various modifications and variations of the present application are possible. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of protection of the present application.

Claims (10)

  1. A target object monitoring method, characterized by comprising:
    creating corresponding object trajectory information based on at least one target object in an acquired surveillance video, to obtain at least one piece of object trajectory information;
    determining whether the target object corresponding to the object trajectory information belongs to the monitored objects, and determining whether the trajectory label information corresponding to the object trajectory information belongs to first label information, wherein the first label information indicates that, among the at least one target object, there is a target object that does not belong to the monitored objects; and
    if the target object belongs to the monitored objects, and the trajectory label information corresponding to the object trajectory information does not belong to the first label information, performing a preset warning operation on the target object.
  2. The target object monitoring method according to claim 1, characterized in that the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information, comprises:
    acquiring a target monitoring video frame, wherein the target monitoring video frame belongs to the surveillance video;
    determining whether at least one target object exists in the target monitoring video frame;
    if at least one target object exists in the target monitoring video frame, then, when every target object belongs to the monitored objects, determining whether at least one piece of object trajectory information has already been created based on historical monitoring video frames, wherein the historical monitoring video frames belong to the surveillance video; and
    if no object trajectory information has been created based on historical monitoring video frames, creating corresponding object trajectory information for each of the target objects.
  3. The target object monitoring method according to claim 2, characterized in that the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information, further comprises:
    if no target object exists in the target monitoring video frame, determining whether at least one piece of object trajectory information has already been created based on historical monitoring video frames; and
    if at least one piece of object trajectory information has already been created based on historical monitoring video frames, updating the lost-track frame count corresponding to each piece of object trajectory information, wherein the lost-track frame count is used to determine whether to perform the warning operation.
  4. The target object monitoring method according to claim 2, characterized in that the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information, further comprises:
    if at least one target object exists in the target monitoring video frame, and among the at least one target object there is a target object that does not belong to the monitored objects, determining whether at least one piece of object trajectory information has already been created based on historical monitoring video frames;
    if at least one piece of object trajectory information has already been created based on historical monitoring video frames, configuring the trajectory label information corresponding to each piece of object trajectory information as the first label information; and
    if no object trajectory information has been created based on historical monitoring video frames, creating corresponding object trajectory information for each of the target objects, and configuring the trajectory label information corresponding to each obtained piece of object trajectory information as the first label information.
  5. The target object monitoring method according to claim 2, characterized in that the step of creating corresponding object trajectory information based on at least one target object in the acquired surveillance video, to obtain at least one piece of object trajectory information, further comprises:
    if at least one piece of object trajectory information has already been created based on historical monitoring video frames, performing object matching between the at least one piece of object trajectory information and the at least one target object;
    if there is a piece of object trajectory information that matches none of the at least one target object, updating the lost-track frame count corresponding to that object trajectory information, wherein the lost-track frame count is used to determine whether to perform the warning operation;
    if there is a target object that matches none of the at least one piece of object trajectory information, creating corresponding object trajectory information based on that target object; and
    if there is a target object that matches one piece of the at least one piece of object trajectory information, adding that target object to the matched object trajectory information.
  6. The target object monitoring method according to claim 2, characterized in that the step of acquiring the target monitoring video frame comprises:
    acquiring multiple consecutive monitoring video frames formed by shooting a target monitoring scene; and
    filtering the multiple monitoring video frames to obtain at least one target monitoring video frame.
  7. The target object monitoring method according to claim 6, characterized in that the step of filtering the multiple monitoring video frames to obtain at least one target monitoring video frame comprises:
    taking the first monitoring video frame among the multiple monitoring video frames as a first target monitoring video frame, taking the last monitoring video frame among the multiple monitoring video frames as a second target monitoring video frame, and taking the monitoring video frames other than the first and last among the multiple monitoring video frames as candidate monitoring video frames, to obtain multiple candidate monitoring video frames;
    among the multiple candidate monitoring video frames, computing the inter-frame difference value between every two candidate monitoring video frames, and associating the multiple candidate monitoring video frames based on a preset inter-frame difference threshold and the inter-frame difference values, to form a corresponding video frame association network;
    computing the inter-frame difference value between the first target monitoring video frame and each candidate monitoring video frame, and between the second target monitoring video frame and each candidate monitoring video frame, and determining, based on the inter-frame difference values, a first candidate monitoring video frame having the greatest degree of association with the first target monitoring video frame and a second candidate monitoring video frame having the greatest degree of association with the second target monitoring video frame;
    obtaining a video frame link subnetwork in the video frame association network that connects the first candidate monitoring video frame and the second candidate monitoring video frame, wherein the video frame link subnetwork is used to characterize the association relationship between the first candidate monitoring video frame and the second candidate monitoring video frame;
    determining, based on the degree of association of the first candidate monitoring video frame and the second candidate monitoring video frame with respect to the video frame sub-link set corresponding to the video frame link subnetwork, a target degree of association of the first candidate monitoring video frame and the second candidate monitoring video frame with respect to the video frame link subnetwork, wherein the video frame sub-link set includes all video frame sub-links satisfying a preset association degree constraint;
    when the target degree of association is greater than a preset association degree threshold, obtaining, based on the video frame association network, an association degree value range formed from the degrees of association between the second candidate monitoring video frame and each connected candidate monitoring video frame;
    filtering the candidate video frames on each video frame sub-link in the video frame sub-link set based on the association degree value range, to obtain at least one third candidate monitoring video frame; and
    taking the first target monitoring video frame, the second target monitoring video frame, the first candidate monitoring video frame, the second candidate monitoring video frame and the third candidate monitoring video frame each as a target monitoring video frame.
  8. The target object monitoring method according to claim 6, characterized in that the step of filtering the multiple monitoring video frames to obtain at least one target monitoring video frame comprises:
    sampling the multiple monitoring video frames to obtain multiple sampled monitoring video frames;
    determining, in turn, each of the sampled monitoring video frames as a candidate sampled monitoring video frame, and obtaining frame length information corresponding to the candidate sampled monitoring video frame, wherein the frame length information includes a frame start time of the candidate sampled monitoring video frame and a frame end time of the candidate sampled monitoring video frame;
    obtaining a preset time-correction unit length and a preset time-correction maximum length, wherein the preset time-correction unit length is less than the preset time-correction maximum length, and the preset time-correction maximum length is greater than the frame length of the monitoring video frames;
    determining multiple frame start correction times corresponding to the candidate sampled monitoring video frame based on the frame start time of the candidate sampled monitoring video frame, the preset time-correction unit length and the preset time-correction maximum length, and determining multiple frame end correction times corresponding to the candidate sampled monitoring video frame based on the frame end time of the candidate sampled monitoring video frame, the preset time-correction unit length and the preset time-correction maximum length;
    selecting multiple target frame start correction times from the multiple frame start correction times of the candidate sampled monitoring video frame, and selecting, from the multiple frame end correction times of the candidate sampled monitoring video frame, a target frame end correction time corresponding to each target frame start correction time, to obtain multiple target frame correction time groups;
    determining, among the multiple monitoring video frames, the monitoring video frame set corresponding to each target frame correction time group, to obtain multiple monitoring video frame sets;
    for each monitoring video frame set, performing inter-frame difference processing on the monitoring video frames included in that set to obtain a corresponding difference processing result, and selecting a target monitoring video frame set from the multiple monitoring video frame sets based on the difference processing result corresponding to each monitoring video frame set; and
    taking the monitoring video frames in the target monitoring video frame set corresponding to each candidate sampled monitoring video frame as target monitoring video frames.
  9. The target object monitoring method according to any one of claims 1 to 8, characterized in that the step of determining whether the target object corresponding to the object trajectory information belongs to the monitored objects, and determining whether the trajectory label information corresponding to the object trajectory information belongs to the first label information, comprises:
    obtaining the lost-track frame count corresponding to each piece of object trajectory information;
    determining whether each lost-track frame count is greater than a preset frame count threshold; and
    if there is a lost-track frame count greater than the frame count threshold, determining whether the target object corresponding to the object trajectory information corresponding to that lost-track frame count belongs to the monitored objects, and determining whether the trajectory label information corresponding to that object trajectory information belongs to the first label information.
  10. A monitoring device, characterized by comprising:
    a memory for storing a computer program; and
    a processor connected to the memory, configured to execute the computer program stored in the memory, so as to implement the target object monitoring method according to any one of claims 1 to 9.
PCT/CN2022/080927 2021-03-15 2022-03-15 Target object monitoring method and monitoring device WO2022194147A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202110274089.0 2021-03-15
CN202110274089.0A CN112689132B (zh) 2021-03-15 2021-03-15 Target object monitoring method and monitoring device

Publications (1)

Publication Number Publication Date
WO2022194147A1 true WO2022194147A1 (zh) 2022-09-22

Family

ID=75455569

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/080927 WO2022194147A1 (zh) 2022-03-15 Target object monitoring method and monitoring device

Country Status (2)

Country Link
CN (1) CN112689132B (zh)
WO (1) WO2022194147A1 (zh)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112689132B (zh) * 2021-03-15 2021-05-18 成都点泽智能科技有限公司 Target object monitoring method and monitoring device
CN114863364B * 2022-05-20 2023-03-07 碧桂园生活服务集团股份有限公司 Security detection method and system based on intelligent video surveillance
CN114897973B * 2022-07-15 2022-09-16 腾讯科技(深圳)有限公司 Trajectory detection method and apparatus, computer device, and storage medium

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018042105A (ja) * 2016-09-07 2018-03-15 東芝テリー株式会社 監視画像処理装置及び監視画像処理方法
CN108965826A (zh) * 2018-08-21 2018-12-07 北京旷视科技有限公司 监控方法、装置、处理设备及存储介质
CN110795963A (zh) * 2018-08-01 2020-02-14 深圳云天励飞技术有限公司 一种基于人脸识别的监控方法、装置及设备
WO2020235819A1 (ko) * 2019-05-17 2020-11-26 Jeong Tae Woong 인공지능을 이용한 영상 기반의 실시간 침입 감지 방법 및 감시카메라
CN112689132A (zh) * 2021-03-15 2021-04-20 成都点泽智能科技有限公司 目标对象监控方法和监控设备

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8614744B2 (en) * 2008-07-21 2013-12-24 International Business Machines Corporation Area monitoring using prototypical tracks
CN105551188A (zh) * 2016-02-04 2016-05-04 武克易 具有看护功能的物联网智能设备实现方法
EP3435665A4 (en) * 2016-03-25 2019-03-20 Panasonic Intellectual Property Management Co., Ltd. MONITORING AND MONITORING SYSTEM
CN106157331A (zh) * 2016-07-05 2016-11-23 乐视控股(北京)有限公司 一种吸烟检测方法和装置
JP7176868B2 (ja) * 2018-06-28 2022-11-22 セコム株式会社 監視装置
WO2020145883A1 (en) * 2019-01-10 2020-07-16 Hitachi, Ltd. Object tracking systems and methods for tracking an object
CN110929619A (zh) * 2019-11-15 2020-03-27 云从科技集团股份有限公司 一种基于图像处理的目标对象追踪方法、系统、设备及可读介质
CN111914661A (zh) * 2020-07-06 2020-11-10 广东技术师范大学 异常行为识别方法、目标异常识别方法、设备及介质
CN112200085A (zh) * 2020-10-10 2021-01-08 上海明略人工智能(集团)有限公司 人流数据的获取方法和装置及存储介质

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018042105A (ja) * 2016-09-07 2018-03-15 東芝テリー株式会社 監視画像処理装置及び監視画像処理方法
CN110795963A (zh) * 2018-08-01 2020-02-14 深圳云天励飞技术有限公司 一种基于人脸识别的监控方法、装置及设备
CN108965826A (zh) * 2018-08-21 2018-12-07 北京旷视科技有限公司 监控方法、装置、处理设备及存储介质
WO2020235819A1 (ko) * 2019-05-17 2020-11-26 Jeong Tae Woong 인공지능을 이용한 영상 기반의 실시간 침입 감지 방법 및 감시카메라
CN112689132A (zh) * 2021-03-15 2021-04-20 成都点泽智能科技有限公司 目标对象监控方法和监控设备

Also Published As

Publication number Publication date
CN112689132A (zh) 2021-04-20
CN112689132B (zh) 2021-05-18

Similar Documents

Publication Publication Date Title
WO2022194147A1 (zh) Target object monitoring method and monitoring device
JP2020501476A (ja) Method and apparatus for detecting traffic anomalies in a network
JP5644097B2 (ja) Image processing apparatus, image processing method, and program
US9369364B2 (en) System for analysing network traffic and a method thereof
KR102002812B1 (ko) Video analysis server apparatus and method for object detection
US9524223B2 (en) Performance metrics of a computer system
CN109360362A (zh) Railway video surveillance recognition method, system and computer-readable medium
CN110647818A (zh) Method and apparatus for recognizing an occluded target object
TW201537516A (zh) Moving object detection method based on a cerebellar model network and device thereof
US8661113B2 (en) Cross-cutting detection of event patterns
KR20190079110 (ko) Apparatus and method for analyzing monitoring video based on self-learning
US20170024998A1 (en) Setting method and apparatus for surveillance system, and computer-readable recording medium
CN113673311A (zh) Traffic abnormal event detection method, device and computer storage medium
CN113792691A (zh) Video recognition method, system, device and medium
CN111400114A (zh) Fault detection method and system for big-data computer systems based on deep recursive networks
WO2019149143A1 (zh) Link bandwidth utilization acquisition method and apparatus, and terminal
US20120120309A1 (en) Transmission apparatus and transmission method
CN110942583A (zh) Method, apparatus and terminal for reporting smoke detection alarms
US20120163212A1 (en) Apparatus and method for detecting abnormal traffic
US9049429B2 (en) Connection problem determination method and connection problem determination apparatus for image input device
TWI706381B (zh) Image object detection method and system
US20210192905A1 (en) Mitigating effects caused by repeated and/or sporadic movement of objects in a field of view
US20200252587A1 (en) Video camera
JP2012511194A (ja) Method and apparatus for image motion detection
CN115665369B (zh) Video processing method and apparatus, electronic device, and storage medium

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22770500

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22770500

Country of ref document: EP

Kind code of ref document: A1