WO2022105243A1 - Event detection method and apparatus, electronic device, and storage medium - Google Patents

Event detection method and apparatus, electronic device, and storage medium

Info

Publication number
WO2022105243A1
Authority
WO
WIPO (PCT)
Prior art keywords
scene
target
preset
key frame
sub
Prior art date
Application number
PCT/CN2021/103735
Other languages
English (en)
French (fr)
Inventor
张游春 (ZHANG Youchun)
Original Assignee
北京旷视科技有限公司 (Beijing Megvii Technology Co., Ltd.)
Priority date
Filing date
Publication date
Application filed by 北京旷视科技有限公司 (Beijing Megvii Technology Co., Ltd.)
Publication of WO2022105243A1

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; scene-specific elements
    • G06V 20/10: Terrestrial scenes
    • G06V 20/40: Scenes; scene-specific elements in video content
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 30/00: Adapting or protecting infrastructure or their operation
    • Y02A 30/60: Planning or developing urban green infrastructure

Definitions

  • the present application relates to the field of computer network technologies, and in particular, to an event detection method, apparatus, electronic device, and storage medium.
  • In the related art, a single target (person, vehicle, object, animal, etc.) or a combination of multiple targets (target 1 + target 2 + target 3 + ... + target n) that matches preset characteristics is identified by item recognition technology and presented or pushed as the detection result.
  • the purpose of the embodiments of the present application is to provide an event detection method, apparatus, electronic device, and storage medium, which can improve the accuracy of detecting whether an event occurs.
  • the embodiment of the present application provides an event detection method, including: acquiring video stream data of a preset scene area, and obtaining at least one key frame image from the video stream data; detecting whether a scene target corresponding to the event exists in the at least one key frame image; determining the actual position and/or stay duration of the scene target in the preset scene area; and
  • judging, according to whether the scene target exists and according to the actual position and/or stay duration, whether the event corresponding to the preset scene area has occurred.
  • the detecting whether a scene target corresponding to the event exists in the at least one key frame image includes:
  • detecting, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.
  • the detecting, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image includes: acquiring a scene target detection model corresponding to the event, and detecting, by that model, whether a scene target corresponding to the event exists in the at least one key frame image.
  • the scene target detection model is trained by the following method:
  • labeling the scene target in a sample image to obtain annotation information, where the annotation information includes the classification information and position information of each sub-target constituting the scene target in the sample image, and the position information of the scene target in the sample image;
  • performing, according to the annotation information, label assignment on the preset boxes or position points corresponding to the sample image, to obtain sample labels of the preset boxes or position points corresponding to the sample image;
  • performing iterative steps until the loss converges; the iterative steps include: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model; and using the scene target detection model obtained after the loss converges as the scene target detection model.
  • the scene target includes multiple sub-targets
  • detecting, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image includes:
  • detecting the at least one key frame image by the scene target detection model, to obtain the sub-target categories and/or the number of sub-targets present in the at least one key frame image;
  • if the sub-target categories and/or the number of sub-targets present in the at least one key frame image satisfy at least one of the following conditions, determining that a scene target corresponding to the city management event exists in the at least one key frame image:
  • the number of sub-target categories present in the at least one key frame image is greater than a preset category threshold;
  • the number of sub-targets present in the at least one key frame image is greater than a preset quantity threshold;
  • the number of targets of a specific sub-target category present in the at least one key frame image is greater than a preset quantity threshold.
  • the determining the actual position and/or stay duration of the scene target in the preset scene area includes at least one of the following:
  • detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the position of the scene target included in the detection output;
  • detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the category, confidence, and position information of the sub-targets included in the detection output;
  • detecting the at least one key frame image by the scene target detection model, and using the number of key frame images in which the scene target continuously appears as the stay duration.
  • the judging, according to the actual position and/or stay duration, whether the urban management event corresponding to the preset scene area has occurred includes:
  • if the actual position is within the preset area range of the preset scene area and/or the stay duration is greater than a preconfigured target duration, judging that the city management event has occurred.
  • whether the actual position is within the preset area range of the preset scene area is judged as follows: judging whether the degree of overlap between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold; if so, judging that the actual position is within the preset area range.
  • the number of key frame images in which the scene target continuously appears is used as the stay duration;
  • whether the stay duration is greater than the preconfigured target duration is judged as follows: if the stay duration is greater than a preset number of frames, it is determined that the stay duration is greater than the preconfigured target duration.
  • the acquiring at least one key frame image according to the video stream data includes:
  • the video stream data is decoded at a preset decoding frame rate to obtain a plurality of key frame images.
  • the embodiment of the present application also provides an event detection device, including:
  • a first acquisition module configured to acquire video stream data of a preset scene area, and obtain at least one key frame image from the video stream data;
  • a detection module configured to detect whether a scene target corresponding to the event exists in the at least one key frame image;
  • a second acquisition module configured to, if the scene target exists, determine the actual position and/or stay duration of the scene target in the preset scene area;
  • a judgment module configured to judge, according to the actual position and/or stay duration, whether the event corresponding to the preset scene area has occurred.
  • the detection module is configured to:
  • detect, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.
  • the detection module is configured to: acquire a scene target detection model corresponding to the event, and detect, by that model, whether a scene target corresponding to the event exists in the at least one key frame image.
  • the device further includes a model training module configured to:
  • label the scene target in a sample image to obtain annotation information, where the annotation information includes the classification information and position information of each sub-target constituting the scene target in the sample image, and the position information of the scene target in the sample image;
  • perform, according to the annotation information, label assignment on the preset boxes or position points corresponding to the sample image, to obtain sample labels of the preset boxes or position points corresponding to the sample image;
  • perform iterative steps until the loss converges; the iterative steps include: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model; and using the scene target detection model obtained after the loss converges as the scene target detection model.
  • the scene target includes multiple sub-targets; the detection module is configured to determine that a scene target corresponding to the event exists in the at least one key frame image if at least one of the following conditions is satisfied:
  • the number of sub-target categories present in the at least one key frame image is greater than a preset category threshold;
  • the number of sub-targets present in the at least one key frame image is greater than a preset quantity threshold;
  • the number of targets of a specific sub-target category present in the at least one key frame image is greater than a preset quantity threshold;
  • the position information between the sub-targets present in the at least one key frame image satisfies a preset position condition.
  • the second acquisition module is configured to perform at least one of the following:
  • detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the position of the scene target included in the detection output;
  • detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the category, confidence, and position information of the sub-targets included in the detection output;
  • detecting the at least one key frame image by the scene target detection model, and using the number of key frame images in which the scene target continuously appears as the stay duration.
  • the judging module is configured to:
  • if the actual position is within the preset area range of the preset scene area and/or the stay duration is greater than a preconfigured target duration, it is judged that the event has occurred.
  • the judging module is configured to:
  • the number of key frame images in which the scene target continuously appears is used as the stay duration; if the stay duration is greater than a preset number of frames, it is determined that the stay duration is greater than the preconfigured target duration.
  • the first acquisition module is configured to:
  • the video stream data is decoded at a preset decoding frame rate to obtain a plurality of key frame images.
  • An embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores computer-readable instructions; when the computer-readable instructions are executed by the processor, the steps of the method provided in the first aspect above are performed.
  • An embodiment of the present application provides a storage medium on which a computer program is stored; when the computer program is executed by a processor, the steps of the method provided in the first aspect above are performed.
  • FIG. 1 is a flowchart of an event detection method provided by an embodiment of the present application.
  • FIG. 2 is a structural diagram of an event detection apparatus provided by an embodiment of the present application.
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • FIG. 1 is a flowchart of an event detection method in an embodiment of the present application.
  • The following description takes urban management events as an example, but the method is not limited to urban management events; the event detection method can also be applied to other scenarios, which is not limited here.
  • the event detection method can be executed by electronic devices such as computers, servers, mobile phones, monitoring terminals, robots, etc.
  • the embodiments of the present application do not limit the electronic device that executes the event detection method, as long as it has image processing and data processing capabilities. As shown in Figure 1, the method mainly includes the following steps S101 to S104:
  • S101: Acquire video stream data of a preset scene area, and obtain at least one key frame image from the video stream data.
  • S102: Detect whether a scene target corresponding to the event exists in the at least one key frame image.
  • S103: If the scene target exists, determine its actual position and/or stay duration in the preset scene area.
  • S104: Judge, according to whether the scene target exists and according to the actual position and/or stay duration, whether the event corresponding to the preset scene area has occurred.
  • the video stream data may be acquired via RTSP (Real Time Streaming Protocol).
  • the key frame images may be acquired by means of key frame decoding.
  • the video stream data shows a picture of a preset scene area (for example, the field of view of the camera that captures the video stream data).
  • a corresponding event can be preset, such as a city management event, so as to determine whether there is a scene target corresponding to the city management event in the key frame image decoded from the video stream data.
  • the city management event in the preset scene area corresponding to the camera can be set as illegal parking.
  • the city management event of the preset scene area corresponding to the camera can be set as drying along the street or operating on the road.
  • the city management events corresponding to different preset scene areas may be different. Therefore, to detect whether a scene target corresponding to the preset scene area exists in the key frame image, the urban management event corresponding to the preset scene area must first be confirmed, and the corresponding scene target is then determined based on that urban management event.
  • the urban management event corresponds to a scene target, and the types of scene targets corresponding to different types of urban management events may be different.
  • a scene target can contain a single target or a combination of multiple targets.
  • For garbage dumping, the scene target is a single target: garbage; for drying clothes along the street, the scene target is a combination of multiple sub-targets: clothes and hangers.
  • Detecting whether a scene target exists can be implemented in various ways: for example, by conventional image processing methods, or by a preset scene detection algorithm or scene target detection model.
  • whether there is a scene target can be directly determined from the output result of the scene target detection model.
  • the output result of the scene target detection model includes the scene target position and the scene target confidence; when the scene target confidence is greater than the preset confidence threshold, the scene target is considered to exist.
  • the output result of the scene target detection model includes the position, category and confidence of the sub-target, and whether there is a scene target is determined according to the position, category and confidence of the sub-target.
  • In step S103, the actual position and/or stay duration of the scene target may be determined after step S102 determines that a scene target corresponding to the urban management event exists; alternatively, the actual position may be determined at the same time as detecting whether the scene target exists.
  • the actual location of the scene object is determined while detecting whether the scene object exists.
  • the scene target detection model outputs the scene target position and the scene target confidence.
  • when the scene target confidence is greater than the preset confidence threshold, a scene target is considered to exist, and the scene target position output by the scene target detection model is the actual position of the scene target. It can be understood that when the scene target confidence is not greater than the preset confidence threshold, no scene target is considered to exist, and the "scene target position" output by the detection model is not used to represent the actual position of a scene target.
  • the actual location of the scene object may be determined after it is determined that the scene object exists.
  • For example, the category, confidence, and position information of the sub-targets are determined from the output of the scene target detection model, and whether a scene target exists is then determined from that information.
  • If so, the scene target position is determined from the sub-target positions; for example, the geometric center of the locations of the multiple sub-targets can be used as the position of the scene target.
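  • A minimal sketch of this geometric-center convention follows; the box format and function name are illustrative assumptions, not from the patent:
```python
def scene_target_position(sub_boxes):
    """Scene-target position as the geometric center of its sub-target
    boxes, each given as (x1, y1, x2, y2); an illustrative convention."""
    centers = [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
               for x1, y1, x2, y2 in sub_boxes]
    n = len(centers)
    return (sum(x for x, _ in centers) / n,
            sum(y for _, y in centers) / n)
```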
  • the determination of the actual position and/or stay duration of the scene target in the preset scene area in step S103 above includes at least one of the following (1) to (3):
  • the at least one key frame image is detected by the scene target detection model, and the actual position of the scene target in the preset scene area is determined according to the position of the scene target included in the detected output result.
  • the category of the sub-target may also be referred to as the sub-target category; the confidence of the sub-target can be understood as the possibility of the existence of the detected sub-target, and the higher the confidence, the greater the possibility of the existence of the sub-target.
  • the actual position of the scene target determined from a single key frame image only represents its position in that key frame image. If a scene target appears in multiple key frame images, its actual position in the preset scene area can be determined from its positions in those images; for example, the average of those positions can be taken as the actual position of the scene target in the preset scene area.
  • the stay duration of the scene target in the preset scene area can be determined from the number of key frame images in which it continuously appears: the larger the number of frames, the longer the scene target has stayed in the preset scene area.
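  • The following sketch combines the two ideas above, assuming one key frame is decoded every 2 seconds; the function name and decode interval are illustrative assumptions:
```python
def track_scene_target(per_frame_positions, decode_interval_s=2.0):
    """per_frame_positions: the target's (cx, cy) in each consecutive key
    frame where it was detected. Returns the averaged actual position and
    the stay duration implied by the frame count (decode interval assumed)."""
    n = len(per_frame_positions)
    avg_position = (sum(x for x, _ in per_frame_positions) / n,
                    sum(y for _, y in per_frame_positions) / n)
    stay_duration_s = n * decode_interval_s  # N frames at one per 2 s = 2N s
    return avg_position, stay_duration_s
```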
  • In step S104, judging whether an urban management event has occurred considers not only whether a scene target exists, but also whether the actual position of the scene target is within the preset area range and/or whether its stay duration has reached the target duration. It should be understood that whether the target exists and whether the location/duration requirements are met are two independent judgment factors.
  • Judging separately whether the actual position and/or stay duration of the scene target meets the requirements means that the position and duration conditions can be set according to the user's actual needs, making the configuration of detection criteria for urban management events more flexible. It can be understood that, for a given type of urban management event, the corresponding scene target type is likely to be the same regardless of the preset scene area, but differences between preset scene areas may lead to different location and/or duration conditions for the scene target.
  • For example, for drying clothes along the street, the scene targets are always clothes + hangers, but in preset scene area A the upper left of the frame is the area where drying along the street is not allowed, while in preset scene area B it is the upper right; the positions in the urban management events corresponding to preset scene areas A and B therefore need to be set separately.
  • For example, city A may consider parking in the illegal parking area for ten minutes an illegal parking event,
  • while city B considers parking in the illegal parking area for twenty minutes an illegal parking event;
  • the duration in each city's event is then set individually. Treating the location condition and/or duration condition as judgment conditions independent of whether the scene target exists therefore makes it easier to set the criteria for the occurrence of urban management events according to actual needs.
  • Moreover, the scene target detection algorithm can be reused across certain urban management events. For example, if the same scene target appears in the middle of a road it corresponds to management event A, and if it appears on a pedestrian street it corresponds to management event B; the two events can then share the scene target detection algorithm, while the location/duration conditions corresponding to each event are set separately.
  • Judging separately whether a scene target exists and whether its actual position and/or stay duration meets the requirements means the scene target detection algorithm only decides whether a scene target exists, without attending to the position and/or duration requirements. The algorithm can therefore be trained and optimized purely for the existence question, which helps improve the accuracy of the scene detection algorithm.
  • In short, treating target existence and whether the scene target's actual location/duration meets the location/duration requirements as two independent factors for deciding whether an urban management event has occurred decouples event detection into target detection and location/duration judgment,
  • which makes it easier to flexibly set the criteria for detecting urban management events and helps improve the accuracy of the scene target detection algorithm, thereby improving the accuracy of event detection.
  • acquiring multiple key frame images from the video stream data in step S101 specifically means: decoding the video stream data at a preset decoding frame rate to obtain multiple key frame images.
  • the preset decoding frame rate can be set according to the actual situation, so as to reduce the pressure on the computing device while satisfying the computing requirements.
  • for example, the preset decoding frame rate can be set to decode one key frame every 2 seconds, yielding one key frame image per interval.
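  • As a concrete illustration of this decoding step, the following is a minimal sketch assuming OpenCV (`cv2`) as the decoder, which the patent does not name; `sample_key_frames` and its parameters are illustrative only:
```python
import cv2  # assumption: OpenCV is used for decoding

def sample_key_frames(stream_url: str, interval_s: float = 2.0):
    """Decode roughly one frame every `interval_s` seconds from a stream,
    approximating the 'preset decoding frame rate' described above."""
    cap = cv2.VideoCapture(stream_url)        # e.g. an RTSP URL
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0   # fall back if the stream reports 0
    step = max(1, round(fps * interval_s))    # source frames per sampled frame
    frames, idx = [], 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        if idx % step == 0:
            frames.append(frame)              # one key frame per interval
        idx += 1
    cap.release()
    return frames                             # ordered by acquisition time
```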
  • the multiple key frame images are sorted according to their acquisition time.
  • the preset decoding frame rate can also be set according to the specific scene. For example, on a busy street or road section the preset decoding frame rate should be higher,
  • while on a relatively deserted road section it should be lower.
  • Similarly, if the scene target includes fast-moving targets, the preset decoding frame rate may be higher, and if the scene target includes a slow-moving target such as a stopped vehicle, the preset decoding frame rate may be lower.
  • the video stream data of different preset scene areas may be set with different event identifiers.
  • the event identifier is used to represent the urban management event corresponding to the video stream data.
  • the event identifier can be, for example, the identifier of an urban management event such as an illegal parking event, an illegal stall-setting event, or a garbage dumping event. It can be understood that the event identifiers corresponding to different types of city management events are different.
  • the event identifier corresponding to the video stream data may be determined based on the identification information of the camera from which the video stream data originates. Of course, it can be understood that the video streams shot at different times in the same preset scene area may correspond to different city management events.
  • For example, the city management event corresponding to video stream data captured in the preset scene area between 12:00 a.m. and 5:00 p.m. is an illegal parking event,
  • while at other times the city management event corresponding to the video stream data is an illegal stall-setting event.
  • The same preset scene area can also correspond to multiple urban management events at the same time.
  • For example, the video stream data of a preset scene area A may be used both for illegal parking judgment and for garbage dumping judgment. It can be understood that the scene target types and the location/duration requirements corresponding to the two urban management events may be different.
  • this step S102 may specifically include: detecting whether there is a scene target corresponding to the event in the at least one key frame image by using a scene target detection model.
  • the event may be a city management event.
  • Whether a key frame image contains the scene target may be determined directly from the output of the scene target detection model, or determined further on the basis of that output.
  • the scene target detection model is pre-trained.
  • the scene target detection model may correspond to one or more specified event types, and can detect the scene targets of the corresponding event types.
  • the scene target detection model may be a general model for various urban management events, or may be a scene target detection model for specific types of urban management events.
  • step S102 may include the following sub-steps: S1021, acquiring a scene target detection model corresponding to the event; S1022, detecting, by the scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.
  • different types of urban management events may correspond to different types of scene target detection models;
  • the database may be queried with the urban management event identifier corresponding to the preset scene area to obtain the scene target detection model corresponding to that urban management event.
  • the illegal parking event corresponds to the A model
  • the illegal stall setting event corresponds to the B model
  • the garbage dumping event corresponds to the C model.
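  • This event-to-model mapping could be held in a simple registry, sketched below; the identifiers and model names are placeholders for whatever the database actually stores, which the patent does not specify:
```python
# Hypothetical registry mapping urban-management-event identifiers to
# scene target detection models; the patent only says a database is
# queried with the event identifier.
MODEL_REGISTRY = {
    "illegal_parking": "model_A",
    "illegal_stall_setting": "model_B",
    "garbage_dumping": "model_C",
}

def model_for_event(event_id: str) -> str:
    """Return the scene target detection model registered for an event."""
    return MODEL_REGISTRY[event_id]
```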
  • different scene target detection models are configured to detect different scene targets.
  • the scene targets corresponding to illegal parking incidents are cars
  • the scene targets of illegal stall setting events are tricycles, fruits, and operators.
  • for different environmental conditions, different scene target detection models can also be used.
  • the illegal parking event in the foggy weather corresponds to the A1 model
  • the illegal parking event in the clear and fog-free weather corresponds to the A2 model.
  • the output result of the scene target detection model includes the confidence of the scene target; if the confidence is higher than a preset confidence threshold, the scene target is considered to exist.
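  • A minimal sketch of this confidence-threshold check; the 0.5 default and the detection format are assumptions, not values from the patent:
```python
def accept_scene_targets(detections, conf_thresh=0.5):
    """detections: (box, confidence) pairs for the scene-target class.
    A scene target counts as present only above the preset threshold."""
    return [box for box, conf in detections if conf > conf_thresh]
```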
  • the scene target detection model corresponding to a given city management event can be trained as follows. S11: label the scene targets in sample images to obtain annotation information, where the annotation information includes the classification information and position information of each sub-target constituting the scene target in the sample image, and the position information of the scene target in the sample image. S12: according to the annotation information, perform label assignment on the preset boxes (for anchor-based detection models) or position points (for anchor-free detection models) corresponding to the sample image, obtaining sample labels of those preset boxes or position points. S13: perform the iterative steps until the loss converges; the iterative steps
  • include: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; and updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model. S14: use the model obtained after the loss converges as the scene target detection model.
  • a preset loss function can be used to determine the loss, and the parameters of the initial scene target detection model are updated according to the loss via the back-propagation algorithm, until the model outputs the expected results; parameter updates then stop (i.e., training ends), and the model at that point is the trained scene target detection model.
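  • The iterative steps S13/S14 could look like the following sketch, assuming a PyTorch model; the actual architecture, loss function, optimizer, and label-assignment scheme are not specified in the patent:
```python
import torch

def train_until_converged(model, loader, loss_fn,
                          lr=1e-3, max_epochs=100, tol=1e-4):
    """Forward pass, loss against the assigned sample labels,
    back-propagation, parameter update, repeated until the loss converges.
    `loss_fn(outputs, labels)` stands in for the unspecified preset loss."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    previous = float("inf")
    for _ in range(max_epochs):
        total = 0.0
        for images, labels in loader:   # labels from preset-box/point assignment
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()             # back-propagation
            optimizer.step()            # update model parameters
            total += loss.item()
        if abs(previous - total) < tol: # loss has converged; stop updating
            break
        previous = total
    return model                        # the trained scene target detection model
```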
  • a sample set is obtained first, and the sample set includes a plurality of sample images.
  • the plurality of sample images may include sample images containing scene targets corresponding to the city management event, as well as sample images without such scene targets.
  • the scene targets in a sample image can be labeled manually, automatically by a preset algorithm capable of recognizing them, or by a combination of both. Not only the position information of the scene target can be labeled, but also the classification information and position information of each sub-target composing it.
  • the position label of the scene target may be generated automatically, according to preset rules, from the annotation information (category information and position information) of the sub-targets constituting the scene target.
  • A preset rule is a rule corresponding to the scene target; it describes the conditions that the sub-targets in a sample image must satisfy for the scene target to be considered present in that image.
  • For example, suppose the classifications L1, L2, L3, L3 and the position information of four sub-targets A, B, C, and D have been labeled in a sample image.
  • Suppose the preset rule is: the sample image contains a sub-target of category L1 and a sub-target of category L3, and the position overlap rate between them is greater than 30%; when this rule is satisfied, a scene target is considered to exist in the sample image.
  • Alternatively, the preset rule may be: the sample image contains sub-targets of all three categories L1, L2, and L3.
  • When a scene target is considered present, its position information can be generated from the position information of the sub-targets (for example, the geometric center of the sub-targets is taken as the center of the scene target).
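  • A sketch of such rule-based automatic labeling, using the "L1 overlaps L3 by more than 30%" example; the patent does not fix the exact definition of the overlap rate, so intersection over the smaller box is assumed here:
```python
def overlap_rate(a, b):
    """Assumed 'position overlap rate': intersection area over the
    smaller box's area, boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    smaller = min((a[2] - a[0]) * (a[3] - a[1]),
                  (b[2] - b[0]) * (b[3] - b[1]))
    return inter / smaller if smaller > 0 else 0.0

def scene_target_label(sub_annotations):
    """sub_annotations: [(category, box)]. Applies the example preset rule:
    an L1 sub-target and an L3 sub-target overlapping by more than 30%."""
    l1_boxes = [box for cat, box in sub_annotations if cat == "L1"]
    l3_boxes = [box for cat, box in sub_annotations if cat == "L3"]
    return any(overlap_rate(a, b) > 0.30 for a in l1_boxes for b in l3_boxes)
```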
  • the labeling information of the scene target can be automatically generated according to the preset rules.
  • annotation information corresponding to multiple scene targets can be generated according to the sub-target annotation information.
  • the same sample image can then serve both as a training sample for the detection model of scene target A and as a training sample for the detection model of scene target B.
  • In other words, the sub-targets in a sample image need to be labeled only once, and the image can be reused as a sample for different scene target detection models without being re-labeled for each model.
  • the scene target detection model is a general model for various urban management events
  • the scene target detection model can be used to detect sub-targets, and then determine whether there is a scene target and the location of the scene target according to the detection results.
  • the scene target detection model is a detection model capable of detecting various types of targets such as vehicles and objects.
  • some scene targets of urban management events include multiple sub-targets, and when the classification and location information of the multiple sub-targets conform to preset rules, it is considered that there are scene targets in the sample image.
  • step S102 may include the following sub-steps S1023-S1024:
  • if the sub-target categories, the number of sub-targets, and the sub-target position information present in the at least one key frame image satisfy at least one of the following conditions, it is determined that a scene target corresponding to the event exists in the at least one key frame image: the number of sub-target categories present in the at least one key frame image is greater than a preset category threshold; the number of sub-targets present in the at least one key frame image is greater than a preset quantity threshold;
  • the number of targets of a specific sub-target category present in the at least one key frame image is greater than a preset quantity threshold; the position information between the sub-targets present in the at least one key frame image satisfies a preset position condition.
  • the sub-target type, the number of sub-targets, and the sub-target location information may be obtained through detection by the scene target detection model, or obtained based on the detection result of the scene target detection model.
  • the scene object detection model detects at least one of the confidence level (used to indicate whether the sub-object exists), the type, and the position information of the sub-object in the key frame image.
  • the sub-target category refers to the category each sub-target belongs to. For example, sub-target A belongs to the clothing category and sub-target B to the vehicle category.
  • the number of sub-targets can be the total number of sub-targets in the image, or the number of sub-targets of each category.
  • the sub-target position information is the position of the sub-target, which can be represented by the upper-left and lower-right coordinates of its bounding box.
  • whether a scene target exists in a key frame image may thus be determined from the sub-target categories, counts, and position information detected by the scene target detection model in at least one key frame image. For example, when a sub-target of a first category and a sub-target of a second category both exist and their position overlap ratio is greater than an overlap threshold, a scene target is considered to exist; alternatively, when at least two sub-targets of the first category exist and no sub-target of the second category exists, a scene target is considered to exist.
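  • The condition checks described above could be sketched as follows; all thresholds are illustrative configuration values, not values from the patent:
```python
def has_scene_target(detections, cat_thresh=None, num_thresh=None,
                     per_cat_thresh=None):
    """detections: [(category, confidence, box)] already filtered by the
    confidence threshold. Returns True when any configured preset
    condition from the text holds."""
    categories = [cat for cat, _, _ in detections]
    if cat_thresh is not None and len(set(categories)) > cat_thresh:
        return True                 # more sub-target categories than threshold
    if num_thresh is not None and len(detections) > num_thresh:
        return True                 # more sub-targets overall than threshold
    if per_cat_thresh:
        for cat, thresh in per_cat_thresh.items():
            if categories.count(cat) > thresh:
                return True         # enough targets of a specific category
    return False
```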
  • For example, for an illegal stall-setting event, the sub-targets of the scene target must at least include a tricycle or other open vehicle and the goods placed on it (for example, common small commodities such as fruit, snacks, toys, or books), and may also include an operator and a certain number of onlookers or buyers.
  • whether the actual position is within the preset area range of the preset scene area is determined as follows (S1031 and S1032). S1031: judge whether the degree of overlap between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold; the degree of overlap may be determined by the IoU between the actual area and the preset area. S1032: if it is greater than the preset threshold, determine that the actual position is within the preset area range of the preset scene area. In step S1031, the preset threshold may be set based on the specific urban management event.
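  • A minimal sketch of steps S1031/S1032 using IoU as the degree of overlap; the 0.5 threshold is illustrative:
```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    iw = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    ih = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = iw * ih
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def in_preset_area(actual_box, preset_box, overlap_thresh=0.5):
    """S1031/S1032: the actual position counts as inside the preset area
    when the IoU clears the preset threshold."""
    return iou(actual_box, preset_box) > overlap_thresh
```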
  • the number of key frame images in which the scene target continuously appears is used as the stay duration. Whether the stay duration is greater than the preconfigured target duration is determined as follows: S1033, if the stay duration is greater than a preset number of frames, determine that the stay duration is greater than the preconfigured target duration. For example, if the scene target is detected in N consecutive key frame images decoded at a rate of one frame every 2 seconds, the stay duration is 2N seconds.
  • step S104 may specifically include: if the actual position is within the preset area range of the preset scene area and/or the stay duration is greater than a preconfigured target duration, judging that the urban management event has occurred.
  • the preset area range and the target duration are preset based on the type of the city management event.
  • the preset area ranges and target durations corresponding to different urban management events may be different.
  • In some cases, the urban management event can be judged to have occurred when the actual position is within the preset area range; in others, when the stay duration is greater than the preconfigured target duration; in still others, both conditions must be satisfied at the same time. For example, an illegal parking event must satisfy both the location and duration requirements, while a wrong-way driving event only needs to satisfy the location requirement.
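  • A sketch of this per-event combination logic; names and defaults are illustrative, and the duration is compared in key-frame counts as in step S1033:
```python
def event_occurred(in_area: bool, frames_seen: int, target_frames: int,
                   need_area: bool = True, need_duration: bool = True) -> bool:
    """S104 combination: e.g. illegal parking needs both the location and
    duration conditions, while wrong-way driving needs only location."""
    conditions = []
    if need_area:
        conditions.append(in_area)
    if need_duration:
        conditions.append(frames_seen > target_frames)
    return bool(conditions) and all(conditions)
```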
  • For different urban management events, the required stay duration of the corresponding scene target in the preset scene area differs.
  • For some events the target duration is set relatively short, for example 3 or 5 seconds;
  • for others it is set relatively long, for example 30 seconds or 1 minute, though other durations are of course possible.
  • In summary, the event detection method acquires video stream data of the preset scene area and obtains at least one key frame image from the video stream data;
  • detects whether a scene target corresponding to the event exists in the key frame image(s); determines the actual position and/or stay duration of the scene target in the preset scene area; and judges whether the event corresponding to the preset scene area has occurred, thereby realizing event detection. Because whether the event occurs is judged by recognizing the scene target in combination with its spatiotemporal information in the preset scene area, the misjudgment rate is reduced and detection accuracy can be improved.
  • FIG. 2 is a schematic structural diagram of an event detection apparatus according to an embodiment of the present application.
  • the event detection apparatus can also be implemented by the aforementioned electronic equipment, and the event detection apparatus includes: a first acquisition module 201 , a detection module 202 , a second acquisition module 203 and a judgment module 204 .
  • the first acquisition module 201 is configured to acquire video stream data of a preset scene area, and acquire at least one key frame image according to the video stream data.
  • the video stream data may be acquired via RTSP (Real Time Streaming Protocol).
  • the key frame images may be acquired by means of key frame decoding.
  • the first obtaining module 201 is specifically configured to: decode the video stream data at a preset decoding frame rate, and obtain multiple key frame images.
  • the preset decoding frame rate may be set according to the actual situation. For example, the preset decoding frame rate may be set to decode a key frame every 2 seconds to obtain a key frame image.
  • the plurality of key frame images are sequenced according to the time axis.
  • the preset decoding frame rate can be set according to a specific scene. For example, if it is on a busy street or road section, the preset decoding frame rate should be larger. In a relatively deserted road section, the preset decoding frame rate should be smaller.
  • the video stream data of different preset scene areas are set with different event identifiers.
  • the event identifier is used to represent the urban management event corresponding to the video stream data.
  • the event identifier can be, for example, the identifier of an urban management event such as an illegal parking event, an illegal stall-setting event, or a garbage dumping event. It can be understood that the event identifiers corresponding to different urban management events are different.
  • the identification information of the camera from which the video stream data is sourced can be obtained, and then the corresponding event identification can be obtained based on the identification information.
  • the city management events for the same preset scene area can be changed.
  • For example, at one time the city management event corresponding to the preset scene area is an illegal parking event,
  • and at another time the city management event corresponding to the preset scene area is an illegal stall-setting event;
  • of course, it is not limited to this.
  • The same preset scene area can also correspond to multiple urban management events at the same time, for example being used for both illegal parking judgment and garbage dumping judgment.
  • the detection module 202 is configured to detect whether a scene target corresponding to the event exists in the at least one key frame image. Because the city management events corresponding to different preset scene areas differ, detecting whether a scene target corresponding to the preset scene area exists in the key frame image requires first confirming the urban management event corresponding to that area and then determining the corresponding scene target based on the event. When recognizing the scene target, either a conventional image recognition method or a pre-trained target detection model can be used.
  • the detection module 202 is configured to: detect whether there is a scene object corresponding to the event in the at least one key frame image by using a scene object detection model.
  • the scene target detection model is obtained by pre-training, and the scene target detection model may be a general model for all urban management events, or may be a specially trained scene target detection model for a single type of urban management event.
  • the detection module 202 is configured to: acquire a scene target detection model corresponding to the event; and detect whether there is a scene target corresponding to the event in the at least one key frame image by using the scene target detection model.
  • different types of urban management events can correspond to different types of scene target detection models.
  • the illegal parking event corresponds to the A model
  • the illegal stall setting event corresponds to the B model
  • the garbage dumping event corresponds to the C model.
  • the scene targets of illegal parking events are cars
  • the scene targets of illegal stall setting events are tricycles, fruits, and operators. Therefore, the database can be queried to obtain the corresponding scene target according to the event identifier, and then an appropriate scene target recognition model can be selected based on the scene target.
  • the illegal parking event in the foggy weather corresponds to the A1 model
  • the illegal parking event in the clear and fog-free weather corresponds to the A2 model
  • the position of each scene target can be given directly by the scene target detection model.
  • the scene target includes a plurality of sub-targets
  • the detection module 202 is configured to: detect the at least one key frame image by a scene target detection model, obtaining at least one of the sub-target categories, the number of sub-targets, and the sub-target position information present in the at least one key frame image; and, if the sub-target categories and/or the number of sub-targets present in the at least one key frame image satisfy at least one of the following conditions, determine that a scene target corresponding to the event exists in the at least one key frame image: the number of sub-target categories present in the at least one key frame image is greater than a preset category threshold; the number of sub-targets present in the at least one key frame image is greater than a preset quantity threshold; the number of targets of a specific sub-target category present in the at least one key frame image is greater than a preset quantity threshold; the position information between the sub-targets present in the at least one key frame image satisfies a preset position condition.
  • the device further includes a model training module configured to: label the scene target in a sample image to obtain annotation information, where the annotation information includes the classification information and position information of each sub-target constituting the scene target in the sample image, and the position information of the scene target in the sample image; perform, according to the annotation information, label assignment on the preset boxes or position points corresponding to the sample image, to obtain sample labels of the preset boxes or position points; and perform iterative steps until the loss converges, where the iterative steps include: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model; and using the scene target detection model obtained after the loss converges as the scene target detection model.
  • the second obtaining module 203 is configured to: if it exists, determine the actual position and/or stay duration of the scene target in the preset scene area.
  • the position of the scene target can be output directly by the target detection model.
  • otherwise, the position of the scene target is calculated from the positions of its sub-targets; for example, the geometric center of the areas where the multiple sub-targets are located can be used as the position of the scene target.
  • the second obtaining module 203 is configured to be at least one of the following:
  • detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the position of the scene target included in the detection output;
  • detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the category, confidence, and position information of the sub-targets included in the detection output;
  • detecting the at least one key frame image by the scene target detection model, and using the number of key frame images in which the scene target continuously appears as the stay duration.
  • whether the actual position is within the preset area range of the preset scene area is judged as follows: judging whether the degree of overlap between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold; if it is greater than the preset threshold, judging that the actual position is within the preset area range of the preset scene area.
  • the setting of the preset threshold may be set based on a specific urban management event.
  • the number of key frame images in which the scene target corresponding to the urban management event continuously appears is used as the stay duration.
  • whether the stay duration is greater than the preconfigured target duration is determined as follows: if the stay duration is greater than a preset number of frames, it is determined that the stay duration is greater than the preconfigured target duration.
  • the judging module 204 is configured to judge, according to the actual position and/or stay duration, whether the urban management event corresponding to the preset scene area has occurred. The judgment is based on calculating whether the actual position of the scene target is within the preset area range of the preset scene area, and on the relationship between the scene target's stay duration in the preset scene area and the target duration. The judging module 204 is configured to judge that the urban management event has occurred if the actual position is within the preset area range of the preset scene area and/or the stay duration is greater than a preconfigured target duration.
  • the judging module is configured to: judge whether the degree of overlap between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold; if so, judge that the actual position is within the preset area range of the preset scene area; use the number of key frame images in which the scene target continuously appears as the stay duration; and, if the stay duration is greater than a preset number of frames, determine that the stay duration is greater than the preconfigured target duration.
  • the above content specifically describes the method of judging whether the actual position is within the preset area of the preset scene area and the method of judging whether the stay duration is greater than the preconfigured target duration.
  • the preset area range and the target duration are preset based on the type of the city management event. Different urban management events correspond to different preset area ranges and corresponding target durations.
  • When the actual position is within the preset area range, it can be judged that the urban management event has occurred; or when the stay duration is greater than the preconfigured target duration, it can be judged that the urban management event has occurred; or both conditions must be satisfied at the same time. For example, an illegal parking event must satisfy both the location and duration requirements, while a wrong-way driving event only needs to satisfy the location requirement.
  • For different urban management events, the required stay duration of the corresponding scene target in the preset scene area differs.
  • For some events the target duration is set relatively short, for example 3 or 5 seconds;
  • for others it is set relatively long, for example 30 seconds or 1 minute, though other durations are of course possible.
  • In summary, the event detection device acquires video stream data of the preset scene area and obtains at least one key frame image from the video stream data;
  • detects whether a scene target corresponding to the event exists in the key frame image(s); determines the actual position and/or stay duration of the scene target in the preset scene area; and judges whether the event corresponding to the preset scene area has occurred, thereby realizing event detection. Because whether the event occurs is judged by recognizing the scene target in combination with its spatiotemporal information in the preset scene area, the false positive rate is reduced and detection accuracy can be improved.
  • FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
  • An embodiment of the present application provides an electronic device 3, including a processor 301 and a memory 302 that are interconnected and communicate with each other through a bus 303 and/or other forms of connection mechanisms (not shown). The memory 302 stores a computer program executable by the processor 301,
  • and the processor 301 executes the computer program to perform the method in any optional implementation of the foregoing embodiments.
  • An embodiment of the present application provides a storage medium storing a computer program; when the computer program is executed by a processor, the method in any optional implementation of the foregoing embodiments is performed.
  • the storage medium can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
  • the disclosed apparatus and method may be implemented in other manners.
  • the apparatus embodiments described above are only illustrative.
  • the division of the units is only a logical function division.
  • multiple units or components may be combined or integrated into another system, or some features may be ignored or not implemented.
  • the shown or discussed mutual coupling or direct coupling or communication connection may be through some communication interfaces, indirect coupling or communication connection of devices or units, which may be in electrical, mechanical or other forms.
  • units described as separate components may or may not be physically separated, and components shown as units may or may not be physical units, that is, may be located in one place, or may be distributed to multiple network units. Some or all of the units may be selected according to actual needs to achieve the purpose of the solution in this embodiment.
  • the functional modules in the embodiments of the present application may be integrated together to form an independent part, or each module may exist independently, or two or more modules may be integrated to form an independent part.
  • In the embodiments of the present application, whether an event occurs is judged by recognizing the scene target in combination with its spatiotemporal information in the preset scene area, which reduces the misjudgment rate and can improve the accuracy of detecting whether the event occurs.

Abstract

An event detection method and apparatus, an electronic device, and a storage medium. The event detection method includes: acquiring video stream data of a preset scene area, and acquiring at least one key frame image according to the video stream data; detecting whether a scene target corresponding to the event exists in the at least one key frame image; determining an actual position and/or a stay duration of the scene target in the preset scene area; and judging, according to whether the scene target exists and according to the actual position and/or the stay duration, whether the event corresponding to the preset scene area has occurred. By combining the recognition of the scene target with the spatiotemporal information of the scene target in the preset scene area to judge whether the corresponding event has occurred, the method reduces the misjudgment rate and can improve detection accuracy.

Description

Event detection method and apparatus, electronic device, and storage medium

Cross-reference to related applications

This application claims priority to the Chinese patent application No. 2020113252059, entitled "Event detection method and apparatus, electronic device, and storage medium" and filed with the Chinese Patent Office on November 23, 2020, the entire contents of which are incorporated herein by reference.

Technical field

The present application relates to the field of computer network technologies, and in particular to an event detection method and apparatus, an electronic device, and a storage medium.

Background

With the current development of computer vision recognition technology, the recognition of fixed-feature targets such as faces, human bodies, motor vehicles, and non-motor vehicles has matured, is applied more and more in the security field, and its application scenarios are becoming increasingly complex. In the field of urban management, computer vision recognition leans more toward the recognition of objects, behaviors, and events, that is, scene-based detection and recognition.

Under related technical conditions, a single recognized target (person, vehicle, object, animal, etc.) or a combination of multiple targets (target 1 + target 2 + target 3 + ... + target n) that matches the features is presented or pushed as the detection result by item recognition technology.

However, it is difficult to accurately judge whether an event has occurred based on item recognition alone.
Summary

The purpose of the embodiments of the present application is to provide an event detection method and apparatus, an electronic device, and a storage medium, which can improve the accuracy of detecting whether an event has occurred.

An embodiment of the present application provides an event detection method, including:

acquiring video stream data of a preset scene area, and acquiring at least one key frame image according to the video stream data;

detecting whether a scene target corresponding to the event exists in the at least one key frame image;

determining an actual position and/or a stay duration of the scene target in the preset scene area;

judging, according to whether the scene target exists and according to the actual position and/or the stay duration, whether the event corresponding to the preset scene area has occurred.
Optionally, in the event detection method described in the embodiments of the present application, detecting whether a scene target corresponding to the event exists in the at least one key frame image includes:

detecting, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.

Optionally, in the event detection method described in the embodiments of the present application, detecting, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image includes:

acquiring a scene target detection model corresponding to the event;

detecting, by the scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.

Optionally, in the event detection method described in the embodiments of the present application, the scene target detection model is trained by the following method:

annotating the scene targets in sample images to obtain annotation information, where the annotation information includes the classification information and position information of each sub-scene target constituting a scene target in the sample image, and the position information of the scene target in the sample image;

assigning, according to the annotation information, labels to the preset boxes or position points corresponding to the sample image, to obtain sample labels of the preset boxes or position points corresponding to the sample image;

performing an iteration step until the loss converges, where the iteration step includes: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; and updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model; and taking the scene target detection model obtained after the loss converges as the scene target detection model.
Optionally, in the event detection method described in the embodiments of the present application, the scene target includes multiple sub-targets;

detecting, by the scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image includes:

detecting the at least one key frame image by the scene target detection model, to obtain the sub-target types and/or the number of sub-targets present in the at least one key frame image;

if the sub-target types and/or the number of sub-targets present in the at least one key frame image satisfy at least one of the following conditions, determining that a scene target corresponding to the urban management event exists in the at least one key frame image:

the number of sub-target types present in the at least one key frame image is greater than a preset type threshold;

the number of sub-targets present in the at least one key frame image is greater than a preset number threshold;

the number of targets of a specific sub-target type present in the at least one key frame image is greater than a preset number threshold.
Optionally, in the event detection method described in the embodiments of the present application, determining the actual position and/or the stay duration of the scene target in the preset scene area includes at least one of the following:

detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the position of the scene target contained in the output result of the detection;

detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the categories, confidences, and position information of the sub-targets contained in the output result of the detection;

detecting the at least one key frame image by the scene target detection model, and taking the number of key frame images in which the scene target appears consecutively as the stay duration.
Optionally, in the event detection method described in the embodiments of the present application, judging, according to the actual position and/or the stay duration, whether the urban management event corresponding to the preset scene area has occurred includes:

if the actual position is within a preset area range of the preset scene area and/or the stay duration is greater than a preconfigured target duration, judging that the urban management event has occurred.

Optionally, in the event detection method described in the embodiments of the present application, whether the actual position is within the preset area range of the preset scene area is judged in the following manner:

judging whether the overlap degree between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold;

if it is greater than the preset threshold, judging that the actual position is within the preset area range of the preset scene area.

Optionally, in the event detection method described in the embodiments of the present application, the number of key frame images in which the scene target appears consecutively is taken as the stay duration;

whether the stay duration is greater than the preconfigured target duration is judged in the following manner: if the stay duration is greater than the preset number of frames, determining that the stay duration is greater than the preconfigured target duration.

Optionally, in the event detection method described in the embodiments of the present application, acquiring at least one key frame image according to the video stream data includes:

decoding the video stream data at a preset decoding frame rate to obtain multiple key frame images.
An embodiment of the present application further provides an event detection apparatus, including:

a first acquisition module, configured to acquire video stream data of a preset scene area and acquire at least one key frame image according to the video stream data;

a detection module, configured to detect whether a scene target corresponding to the event exists in the at least one key frame image;

a second acquisition module, configured to, if the scene target exists, determine the actual position and/or the stay duration of the scene target in the preset scene area;

a judgment module, configured to judge, according to the actual position and/or the stay duration, whether the event corresponding to the preset scene area has occurred.

Optionally, the detection module is configured to:

detect, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.

Optionally, the detection module is configured to:

acquire a scene target detection model corresponding to the event;

detect, by the scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.

Optionally, the apparatus further includes a model training module, configured to:

annotate the scene targets in sample images to obtain annotation information, where the annotation information includes the classification information and position information of each sub-scene target constituting a scene target in the sample image, and the position information of the scene target in the sample image;

assign, according to the annotation information, labels to the preset boxes or position points corresponding to the sample image, to obtain sample labels of the preset boxes or position points corresponding to the sample image;

perform an iteration step until the loss converges, where the iteration step includes: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; and updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model; and take the scene target detection model obtained after the loss converges as the scene target detection model.
Optionally, the scene target includes multiple sub-targets; the detection module is configured to:

detect the at least one key frame image by the scene target detection model, to obtain at least one of the sub-target types, the number of sub-targets, and the position information of the sub-targets present in the at least one key frame image;

if the sub-target types and/or the number of sub-targets present in the at least one key frame image satisfy at least one of the following conditions, determine that a scene target corresponding to the event exists in the at least one key frame image:

the number of sub-target types present in the at least one key frame image is greater than a preset type threshold;

the number of sub-targets present in the at least one key frame image is greater than a preset number threshold;

the number of targets of a specific sub-target type present in the at least one key frame image is greater than a preset number threshold;

the position information between the sub-targets present in the at least one key frame image satisfies a preset position condition.

Optionally, the second acquisition module is configured to perform at least one of the following:

detect the at least one key frame image by the scene target detection model, and determine the actual position of the scene target in the preset scene area according to the position of the scene target contained in the output result of the detection;

detect the at least one key frame image by the scene target detection model, and determine the actual position of the scene target in the preset scene area according to the categories, confidences, and position information of the sub-targets contained in the output result of the detection;

detect the at least one key frame image by the scene target detection model, and take the number of key frame images in which the scene target appears consecutively as the stay duration.

Optionally, the judgment module is configured to:

if the actual position is within the preset area range of the preset scene area and/or the stay duration is greater than the preconfigured target duration, judge that the event has occurred.

Optionally, the judgment module is configured to:

judge whether the overlap degree between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold; if it is greater than the preset threshold, judge that the actual position is within the preset area range of the preset scene area;

take the number of key frame images in which the scene target appears consecutively as the stay duration; if the stay duration is greater than the number of frames, determine that the stay duration is greater than the preconfigured target duration.

Optionally, the first acquisition module is configured to:

decode the video stream data at a preset decoding frame rate to obtain multiple key frame images.
An embodiment of the present application provides an electronic device, including a processor and a memory, where the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the steps of the method provided in the first aspect above are run.

An embodiment of the present application provides a storage medium on which a computer program is stored, and when the computer program is executed by a processor, the steps of the method provided in the first aspect above are run.

Other features and advantages of the present application will be set forth in the subsequent description and will in part become apparent from the description or be understood by implementing the embodiments of the present application. The purposes and other advantages of the present application can be realized and obtained by the structures particularly pointed out in the written description, the claims, and the drawings.
Brief description of the drawings

In order to describe the technical solutions of the embodiments of the present application more clearly, the drawings needed in the embodiments of the present application are briefly introduced below. It should be understood that the following drawings show only certain embodiments of the present application and should therefore not be regarded as limiting the scope; for those of ordinary skill in the art, other related drawings can be obtained from these drawings without creative effort.

FIG. 1 is a flowchart of an event detection method provided by an embodiment of the present application.

FIG. 2 is a structural diagram of an event detection apparatus provided by an embodiment of the present application.

FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application.
Detailed description of embodiments

The technical solutions in the embodiments of the present application will be described below clearly and completely with reference to the drawings in the embodiments of the present application. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments of the present application generally described and shown in the drawings herein may be arranged and designed in various different configurations. Therefore, the following detailed description of the embodiments of the present application provided in the drawings is not intended to limit the scope of the claimed application, but merely represents selected embodiments of the present application. Based on the embodiments of the present application, all other embodiments obtained by those skilled in the art without creative effort fall within the scope of protection of the present application.

It should be noted that similar reference numerals and letters denote similar items in the following drawings; therefore, once an item is defined in one drawing, it does not need to be further defined or explained in subsequent drawings. Meanwhile, in the description of the present application, the terms "first", "second", and the like are only used to distinguish the descriptions and cannot be understood as indicating or implying relative importance.
Referring to FIG. 1, FIG. 1 is a flowchart of an event detection method in an embodiment of the present application. An urban management event is taken as an example throughout this text, but the method is not limited to urban management events; that is, the event detection method can also be applied to other occasions, which is not limited here. The event detection method can be executed by an electronic device such as a computer, a server, a mobile phone, a monitoring terminal, or a robot; the embodiments of the present application do not limit the electronic device that executes the event detection method, as long as it has image processing capability and data processing capability. As shown in FIG. 1, the method mainly includes the following steps S101 to S104:

S101: acquiring video stream data of a preset scene area, and acquiring at least one key frame image according to the video stream data.

S102: detecting whether a scene target corresponding to the event exists in the at least one key frame image.

S103: determining the actual position and/or the stay duration of the scene target in the preset scene area.

S104: judging, according to whether the scene target exists and according to the actual position and/or the stay duration, whether the event corresponding to the preset scene area has occurred.
In step S101, the video stream data (RTSP, Real Time Streaming Protocol) may be real-time video stream data imported by the electronic device executing the event detection method from a camera in a certain monitored area of a city. In the embodiments of the present application, key frame decoding can be used to acquire the multiple key frame images. The video stream data shows the picture of the preset scene area (for example, the field of view of the camera shooting the video stream data). For a preset scene area, its corresponding event can be preset; the event may be, for example, an urban management event, so that whether a scene target corresponding to the urban management event exists in the key frame images decoded from the video stream data can be judged. For example, for a camera on a side road, the urban management event of the preset scene area corresponding to the camera can be set to illegal parking; for a camera in front of street-side shops, it can be set to drying clothes along the street or running a business that occupies the road.

In step S102, the urban management events corresponding to different preset scene areas may be different. Therefore, to detect whether a scene target corresponding to the preset scene area exists in a key frame image, the urban management event corresponding to the preset scene area needs to be confirmed first, and then the corresponding scene target is determined based on the urban management event.

An urban management event corresponds to a scene target, and the types of scene targets corresponding to different types of urban management events may be different. A scene target may consist of a single target or a combination of multiple targets. For example, for the urban management event of garbage piling, the scene target is a single target: garbage; for the urban management event of drying clothes along the street, the scene target is a combination of multiple sub-targets: clothes and hangers.

Detecting whether a scene target exists can be implemented in multiple ways, for example, by conventional image processing methods, or by a scene detection algorithm or a scene target detection model in a preset manner. Optionally, whether a scene target exists can be directly determined from the output result of the scene target detection model. For example, the output result of the scene target detection model includes the scene target position and the scene target confidence; when the scene target confidence is greater than a preset confidence threshold, the scene target is considered to exist. Optionally, on the basis of the output result of the scene target detection model, whether the output result meets the requirements needs to be further judged before it can be determined whether a scene target exists. For example, the output result of the scene target detection model includes the positions, categories, and confidences of sub-targets, and whether a scene target exists is determined according to the positions, categories, and confidences of the sub-targets.
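By way of illustration only, the first variant above (deciding presence directly from the model output) can be sketched as follows. This is a minimal sketch under assumptions: the function name, the detection tuple layout, and the 0.5 threshold are illustrative, not taken from this application.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image coordinates

def present_scene_targets(
    detections: List[Tuple[Box, float]],  # (box, confidence) pairs from the model
    confidence_threshold: float = 0.5,    # the "preset confidence threshold"
) -> List[Box]:
    """Keep only detections whose confidence exceeds the preset threshold.

    A non-empty result means a scene target is considered to exist in the frame.
    """
    return [box for box, confidence in detections if confidence > confidence_threshold]
```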
In step S103, the actual position and/or the stay duration of the scene target may be determined after it is determined in step S102 that a scene target corresponding to the urban management event exists, or whether a scene target exists may be detected and the actual position of the scene target determined at the same time.

Optionally, the actual position of the scene target is determined while detecting whether the scene target exists. The scene target detection model outputs the scene target position and the scene target confidence; when the scene target confidence is greater than the preset confidence threshold, the scene target is considered to exist, and the scene target position output by the scene target detection model is the actual position of the scene target. It can be understood that when the scene target confidence is not greater than the preset confidence threshold, the scene target is considered not to exist, and the "scene target position" output by the target detection model is not used to represent the actual position of the scene target.

Optionally, the actual position of the scene target may be determined after it is determined that the scene target exists. For example, the categories, confidences, and position information of the sub-targets are determined according to the output result of the scene target detection model, and then whether a scene target exists is determined according to the categories, confidences, and position information of the sub-targets. After it is determined that the scene target exists, the position of the scene target is determined according to the positions of the sub-targets; for example, the geometric center of the positions of the multiple sub-targets can be taken as the position of the scene target.
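For the geometric-center rule mentioned above, a small illustrative sketch follows (again an assumption-laden sketch, not the application's prescribed implementation; it presumes a non-empty list of sub-target boxes):

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def scene_target_center(sub_target_boxes: List[Box]) -> Tuple[float, float]:
    """Take the mean of the sub-target box centers as the scene target position."""
    centers = [((x1 + x2) / 2.0, (y1 + y2) / 2.0)
               for x1, y1, x2, y2 in sub_target_boxes]
    n = len(centers)
    return (sum(x for x, _ in centers) / n, sum(y for _, y in centers) / n)
```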
Optionally, determining the actual position and/or the stay duration of the scene target in the preset scene area in step S103 includes at least one of the following items (1) to (3):

(1) detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the position of the scene target contained in the output result of the detection.

(2) detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the categories, confidences, and position information of the sub-targets contained in the output result of the detection. The category of a sub-target may also be called the sub-target type; the confidence of a sub-target can be understood as the possibility that the detected sub-target exists, and the higher the confidence, the greater the possibility that the sub-target exists.

It should also be noted that the actual position of the scene target determined from a single key frame image only represents the actual position of the scene target in that key frame image. If a scene target appears in multiple key frame images, the actual position of the scene target in the preset scene area can be determined according to its respective actual positions in the multiple key frame images in which it appears; for example, the average of the respective actual positions of the scene target in those key frame images can be taken as the actual position of the scene target in the preset scene area.

(3) detecting the at least one key frame image by the scene target detection model, and taking the number of key frame images in which the scene target appears consecutively as the stay duration. That is, the stay duration of the scene target in the preset scene area can be determined based on the number of key frame images in which the scene target appears consecutively; the greater the number of frames, the longer the stay duration of the scene target in the preset scene area.

In practical applications, one or more of the above (1) to (3) can be adopted as required, which is not limited here.
In step S104, when judging whether the urban management event has occurred, not only whether the scene target exists is considered, but also whether the actual position of the scene target is within the preset area range and/or whether the stay duration of the scene target reaches the target duration. It should be understood that whether the target exists and whether the position/duration requirements are met are two independent judgment factors.

On the one hand, judging independently whether the scene target exists and whether its actual position and/or stay duration meet the requirements allows the actual position and/or stay duration to be set according to the user's actual needs, making the configuration of the detection criteria for urban management events more flexible. It can be understood that for a certain type of urban management event, regardless of the preset scene area, the type of the corresponding scene target is most likely the same, but different preset scene areas may require the scene target to satisfy different position and/or duration conditions. For example, for the urban management event of drying clothes along the street, the scene target is always clothes + hangers; but for preset scene area A, the upper left of the picture is the area where drying along the street is not allowed, while for preset scene area B, it is the upper right of the picture, so the positions in the urban management events corresponding to preset scene areas A and B need to be set separately. As another example, city A considers parking for ten minutes in a no-parking area to be an illegal parking event, while city B considers parking for twenty minutes in a no-parking area to be one; the durations in the urban management events corresponding to preset scene areas A and B then need to be set separately. Therefore, taking the position condition and/or duration condition as judgment conditions independent of whether the scene target exists facilitates setting the criteria for the occurrence of an urban management event according to actual needs.

On the other hand, judging independently whether the scene target exists and whether the actual position and/or stay duration meet the requirements enables the scene target detection algorithm to be reused among certain urban management events. For example, the same scene target corresponds to urban management event A when it appears in the middle of a road, and to urban management event B when it appears on a pedestrian street. In this way, urban management events A and B can share the scene target detection algorithm, and only the position/duration conditions corresponding to each urban management event need to be set separately.

Furthermore, judging separately whether the scene target exists and whether the actual position and/or stay duration meet the requirements lets the scene target detection algorithm judge only whether the scene target exists, without concerning itself with the actual position and/or stay duration requirements. The scene target detection algorithm can therefore be trained and optimized specifically for the question of whether a scene target exists, which helps improve the accuracy of the scene detection algorithm.

In this embodiment, whether the target exists and whether the actual position/stay duration of the scene target meets the position/duration requirements are taken as two independent judgment factors for whether an urban management event has occurred, decoupling event detection into target detection and position/duration judgment. This facilitates flexibly setting the criteria for detecting whether an urban management event has occurred and helps improve the accuracy of the scene target detection algorithm, thereby improving the accuracy of event detection.
Optionally, acquiring multiple key frame images according to the video stream data in step S101 is specifically: decoding the video stream data at a preset decoding frame rate to obtain multiple key frame images. The preset decoding frame rate can be set according to the actual situation, so as to reduce the load on the computing device while meeting the computing requirements. For example, the preset decoding frame rate can be set to decode one key frame every 2 seconds to obtain one key frame image. The multiple key frame images are sorted by their acquisition time. Of course, it can be understood that the preset decoding frame rate can be set according to the specific scene. For example, on a busy street or road section, the preset decoding frame rate should be larger, while on a relatively quiet road section it should be smaller. As another example, if the scene target includes fast-moving targets such as fast-traveling vehicles, the preset decoding frame rate can be larger; if the scene target includes slow-moving targets such as stopped vehicles, it can be smaller.
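As an illustration of decoding at a preset rate, a minimal sketch using OpenCV is given below. The RTSP URL handling, the 2-second interval, and the fallback frame rate are illustrative assumptions, and the sketch samples decoded frames at a fixed interval rather than extracting true codec I-frames:

```python
import cv2

def sample_key_frames(rtsp_url: str, interval_seconds: float = 2.0):
    """Yield one decoded frame roughly every `interval_seconds` from a video stream."""
    capture = cv2.VideoCapture(rtsp_url)
    fps = capture.get(cv2.CAP_PROP_FPS) or 25.0  # fall back to a nominal frame rate
    step = max(1, int(fps * interval_seconds))   # frames to skip between samples
    index = 0
    while True:
        ok, frame = capture.read()
        if not ok:
            break
        if index % step == 0:
            yield frame  # treated as one "key frame image"
        index += 1
    capture.release()
```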
The video stream data of different preset scene areas can be configured with different event identifiers. The event identifier indicates the urban management event corresponding to the video stream data; it may be, for example, the identifier of an urban management event such as an illegal parking event, an illegal street vending event, or a garbage dumping event. It can be understood that different types of urban management events correspond to different event identifiers. Optionally, the event identifier corresponding to the video stream data can be determined based on the identification information of the camera from which the video stream data originates. Of course, it can be understood that video streams shot in the same preset scene area at different times may correspond to different urban management events. For example, the video stream data shot in the preset scene area between 12 a.m. and 5 p.m. corresponds to the illegal parking event, and the video stream data shot between 5 p.m. and 12 a.m. corresponds to the illegal street vending event. The same preset scene area may also correspond to multiple urban management events at the same time; for example, the video stream data in a certain preset scene area A is used not only for illegal parking judgment but also for judging garbage dumping events. It can be understood that the types of scene targets and the position/duration requirements corresponding to these two urban management events can be different.
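One possible way to represent the camera-to-event mapping described above as a plain data structure is sketched below; the camera identifier, time windows, and event identifiers are hypothetical examples mirroring the text, not values defined by this application:

```python
from datetime import time

# Hypothetical mapping: camera identifier -> list of (start, end, event identifier).
# A camera may carry several event identifiers, and different time windows may
# map to different urban management events, as in the example above.
CAMERA_EVENTS = {
    "camera_A": [
        (time(0, 0), time(17, 0), "illegal_parking"),
        (time(17, 0), time(23, 59, 59), "illegal_street_vending"),
    ],
}

def events_for(camera_id: str, now: time) -> list:
    """Return the event identifiers active for this camera at this moment."""
    return [
        event
        for start, end, event in CAMERA_EVENTS.get(camera_id, [])
        if start <= now <= end
    ]
```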
Optionally, step S102 may specifically be: detecting, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image. Exemplarily, the event may be an urban management event.

Detecting by the scene target detection model whether a scene target corresponding to the urban management event exists in the at least one key frame image may mean that whether the key frame image contains a scene target is directly determined by the output result of the scene target detection model, or that whether the key frame image contains a scene target is further determined on the basis of the output result of the scene target detection model. The scene target detection model is obtained by pre-training. The scene target detection model may correspond to one or more specified event types and be able to detect the corresponding event types. Exemplarily, the scene target detection model may be a general model for multiple kinds of urban management events, or a scene target detection model for a specific kind of urban management event.

For the case where the scene target detection model is a scene target detection model for a specific kind of urban management event, optionally, step S102 may include the following sub-steps: S1021, acquiring the scene target detection model corresponding to the event; S1022, detecting, by the scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.

In step S1021, different types of urban management events may correspond to different types of scene target detection models, and a database can be queried according to the urban management event identifier corresponding to the preset scene area to acquire the scene target recognition model corresponding to the urban management event. For example, the illegal parking event corresponds to model A, the illegal street vending event corresponds to model B, and the garbage dumping event corresponds to model C. Different scene target detection models are configured to detect different scene targets. For example, the scene target corresponding to the illegal parking event is a car, and the scene targets of the illegal street vending event are a tricycle, fruit, and a vendor. Optionally, even for the same type of urban management event, different scene target detection models can be used under different weather conditions; for example, an illegal parking event in dense fog corresponds to model A1, and an illegal parking event in clear, fog-free weather corresponds to model A2.

In step S1022, whether a scene target exists can be directly determined from the output result of the scene target model. For example, the output result of the scene target model includes the confidence of the scene target; if the confidence is higher than a preset confidence threshold, the scene target is considered to exist.

The scene target detection model corresponding to a certain urban management event can be trained by the following method: S11, annotating the scene targets in sample images to obtain annotation information, where the annotation information includes the classification information and position information of each sub-scene target constituting a scene target in the sample image, and the position information of the scene target in the sample image; S12, assigning, according to the annotation information, labels to the preset boxes (for anchor-box-based target detection models) or position points (for non-anchor-box-based position detection models) corresponding to the sample image, to obtain sample labels of the preset boxes or position points corresponding to the sample image; S13, performing an iteration step until the loss converges, where the iteration step includes: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; and updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model; S14, taking the scene target detection model obtained after the loss converges as the scene target detection model. In practical applications, a preset loss function can be used to determine the loss, and the parameters of the initial scene target detection model can be updated according to the loss and the back-propagation algorithm until the initial scene target detection model can output results that meet expectations, at which point the parameter updating stops (that is, training stops); the initial scene target detection model at this point serves as the trained scene target detection model.
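A schematic of the iterate-until-convergence procedure of S13 and S14, written as a generic PyTorch-style loop, is given below. Everything in it — the optimizer choice, learning rate, convergence tolerance, and data layout — is a placeholder assumption, since the application does not fix a network architecture or loss function:

```python
import torch

def train_until_convergence(model, data_loader, loss_fn, tol=1e-4, max_epochs=100):
    """Iterate (S13) until the loss converges, then return the trained model (S14)."""
    optimizer = torch.optim.SGD(model.parameters(), lr=1e-3)
    previous_loss = float("inf")
    for _ in range(max_epochs):
        epoch_loss = 0.0
        for images, sample_labels in data_loader:
            predictions = model(images)                 # initial detection result
            loss = loss_fn(predictions, sample_labels)  # loss from sample labels
            optimizer.zero_grad()
            loss.backward()                             # back-propagation
            optimizer.step()                            # parameter update
            epoch_loss += loss.item()
        if abs(previous_loss - epoch_loss) < tol:       # loss has converged
            break
        previous_loss = epoch_loss
    return model
```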
In step S11, a sample set is first acquired, and the sample set includes multiple sample images. Of course, the multiple sample images may include sample images that contain scene targets corresponding to the urban management event, and may also include sample images that do not contain scene targets corresponding to the urban management event.

When annotating the scene targets in the sample images, the annotation can be done manually, automatically by a preset algorithm capable of recognizing scene targets, or by a combination of manual and automatic annotation. Not only the position information of the scene target can be annotated, but also the classification information and position information of each sub-target constituting the scene target. The annotation of the position information of the scene target can be generated automatically, according to a preset rule, from the annotation information of the sub-targets constituting the scene target (the classification information and position information of the sub-targets). The preset rule is a rule corresponding to the scene target and characterizes the conditions that the sub-targets in the sample image need to satisfy for a scene target to be considered present in the sample image. For example, suppose four sub-targets A, B, C, D with classifications L1, L2, L3, L3 and their position information have been annotated in a sample image. For the first urban management event, the preset rule is: the sample image contains a sub-target of type L1 and a sub-target of type L3, and the position overlap rate of the L1-type and L3-type sub-targets is greater than 30%; only when the preset rule is satisfied is a scene target considered present in the sample image. For the second urban management event, the preset rule is: the sample image contains sub-targets of the three types L1, L2, and L3. When it is judged that a scene target exists, the position information of the scene target can be generated based on the position information of the sub-targets (for example, the geometric center of the sub-targets is taken as the center of the scene target). Thus, the annotation information of the scene target can be generated automatically, according to the preset rules, from the annotation information of the sub-targets such as their classification information and position information. On this basis, after the sub-targets of a sample image have been annotated once, annotation information corresponding to multiple scene targets can be generated from the sub-target annotation information, and this annotation information can be used to train the scene target detection models corresponding to multiple types of scene targets. For example, based on the annotation information of the sub-targets in a sample image, it can be determined that scene target A exists in the sample image and the position information of scene target A can be generated, and it can also be determined that scene target B does not exist in the sample image. The sample image can serve both as a sample image for training the scene target detection model of scene target A and as a sample image for training the scene target detection model of scene target B. In this way, annotating the sub-targets in a sample image only once allows the sample image to serve as a sample image for different scene target detection models, without re-annotating it each time it is used for a different scene target detection model.

For the case where the scene target detection model is a general model for multiple kinds of urban management events, the scene target detection model can be used to detect sub-targets, and then whether a scene target exists and the position of the scene target are determined according to the detection results. Optionally, the scene target detection model is a detection model capable of detecting multiple types of targets such as vehicles and objects.

Optionally, the scene targets of some urban management events include multiple sub-targets; when the classifications and position information of the multiple sub-targets conform to the preset rule, a scene target is considered present in the sample image.
Correspondingly, step S102 may include the following sub-steps S1023 to S1024:

S1023: detecting the at least one key frame image by the scene target detection model, to obtain at least one of the sub-target types, the number of sub-targets, and the position information of the sub-targets present in the at least one key frame image;

S1024: if the sub-target types, the number of sub-targets, and the position information of the sub-targets present in the at least one key frame image satisfy at least one of the following conditions, determining that a scene target corresponding to the event exists in the at least one key frame image: the number of sub-target types present in the at least one key frame image is greater than a preset type threshold; the number of sub-targets present in the at least one key frame image is greater than a preset number threshold; the number of sub-targets of a specific sub-target type present in the at least one key frame image is greater than a preset number threshold; the position information between the sub-targets present in the at least one key frame image satisfies a preset position condition.

In step S1023, the sub-target types, the number of sub-targets, and the position information of the sub-targets can be detected by the scene target detection model or derived from its detection results. The scene target detection model detects at least one of the confidence (used to characterize whether a sub-target exists), type, and position information of the sub-targets in the key frame image. The sub-target type refers to the type corresponding to each sub-target; for example, sub-target A is of the clothes type and sub-target B is of the vehicle type. The number of sub-targets may be the total number of sub-targets or the number of sub-targets of each type; for example, there are 0 clothes-type sub-targets, 3 vehicle-type sub-targets, and 5 sub-targets in total. The position information of a sub-target is the position where the sub-target is located, which can be represented by the upper-left and lower-right coordinates of a position box.

In step S1024, whether a scene target exists in the key frame image can be determined according to information such as the sub-target types, the number of sub-targets, and the position information of the sub-targets present in the at least one key frame image as detected by the scene target detection model. For example, when a sub-target of a first type and a sub-target of a second type exist and the position overlap rate of the two is greater than an overlap rate threshold, a scene target is considered to exist. Or, when at least two sub-targets of the first type exist and at most 0 (i.e., no) sub-targets of the second type exist, a scene target is considered to exist. For example, for an urban management event of the illegal street vending type, to identify it as an illegal street vending event, the sub-targets that the scene target should include need to contain at least: a tricycle or other open vehicle, and goods placed on the tricycle or other open vehicle (for example, common small commodities such as fruit, snacks, toys, or books); of course, a vendor and a certain number of onlookers or buyers may also be included. In the actual image acquisition process, for such a scene target with multiple sub-targets, individual sub-targets may be occluded; therefore, it is only necessary to recognize a preset number or preset types of sub-targets among the multiple sub-targets, or for sub-targets of a specific type to reach a preset number, to judge that a scene target corresponding to the urban management event of the preset scene area exists in the key frame image.
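The counting conditions of S1024 can be checked with simple tallies; the following sketch illustrates this under assumed thresholds and an assumed detection tuple layout (the preset position condition is omitted for brevity):

```python
from collections import Counter
from typing import Dict, List, Optional, Tuple

def scene_target_exists(
    sub_detections: List[Tuple[str, float]],           # (sub_target_type, confidence)
    type_threshold: int = 2,                           # preset type threshold
    count_threshold: int = 3,                          # preset number threshold
    per_type_thresholds: Optional[Dict[str, int]] = None,
    confidence_threshold: float = 0.5,
) -> bool:
    """Apply the counting conditions of S1024 to one frame's sub-target detections."""
    per_type_thresholds = per_type_thresholds or {}
    counts = Counter(t for t, c in sub_detections if c > confidence_threshold)
    if len(counts) > type_threshold:            # more sub-target types than the threshold
        return True
    if sum(counts.values()) > count_threshold:  # more sub-targets than the threshold
        return True
    # a specific sub-target type exceeds its own preset number threshold
    return any(counts.get(t, 0) > n for t, n in per_type_thresholds.items())
```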
Optionally, whether the actual position is within the preset area range of the preset scene area is judged in the following manner (S1031 and S1032): S1031, judging whether the overlap degree between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold, where the overlap degree can be judged according to the IoU (intersection over union) between the actual area and the preset area range; S1032, if it is greater than the preset threshold, judging that the actual position is within the preset area range of the preset scene area. In step S1031, the preset threshold can be set based on the specific urban management event.
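The overlap degree of S1031 can be computed as the standard intersection over union of two axis-aligned rectangles; a minimal sketch:

```python
from typing import Tuple

Box = Tuple[float, float, float, float]  # (x1, y1, x2, y2)

def iou(box_a: Box, box_b: Box) -> float:
    """Intersection-over-union of two axis-aligned rectangles."""
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])
    intersection = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union if union > 0 else 0.0
```

The check of S1032 then amounts to `iou(actual_area, preset_area_range) > preset_threshold`.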
The number of key frame images in which the scene target corresponding to the urban management event appears consecutively is taken as the stay duration. Whether the stay duration is greater than the preconfigured target duration is judged in the following manner: S1033, if the stay duration is greater than the preset number of frames, determining that the stay duration is greater than the preconfigured target duration. For example, if the scene target is detected in N consecutive key frame images and the N key frames are decoded at a rate of one frame every 2 seconds, the stay duration is 2N seconds.
Optionally, step S104 may specifically include: if the actual position is within the preset area range of the preset scene area and/or the stay duration is greater than the preconfigured target duration, judging that the urban management event has occurred.

The preset area range and the target duration are preset based on the kind of urban management event, and the preset area ranges and target durations corresponding to different urban management events can be different. In this embodiment, the urban management event may be judged to occur when the actual position is within the preset area range; or when the stay duration is greater than the preconfigured target duration; or only when the actual position and the stay duration satisfy the corresponding conditions at the same time. For example, an urban management event such as illegal parking must satisfy both the position requirement and the duration requirement, while a wrong-way driving event only needs to satisfy the position requirement. For different urban management events, the stay time of the corresponding scene target in the preset scene area is different. For example, for an illegal parking event, the target time is generally set relatively short, for example, 3 seconds or 5 seconds; for an illegal street vending event, the target time is set longer, for example, 30 seconds or 1 minute, or of course another time. When the scene target existence condition and the actual position and stay duration conditions are satisfied at the same time, the urban management event is considered to exist.
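Putting the pieces together, the judgment of S104 might be sketched as follows; the flags and names are illustrative assumptions, and the stay duration would be obtained as described above (for example, consecutive key frames × 2 seconds per frame):

```python
def event_occurred(
    in_preset_range: bool,    # result of the S1031/S1032 overlap check
    stay_seconds: float,      # e.g. consecutive_key_frames * seconds_per_key_frame
    target_seconds: float,
    need_position: bool = True,
    need_duration: bool = True,
) -> bool:
    """Combine the position and duration factors of S104 into a final judgment.

    Each factor can be switched off for events that do not use it; for example,
    a wrong-way driving event checks only the position.
    """
    position_ok = in_preset_range or not need_position
    duration_ok = stay_seconds > target_seconds or not need_duration
    return position_ok and duration_ok
```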
It can be seen from the above that the event detection method provided by the embodiments of the present application acquires video stream data of a preset scene area and acquires at least one key frame image according to the video stream data; detects whether a scene target corresponding to an event exists in the at least one key frame image; determines the actual position and/or stay duration of the scene target in the preset scene area; and judges, according to whether the scene target exists and according to the actual position and/or stay duration, whether the event corresponding to the preset scene area has occurred, thereby realizing event detection. Since whether the event has occurred is judged by combining the recognition of the scene target with the spatiotemporal information of the scene target in the preset scene area, the misjudgment rate is reduced and detection accuracy can be improved.
Referring to FIG. 2, FIG. 2 is a schematic structural diagram of an event detection apparatus in an embodiment of the present application. The event detection apparatus can also be implemented by the aforementioned electronic device, and includes: a first acquisition module 201, a detection module 202, a second acquisition module 203, and a judgment module 204.

The first acquisition module 201 is configured to acquire video stream data of a preset scene area and acquire at least one key frame image according to the video stream data. The video stream data (RTSP, Real Time Streaming Protocol) is real-time video stream data accessed from a camera in a certain monitored area of a city. In the embodiments of the present application, key frame decoding can be used to acquire the multiple key frame images.

Optionally, when acquiring multiple key frame images according to the video stream data, the first acquisition module 201 is specifically configured to: decode the video stream data at a preset decoding frame rate to obtain multiple key frame images. The preset decoding frame rate can be set according to the actual situation; for example, it can be set to decode one key frame every 2 seconds to obtain one key frame image. The multiple key frame images are sorted in sequence along the time axis. Of course, it can be understood that the preset decoding frame rate can be set according to the specific scene. For example, on a busy street or road section, the preset decoding frame rate should be larger, while on a relatively quiet road section it should be smaller.

The video stream data of different preset scene areas are configured with different event identifiers. The event identifier indicates the urban management event corresponding to the video stream data; it may be, for example, the identifier of an urban management event such as an illegal parking event, an illegal street vending event, or a garbage dumping event. It can be understood that different urban management events correspond to different event identifiers. Optionally, the identification information of the camera from which the video stream data originates can be acquired, and the corresponding event identifier can then be obtained based on the identification information. Of course, it can be understood that the urban management event for the same preset scene area can change. For example, between early morning and 5 p.m., the urban management event corresponding to the preset scene area is the illegal parking event, and between 5 p.m. and 12 a.m., it is the illegal street vending event. Of course, it is not limited to this.

Optionally, the same preset scene area can also correspond to multiple urban management events at the same time; for example, the video stream data in a certain preset scene area A is used not only for illegal parking judgment but can also be used for judging garbage dumping events.

The detection module 202 is configured to detect whether a scene target corresponding to the event exists in the at least one key frame image. Since the urban management events corresponding to different preset scene areas are different, to detect whether a scene target corresponding to the preset scene area exists in a key frame image, the urban management event corresponding to the preset scene area needs to be confirmed first, and then the corresponding scene target is determined based on the urban management event. When recognizing the scene target, either a conventional image recognition method or a pre-trained target detection model can be used for detection.

Optionally, the detection module 202 is configured to: detect, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.

The scene target detection model is obtained by pre-training; it may be a general model for all urban management events, or a specially trained scene target detection model for a single kind of urban management event.

Optionally, the detection module 202 is configured to: acquire a scene target detection model corresponding to the event; and detect, by the scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image. Different types of urban management events can correspond to different types of scene target detection models. For example, the illegal parking event corresponds to model A, the illegal street vending event corresponds to model B, and the garbage dumping event corresponds to model C. For example, the scene target of the illegal parking event is a car, and the scene targets of the illegal street vending event are a tricycle, fruit, and a vendor. Therefore, a database can be queried according to the event identifier to acquire the corresponding scene target, and a suitable scene target recognition model can then be selected based on the scene target. Optionally, even for the same type of urban management event, different scene target detection models can be used under different weather conditions; for example, an illegal parking event in dense fog corresponds to model A1, and an illegal parking event in clear, fog-free weather corresponds to model A2. The position of each scene target can be given directly by the scene target model.

Optionally, the scene target includes multiple sub-targets; the detection module 202 is configured to: detect the at least one key frame image by the scene target detection model, to obtain at least one of the sub-target types, the number of sub-targets, and the position information of the sub-targets present in the at least one key frame image; if the sub-target types and/or the number of sub-targets present in the at least one key frame image satisfy at least one of the following conditions, determine that a scene target corresponding to the event exists in the at least one key frame image: the number of sub-target types present in the at least one key frame image is greater than a preset type threshold; the number of sub-targets present in the at least one key frame image is greater than a preset number threshold; the number of targets of a specific sub-target type present in the at least one key frame image is greater than a preset number threshold; the position information between the sub-targets present in the at least one key frame image satisfies a preset position condition.

Optionally, the apparatus further includes a model training module, configured to: annotate the scene targets in sample images to obtain annotation information, where the annotation information includes the classification information and position information of each sub-scene target constituting a scene target in the sample image, and the position information of the scene target in the sample image; assign, according to the annotation information, labels to the preset boxes or position points corresponding to the sample image, to obtain sample labels of the preset boxes or position points corresponding to the sample image; and perform an iteration step until the loss converges, where the iteration step includes: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; and updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model; and take the scene target detection model obtained after the loss converges as the scene target detection model.

The second acquisition module 203 is configured to: if the scene target exists, determine the actual position and/or the stay duration of the scene target in the preset scene area. For the case where the scene target is a whole, the position of the scene target can be output directly by the target detection model; for the case where the scene target includes multiple sub-targets, the position of the scene target is calculated based on the positions of the sub-targets; for example, the geometric center of the areas where the multiple sub-targets are located can be taken as the position of the scene target.

Optionally, the second acquisition module 203 is configured to perform at least one of the following:

detect the at least one key frame image by the scene target detection model, and determine the actual position of the scene target in the preset scene area according to the position of the scene target contained in the output result of the detection;

detect the at least one key frame image by the scene target detection model, and determine the actual position of the scene target in the preset scene area according to the categories, confidences, and position information of the sub-targets contained in the output result of the detection;

detect the at least one key frame image by the scene target detection model, and take the number of key frame images in which the scene target appears consecutively as the stay duration.

Whether the actual position is within the preset area range of the preset scene area is judged in the following manner: judging whether the overlap degree between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold; if it is greater than the preset threshold, judging that the actual position is within the preset area range of the preset scene area. The preset threshold can be set based on the specific urban management event.

The number of key frame images in which the scene target corresponding to the urban management event appears consecutively is taken as the stay duration. Whether the stay duration is greater than the preconfigured target duration is judged in the following manner: if the stay duration is greater than the preset number of frames, determining that the stay duration is greater than the preconfigured target duration. The conversion between duration and number of frames is: duration T = number of frames N × duration per frame t.

The judgment module 204 is configured to judge, according to the actual position and/or the stay duration, whether the urban management event corresponding to the preset scene area has occurred. The judgment basis is to calculate whether the actual position of the scene target is within the preset range of the preset scene area, and to judge the relationship between the time the scene target stays in the preset scene area and the target duration. The judgment module 204 is configured to judge that the urban management event has occurred if the actual position is within the preset area range of the preset scene area and/or the stay duration is greater than the preconfigured target duration.

Optionally, the judgment module is configured to: judge whether the overlap degree between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold; if it is greater than the preset threshold, judge that the actual position is within the preset area range of the preset scene area; take the number of key frame images in which the scene target appears consecutively as the stay duration; and if the stay duration is greater than the number of frames, determine that the stay duration is greater than the preconfigured target duration. The above specifically sets out the manner of judging whether the actual position is within the preset area range of the preset scene area and the manner of judging whether the stay duration is greater than the preconfigured target duration.

The preset area range and the target duration are preset based on the kind of urban management event. Different urban management events correspond to different preset area ranges and different target durations. In this embodiment, the urban management event may be judged to occur when the actual position is within the preset area range; or when the stay duration is greater than the preconfigured target duration; or only when the actual position and the stay duration satisfy the corresponding conditions at the same time. For example, an urban management event such as illegal parking must satisfy both the position requirement and the duration requirement, while a wrong-way driving event only needs to satisfy the position requirement. For different urban management events, the stay time of the corresponding scene target in the preset scene area is different. For example, for an illegal parking event, the target time is generally set relatively short, for example, 3 seconds or 5 seconds; for an illegal street vending event, it is set longer, for example, 30 seconds or 1 minute, or of course another time.

It can be seen from the above that the event detection apparatus provided by the embodiments of the present application acquires video stream data of a preset scene area and acquires at least one key frame image according to the video stream data; detects whether a scene target corresponding to an event exists in the at least one key frame image; determines the actual position and/or stay duration of the scene target in the preset scene area; and judges, according to whether the scene target exists and according to the actual position and/or stay duration, whether the event corresponding to the preset scene area has occurred, thereby realizing event detection. Since whether the event has occurred is judged by combining the recognition of the scene target with the spatiotemporal information of the scene target in the preset scene area, the misjudgment rate is reduced and detection accuracy can be improved.
Referring to FIG. 3, FIG. 3 is a schematic structural diagram of an electronic device provided by an embodiment of the present application. An embodiment of the present application provides an electronic device 3, including a processor 301 and a memory 302. The processor 301 and the memory 302 are interconnected and communicate with each other through a communication bus 303 and/or other forms of connection mechanisms (not shown). The memory 302 stores a computer program executable by the processor 301; when the computing device runs, the processor 301 executes the computer program to perform the method in any optional implementation manner of the foregoing embodiments.

An embodiment of the present application provides a storage medium; when the computer program stored on it is executed by a processor, the method in any optional implementation manner of the foregoing embodiments is executed. The storage medium can be implemented by any type of volatile or non-volatile storage device or their combination, such as a static random access memory (SRAM), an electrically erasable programmable read-only memory (EEPROM), an erasable programmable read-only memory (EPROM), a programmable read-only memory (PROM), a read-only memory (ROM), a magnetic memory, a flash memory, a magnetic disk, or an optical disk.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus and method can be implemented in other manners. The apparatus embodiments described above are only illustrative; for example, the division of the units is only a logical function division, and there may be other division manners in actual implementation; as another example, multiple units or components may be combined or integrated into another system, or some features may be ignored or not executed. In addition, the shown or discussed mutual coupling, direct coupling, or communication connection may be through some communication interfaces, or an indirect coupling or communication connection of apparatuses or units, and may be in electrical, mechanical, or other forms.

In addition, the units described as separate components may or may not be physically separated, and the components shown as units may or may not be physical units; that is, they may be located in one place or distributed to multiple network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of this embodiment.

Furthermore, the functional modules in the embodiments of the present application can be integrated together to form an independent part, each module can exist independently, or two or more modules can be integrated to form an independent part.

In this document, relational terms such as first and second are only used to distinguish one entity or operation from another entity or operation, and do not necessarily require or imply any such actual relationship or order between these entities or operations.

The above are only embodiments of the present application and are not intended to limit the scope of protection of the present application. For those skilled in the art, the present application may have various modifications and changes. Any modification, equivalent replacement, improvement, and the like made within the spirit and principles of the present application shall be included in the scope of protection of the present application.
Industrial applicability

In the technical solution proposed by the present application, whether an event has occurred is judged by combining the recognition of a scene target with the spatiotemporal information of the scene target in a preset scene area, which reduces the misjudgment rate and can improve the accuracy of detecting whether an event has occurred.

Claims (15)

  1. An event detection method, characterized by comprising:
    acquiring video stream data of a preset scene area, and acquiring at least one key frame image according to the video stream data;
    detecting whether a scene target corresponding to the event exists in the at least one key frame image;
    determining an actual position and/or a stay duration of the scene target in the preset scene area;
    judging, according to whether the scene target exists and according to the actual position and/or the stay duration, whether the event corresponding to the preset scene area has occurred.
  2. The event detection method according to claim 1, characterized in that detecting whether a scene target corresponding to the event exists in the at least one key frame image comprises:
    detecting, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.
  3. The event detection method according to claim 2, characterized in that the scene target detection model is trained by the following method:
    annotating the scene targets in sample images to obtain annotation information, wherein the annotation information comprises the classification information and position information of each sub-scene target constituting a scene target in the sample image, and the position information of the scene target in the sample image;
    assigning, according to the annotation information, labels to the preset boxes or position points corresponding to the sample image, to obtain sample labels of the preset boxes or position points corresponding to the sample image;
    performing an iteration step until the loss converges, wherein the iteration step comprises: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; and updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model;
    taking the scene target detection model obtained after the loss converges as the scene target detection model.
  4. The event detection method according to any one of claims 2 to 3, characterized in that the scene target comprises multiple sub-targets;
    detecting, by the scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image comprises:
    detecting the at least one key frame image by the scene target detection model, to obtain at least one of the sub-target types, the number of sub-targets, and the position information of the sub-targets present in the at least one key frame image;
    if the sub-target types and/or the number of sub-targets present in the at least one key frame image satisfy at least one of the following conditions, determining that a scene target corresponding to the event exists in the at least one key frame image:
    the number of sub-target types present in the at least one key frame image is greater than a preset type threshold;
    the number of sub-targets present in the at least one key frame image is greater than a preset number threshold;
    the number of targets of a specific sub-target type present in the at least one key frame image is greater than a preset number threshold;
    the position information between the sub-targets present in the at least one key frame image satisfies a preset position condition.
  5. The event detection method according to any one of claims 2 to 4, characterized in that determining the actual position and/or the stay duration of the scene target in the preset scene area comprises at least one of the following:
    detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the position of the scene target contained in the output result of the detection;
    detecting the at least one key frame image by the scene target detection model, and determining the actual position of the scene target in the preset scene area according to the categories, confidences, and position information of the sub-targets contained in the output result of the detection;
    detecting the at least one key frame image by the scene target detection model, and taking the number of key frame images in which the scene target appears consecutively as the stay duration.
  6. The event detection method according to any one of claims 1 to 5, characterized in that judging, according to the actual position and/or the stay duration, whether the event corresponding to the preset scene area has occurred comprises:
    if the actual position is within a preset area range of the preset scene area and/or the stay duration is greater than a preconfigured target duration, judging that the event has occurred.
  7. The event detection method according to claim 6, characterized in that
    whether the actual position is within the preset area range of the preset scene area is judged in the following manner:
    judging whether the overlap degree between the actual area corresponding to the actual position and the preset area range of the preset scene area is greater than a preset threshold;
    if it is greater than the preset threshold, judging that the actual position is within the preset area range of the preset scene area.
  8. An event detection apparatus, characterized by comprising:
    a first acquisition module, configured to acquire video stream data of a preset scene area and acquire at least one key frame image according to the video stream data;
    a detection module, configured to detect whether a scene target corresponding to the event exists in the at least one key frame image;
    a second acquisition module, configured to, if the scene target exists, determine the actual position and/or the stay duration of the scene target in the preset scene area;
    a judgment module, configured to judge, according to the actual position and/or the stay duration, whether the event corresponding to the preset scene area has occurred.
  9. The event detection apparatus according to claim 8, characterized in that the detection module is configured to:
    detect, by a scene target detection model, whether a scene target corresponding to the event exists in the at least one key frame image.
  10. The event detection apparatus according to claim 9, characterized in that the apparatus further comprises a model training module, configured to:
    annotate the scene targets in sample images to obtain annotation information, wherein the annotation information comprises the classification information and position information of each sub-scene target constituting a scene target in the sample image, and the position information of the scene target in the sample image;
    assign, according to the annotation information, labels to the preset boxes or position points corresponding to the sample image, to obtain sample labels of the preset boxes or position points corresponding to the sample image;
    perform an iteration step until the loss converges, wherein the iteration step comprises: inputting the sample image into an initial scene target detection model to obtain an initial detection result; determining a loss value according to the initial detection result, the annotation information, and the sample labels; and updating the parameters of the initial scene target detection model according to the loss value to obtain an updated initial scene target detection model;
    take the scene target detection model obtained after the loss converges as the scene target detection model.
  11. The event detection apparatus according to any one of claims 9 to 10, characterized in that the scene target comprises multiple sub-targets, and the detection module is configured to:
    detect the at least one key frame image by the scene target detection model, to obtain at least one of the sub-target types, the number of sub-targets, and the position information of the sub-targets present in the at least one key frame image;
    if the sub-target types and/or the number of sub-targets present in the at least one key frame image satisfy at least one of the following conditions, determine that a scene target corresponding to the event exists in the at least one key frame image:
    the number of sub-target types present in the at least one key frame image is greater than a preset type threshold;
    the number of sub-targets present in the at least one key frame image is greater than a preset number threshold;
    the number of targets of a specific sub-target type present in the at least one key frame image is greater than a preset number threshold;
    the position information between the sub-targets present in the at least one key frame image satisfies a preset position condition.
  12. The event detection apparatus according to any one of claims 9 to 11, characterized in that the second acquisition module is configured to perform at least one of the following:
    detect the at least one key frame image by the scene target detection model, and determine the actual position of the scene target in the preset scene area according to the position of the scene target contained in the output result of the detection;
    detect the at least one key frame image by the scene target detection model, and determine the actual position of the scene target in the preset scene area according to the categories, confidences, and position information of the sub-targets contained in the output result of the detection;
    detect the at least one key frame image by the scene target detection model, and take the number of key frame images in which the scene target appears consecutively as the stay duration.
  13. The event detection apparatus according to any one of claims 8 to 12, characterized in that the judgment module is configured to:
    if the actual position is within the preset area range of the preset scene area and/or the stay duration is greater than the preconfigured target duration, judge that the event has occurred.
  14. An electronic device, characterized by comprising a processor and a memory, wherein the memory stores computer-readable instructions, and when the computer-readable instructions are executed by the processor, the method according to any one of claims 1 to 7 is run.
  15. A storage medium on which a computer program is stored, characterized in that when the computer program is executed by a processor, the method according to any one of claims 1 to 7 is run.
PCT/CN2021/103735 2020-11-23 2021-06-30 Event detection method and apparatus, electronic device, and storage medium WO2022105243A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN202011325205.9A CN112507813A (zh) 2020-11-23 2020-11-23 Event detection method and apparatus, electronic device, and storage medium
CN202011325205.9 2020-11-23

Publications (1)

Publication Number Publication Date
WO2022105243A1 true WO2022105243A1 (zh) 2022-05-27

Family

ID=74958202

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2021/103735 WO2022105243A1 (zh) 2020-11-23 2021-06-30 事件检测方法、装置、电子设备及存储介质

Country Status (2)

Country Link
CN (1) CN112507813A (zh)
WO (1) WO2022105243A1 (zh)


Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112507813A (zh) * 2020-11-23 2021-03-16 北京旷视科技有限公司 事件检测方法、装置、电子设备及存储介质
CN113052048A (zh) * 2021-03-18 2021-06-29 北京百度网讯科技有限公司 交通事件检测方法、装置、路侧设备以及云控平台
CN113052047B (zh) * 2021-03-18 2023-12-29 阿波罗智联(北京)科技有限公司 交通事件的检测方法、路侧设备、云控平台及系统
CN113139434A (zh) * 2021-03-29 2021-07-20 北京旷视科技有限公司 城市管理事件处理方法、装置、电子设备及可读存储介质
CN113205037B (zh) * 2021-04-28 2024-01-26 北京百度网讯科技有限公司 事件检测的方法、装置、电子设备以及可读存储介质
CN113095301B (zh) * 2021-05-21 2021-08-31 南京甄视智能科技有限公司 占道经营监测方法、系统与服务器
CN113344064A (zh) * 2021-05-31 2021-09-03 北京百度网讯科技有限公司 事件处理方法和装置
CN113469021A (zh) * 2021-06-29 2021-10-01 深圳市商汤科技有限公司 视频处理及装置、电子设备及计算机可读存储介质
CN113688717A (zh) * 2021-08-20 2021-11-23 云往(上海)智能科技有限公司 图像识别方法、装置以及电子设备
CN113688958A (zh) * 2021-10-26 2021-11-23 城云科技(中国)有限公司 适用于目标识别数据的过滤方法、装置及系统
CN116912758A (zh) * 2023-06-16 2023-10-20 北京安信创业信息科技发展有限公司 一种用于安全隐患监测的目标图像识别方法和装置


Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105355053B (zh) * 2015-11-04 2018-08-14 公安部交通管理科学研究所 路边违停车辆自动检测系统
CN110223511A (zh) * 2019-04-29 2019-09-10 合刃科技(武汉)有限公司 一种汽车路边违停智能监测方法及系统
CN110867083B (zh) * 2019-11-20 2021-06-01 浙江宇视科技有限公司 车辆监控方法、装置、服务器和机器可读存储介质

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109784254A (zh) * 2019-01-07 2019-05-21 中兴飞流信息科技有限公司 一种车辆违规事件检测的方法、装置和电子设备
CN109800696A (zh) * 2019-01-09 2019-05-24 深圳中兴网信科技有限公司 目标车辆的监控方法、系统和计算机可读存储介质
CN111126252A (zh) * 2019-12-20 2020-05-08 浙江大华技术股份有限公司 摆摊行为检测方法以及相关装置
CN112507813A (zh) * 2020-11-23 2021-03-16 北京旷视科技有限公司 事件检测方法、装置、电子设备及存储介质

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115186881A (zh) * 2022-06-27 2022-10-14 红豆电信有限公司 一种基于大数据的城市安全预测管理方法及系统
CN115359657A (zh) * 2022-08-16 2022-11-18 青岛海信网络科技股份有限公司 一种交通管理方法、装置、设备及介质
CN115359657B (zh) * 2022-08-16 2023-10-13 青岛海信网络科技股份有限公司 一种交通管理方法、装置、设备及介质
CN115272984A (zh) * 2022-09-29 2022-11-01 江西电信信息产业有限公司 占道经营检测方法、系统、计算机及可读存储介质
CN115797857A (zh) * 2022-11-07 2023-03-14 北京声迅电子股份有限公司 一种出行事件确定方法、安检方法以及事件管理方法
CN115797857B (zh) * 2022-11-07 2023-08-01 北京声迅电子股份有限公司 一种出行事件确定方法、安检方法以及事件管理方法
CN115858049A (zh) * 2023-03-04 2023-03-28 北京神州光大科技有限公司 Rpa流程组件化编排方法、装置、设备和介质
CN115858049B (zh) * 2023-03-04 2023-05-12 北京神州光大科技有限公司 Rpa流程组件化编排方法、装置、设备和介质
CN116451588A (zh) * 2023-04-25 2023-07-18 中航信移动科技有限公司 基于目标物预测轨迹确定提示信息的方法、介质及设备
CN116451588B (zh) * 2023-04-25 2024-02-27 中航信移动科技有限公司 基于目标物预测轨迹确定提示信息的方法、介质及设备

Also Published As

Publication number Publication date
CN112507813A (zh) 2021-03-16

Similar Documents

Publication Publication Date Title
WO2022105243A1 (zh) Event detection method and apparatus, electronic device, and storage medium
CN108053427B (zh) 一种基于KCF与Kalman的改进型多目标跟踪方法、系统及装置
US10706330B2 (en) Methods and systems for accurately recognizing vehicle license plates
US9460361B2 (en) Foreground analysis based on tracking information
US11455805B2 (en) Method and apparatus for detecting parking space usage condition, electronic device, and storage medium
WO2018223955A1 (zh) 目标监控方法、目标监控装置、摄像机及计算机可读介质
CN108694399B (zh) 车牌识别方法、装置及系统
WO2016004673A1 (zh) 一种基于云服务的智能目标识别装置、系统及方法
Zabłocki et al. Intelligent video surveillance systems for public spaces–a survey
CN109360362A (zh) 一种铁路视频监控识别方法、系统和计算机可读介质
CN109377694B (zh) 社区车辆的监控方法及系统
EP4035070B1 (en) Method and server for facilitating improved training of a supervised machine learning process
US10445885B1 (en) Methods and systems for tracking objects in videos and images using a cost matrix
US20180098034A1 (en) Method of Data Exchange between IP Video Camera and Server
CN110490171B (zh) 一种危险姿态识别方法、装置、计算机设备及存储介质
CN112150514A (zh) 视频的行人轨迹追踪方法、装置、设备及存储介质
Yang et al. Traffic flow estimation and vehicle‐type classification using vision‐based spatial–temporal profile analysis
US20230394686A1 (en) Object Identification
Winter et al. Computational intelligence for analysis of traffic data
CN112686136B (zh) 一种对象检测方法、装置及系统
Seema et al. Deep learning models for analysis of traffic and crowd management from surveillance videos
US20220180102A1 (en) Reducing false negatives and finding new classes in object detectors
TWI706381B (zh) 影像物件偵測方法及系統
CN110581979B (zh) 一种图像采集系统、方法及装置
CN112597924A (zh) 电动自行车轨迹追踪方法、摄像机装置和服务器

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21893395

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

32PN Ep: public notification in the ep bulletin as address of the adressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 25.09.2023)