WO2021100919A1 - Method, program, and system for determining whether abnormal behavior occurs, based on a behavior sequence - Google Patents

Method, program, and system for determining whether abnormal behavior occurs, based on a behavior sequence

Info

Publication number
WO2021100919A1
WO2021100919A1 · PCT/KR2019/016068
Authority
WO
WIPO (PCT)
Prior art keywords
unit
behavior
actions
sequence
abnormal behavior
Application number
PCT/KR2019/016068
Other languages
English (en)
Korean (ko)
Inventor
홍석환
Original Assignee
주식회사 두다지
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by 주식회사 두다지
Publication of WO2021100919A1


Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems
    • G08B13/194 - Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength, using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 - Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/22 - Matching criteria, e.g. proximity measures
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 - Pattern recognition
    • G06F18/20 - Analysing
    • G06F18/24 - Classification techniques
    • G06F18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06F18/2413 - Classification techniques relating to the classification model, based on distances to training or reference patterns
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2210/00 - Indexing scheme for image generation or computer graphics
    • G06T2210/12 - Bounding box

Definitions

  • The present invention relates to a method, program, and system for determining abnormal behavior based on an action sequence, and more particularly to a method, program, and system for determining whether an object's behavior corresponds to abnormal behavior by recognizing that behavior and analyzing it on the basis of an action sequence.
  • CCTV (closed-circuit television) is a system that transmits video to specific recipients for a specific purpose.
  • Depending on the purpose, CCTV transmits video only to specific recipients over wired or dedicated wireless transmission paths, so that the general public cannot receive it arbitrarily.
  • CCTV is used for a variety of purposes, including industrial, educational, and medical applications, traffic-control surveillance, disaster prevention, and in-house video transmission.
  • CCTV typically consists of a camera and a digital video recorder (DVR) that records the video captured by the camera.
  • Beyond simply capturing and transmitting video, CCTV has come to be combined with technologies that recognize and track objects through system integration.
  • Artificial intelligence is a field of technology that realizes human learning ability, reasoning ability, perception, and natural-language understanding through computer programs, enabling computers to imitate intelligent human behavior.
  • Machine learning is a field of artificial intelligence that evolved from the study of pattern recognition and computational learning theory.
  • Machine learning studies and builds systems that learn from empirical data, make predictions, and improve their own performance, together with the algorithms for doing so.
  • Deep learning is a subfield of machine learning; it differs from general machine learning in that it can learn on its own and predict future situations even without an explicit human teaching process.
  • One problem to be solved by the present invention is to provide a method, program, and system for determining abnormal behavior based on an action sequence that can recognize the behavior of each object while tracking it, even when a plurality of objects are present.
  • Another problem to be solved by the present invention is to provide a method, program, and system that determine abnormal behavior based not only on the image but also on the action sequence, in order to improve recognition rate and accuracy.
  • Another problem to be solved by the present invention is to provide a method, program, and system that can understand an object's intention by analyzing the behavior of the recognized object.
  • Another problem to be solved by the present invention is to provide a system with improved processing speed by using storage space like a cache.
  • A method of determining abnormal behavior based on an action sequence according to one aspect of the present invention, for solving the above problems, is performed by a computer and comprises: receiving image data; recognizing one or more objects from the image data; recognizing a plurality of unit actions for each recognized object; classifying each of the plurality of unit actions as a normal or abnormal action; generating sequence data by arranging the plurality of unit actions in order; and determining whether abnormal behavior has occurred based on the proportion of unit actions classified as abnormal in the sequence data.
  • The method may further include determining that abnormal behavior has occurred if the proportion of the sequence occupied by unit actions classified as abnormal is greater than or equal to a preset value.
  • The method may further include classifying an object category, such as background or person, for each recognized object, and the step of recognizing the plurality of unit actions may recognize unit actions only for objects whose category is person.
  • The step of recognizing the object may include dividing an image frame into grids of equal size, forming one or more bounding boxes around the objects included in the frame, and extracting the grid cell containing the center point of each bounding box.
  • The step of recognizing the object may include analyzing the similarity between an object recognized in a first image frame and an object recognized in a second image frame.
  • The step of recognizing the behavior may include extracting an image included in an image frame, extracting a motion vector from that frame and the frames adjacent to it, and recognizing the behavior based on the image and the motion vector.
  • The step of extracting the motion vector may include applying optical flow to the image included in an image frame and vectorizing the object's behavior from the image to which optical flow has been applied.
  • A program for determining abnormal behavior based on an action sequence according to another aspect of the present invention is stored in combination with hardware in order to execute the above-described method.
  • A system for determining abnormal behavior based on an action sequence, for solving the above problems, may include: an input unit that receives image data; an object recognition unit that recognizes one or more objects from the image data; a behavior recognition unit that recognizes a plurality of unit actions for each recognized object; a classification unit that classifies each unit action as a normal or abnormal action; a generation unit that generates sequence data by arranging the plurality of unit actions in order; and a determination unit that determines whether abnormal behavior has occurred based on the proportion of unit actions classified as abnormal in the sequence data.
  • According to the present invention, even when a plurality of objects appear in an image, each object can be distinguished, recognized, and tracked, and behavior can be recognized per object to determine whether abnormal behavior has occurred.
  • An abnormal symptom is determined not merely by whether an object performed a specific abnormal action, but by analyzing the object's behavior as a time series, so that the intention behind a specific unit action can be understood in context and abnormal behavior can be judged more precisely.
  • By using storage space as a cache, it is not necessary to transmit and receive all required data, such as image data, over the network every time, which improves processing speed.
  • FIG. 1 is a block diagram of a system for determining whether an abnormal behavior is based on an action sequence according to an embodiment of the present invention.
  • FIG. 2 is a flowchart schematically illustrating a method of determining whether an abnormal behavior is based on an action sequence according to an embodiment of the present invention.
  • FIG. 3 is a detailed flowchart illustrating a step of recognizing an object according to an embodiment of the present invention.
  • FIG. 4 is an exemplary view showing a state of recognizing a plurality of objects according to an embodiment of the present invention.
  • FIG. 5 is a flowchart in which the step of recognizing an object further includes analyzing the similarity of the objects, according to an embodiment of the present invention.
  • FIG. 6 is a detailed flowchart illustrating a step of recognizing a unit action according to an embodiment of the present invention.
  • FIG. 7 is an exemplary view showing an image and a motion vector extracted from an image frame according to an embodiment of the present invention.
  • FIG. 8 is an exemplary diagram illustrating a pre-stored database in which behavior categories are matched for each unit behavior according to an embodiment of the present invention.
  • FIG. 9 is an exemplary view showing sequence data generated by arranging a plurality of unit actions in order, according to an embodiment of the present invention.
  • FIG. 10 is a flowchart schematically illustrating a process of determining whether an abnormal behavior has occurred according to an embodiment of the present invention.
  • FIG. 11 is an exemplary view showing how abnormal behavior is determined based on the proportion of unit actions classified as abnormal, according to an embodiment of the present invention.
  • FIG. 12 is a flowchart illustrating a process of determining whether or not an abnormal behavior is based on a ratio occupied by a unit behavior classified as an abnormal behavior according to an embodiment of the present invention.
  • FIG. 13 is a flowchart of a method for determining abnormal behavior based on an action sequence that further includes a step of classifying an object category, according to an embodiment of the present invention.
  • A "unit action" refers to an action that constitutes part of an object's movement.
  • A "normal action" refers to a unit action that is generally performed as everyday behavior and does not deviate from the standards of the group to which the individual belongs.
  • An "abnormal action" refers to a unit action that is not a normal action, i.e., one that is not generally performed.
  • "Abnormal behavior" refers to the behavior to be finally detected, such as criminal activity.
  • FIG. 1 is a block diagram of a system for determining whether an abnormal behavior is based on an action sequence according to an embodiment of the present invention.
  • A system 1000 for determining abnormal behavior based on an action sequence includes an input unit 10, an object recognition unit 20, a behavior recognition unit 30, a classification unit 40, a generation unit 50, and a determination unit 60.
  • the input unit 10 serves to receive image data.
  • Image data is data about an image captured through a photographing device installed in the field.
  • the image data includes video streaming data for reproducing the captured image in real time and data on the stored image.
  • The photographing device installed in the field is any device including a camera capable of capturing video of the scene.
  • The photographing device may take the form of a CCTV camera and, if necessary, may additionally be provided with sensors or artificial-intelligence functions.
  • The input unit 10 may receive image data either by being integrated with the photographing device or by receiving the captured image data over wired or wireless communication.
  • Image data may be input and transmitted using a storage cache technology.
  • Storage cache technology is a technology that can use storage space like a cache. That is, in the process of transmitting and receiving necessary data, the data stored in the storage space serves as a cache without the need to transmit and receive the entire data through the network every time, and some or all of the updated data is matched with the intermediate stored data to be transmitted/received. It is a technology that allows you to do it. By applying the storage cache technology, it is possible to process high-capacity data (for example, video data such as video streaming) faster. Through this, the task of recognizing the object and its behavior in real time and determining whether it corresponds to an abnormal behavior can be seamlessly processed.
  • The object recognition unit 20 recognizes and tracks one or more objects in the received image data.
  • The object recognition unit 20 can distinguish and recognize each object. That is, regardless of how many objects an image frame contains, the object recognition unit 20 can recognize each object of interest by distinguishing it from the background.
  • The object recognition unit 20 includes an object recognition model.
  • The "object recognition model" is a model that recognizes objects by analyzing image data with a computer, and may include algorithms or data for searching for objects efficiently or for applying machine learning (or deep learning).
  • The object recognition model of the object recognition unit 20 may include two-stage or single-stage algorithms.
  • The two-stage method first proposes regions likely to contain objects, using a deep-learning-based region proposal network (RPN) or a computer-vision technique that selectively searches such regions.
  • As examples of two-stage algorithms, the object recognition model may include Region-based CNN (R-CNN), Faster R-CNN, or Region-based Fully Convolutional Networks (R-FCN).
  • The single-stage method searches for objects based on predetermined positions and sizes.
  • As examples of single-stage algorithms, the object recognition model may include You Only Look Once (YOLO), the Single Shot MultiBox Detector (SSD), or RetinaNet.
  • The behavior recognition unit 30 recognizes a plurality of unit actions for each object recognized by the object recognition unit 20.
  • The behavior recognition unit 30 includes a behavior recognition model.
  • The "behavior recognition model" is a model that recognizes an object's behavior by analyzing image data with a computer; it recognizes the behavior of the objects of interest that have passed through the object recognition model.
  • the behavior recognition model may include an algorithm for improving recognition rate and accuracy.
  • The behavior recognition model includes an algorithm of the two-stream model type.
  • The two-stream model separates image data into a spatial stream and a temporal stream, extracts an image 31 from the spatial stream and a motion vector 32 from the temporal stream, and combines them to recognize behavior.
  • A 3D CNN may be applied to improve the recognition rate of the two-stream model.
  • A 3D CNN takes three-dimensional rather than two-dimensional input, so a time axis can be included, which improves the recognition rate.
  • The classification unit 40 classifies object categories and behavior categories.
  • The "object category" is a category for classifying the properties of an object.
  • The object category may include person, animal, background, and the like, but is not limited thereto; it is any category capable of classifying the properties or types of objects.
  • The classification unit 40 determines and assigns an object category to each recognized object.
  • The object category is used by the behavior recognition unit 30 to select the objects whose behavior should be recognized.
  • Depending on the purpose, the behavior recognition unit 30 may perform behavior recognition only on objects whose category is classified as person. This improves processing speed and performance by skipping behavior recognition for irrelevant objects.
  • the "behavior category” is a category for classifying the types of behavior of an object.
  • the behavioral category may include normal behavior and abnormal behavior.
  • the classification unit 40 compares a plurality of unit actions recognized by the behavior recognition unit 30 with a pre-stored database (refer to FIG. 8) to match the behavior category.
  • The generation unit 50 generates sequence data by arranging the plurality of unit actions in order.
  • Sequence data is data obtained by arranging the recognized unit actions in order. That is, sequence data is data in which the unit actions recognized from an object are arranged sequentially. The sequence data also includes, for each predetermined unit action, the number of times that action was detected for the object (see FIG. 11). The sequence data is used to determine whether abnormal behavior has occurred; the specific method is described later.
  • The determination unit 60 determines whether the behavior (or behavior flow) of a recognized object corresponds to abnormal behavior, based on the sequence data generated by the generation unit 50.
  • The determination unit 60 may include a sequence classification model.
  • the "sequence classification model” is a model that analyzes sequence data generated by the generation unit 50 to determine whether the motion of an object corresponds to an abnormal behavior.
  • the sequence classification model determines whether an abnormal behavior is based on a ratio of the number of unit behaviors in which the behavior category is classified as abnormal behavior in sequence data. A detailed description of this will be described later with reference to FIG. 11.
  • The sequence classification model may also compare the sequence data generated by the generation unit 50 with sequence data that is pre-stored or learned through machine learning. That is, the stored or learned action sequences are classified case by case as abnormal or normal; if the generated sequence data matches an action sequence classified as abnormal, it is judged abnormal, and if it matches an action sequence classified as normal, it is judged normal.
  • The sequence classification model may calculate a first score by comparing the sequence data generated by the generation unit 50 with the pre-stored or learned sequence data, calculate a second score from the ratio of unit actions classified as abnormal to the total number of unit actions in the sequence data, and determine whether abnormal behavior has occurred by combining the first and second scores.
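The first-score/second-score combination described above can be sketched as a weighted sum. The 0-to-1 score ranges and the equal weighting are assumptions for illustration; the patent does not specify how the two scores are combined.

```python
def combined_abnormality_score(match_score, actions, abnormal_ids, weight=0.5):
    """Sketch of the two-score combination described above.

    match_score  -- first score: similarity of the sequence to known
                    abnormal sequences (assumed 0.0 .. 1.0)
    actions      -- list of unit-action identification numbers, in order
    abnormal_ids -- set of identification numbers whose category is abnormal
    weight       -- assumed blending weight between the two scores
    """
    # Second score: proportion of unit actions classified as abnormal.
    ratio = sum(a in abnormal_ids for a in actions) / len(actions)
    return weight * match_score + (1 - weight) * ratio

# Hypothetical sequence: ids 3, 4, 6 are abnormal, so the ratio is 3/5 = 0.6.
score = combined_abnormality_score(0.8, [1, 3, 4, 2, 6], {3, 4, 6, 7})
```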
  • FIG. 2 is a flowchart schematically illustrating a method of determining whether an abnormal behavior is based on an action sequence according to an embodiment of the present invention.
  • A method of determining abnormal behavior based on an action sequence includes: receiving image data (S100); recognizing one or more objects from the image data (S200); recognizing a plurality of unit actions for each recognized object (S300); classifying a behavior category for each of the plurality of unit actions (S400); generating sequence data based on the plurality of unit actions (S500); and determining whether abnormal behavior has occurred based on the sequence data (S600).
  • Step S100 is a step in which the input unit 10 receives image data.
  • Image data may be streamed in real time or received in a stored form.
  • Step S200 is a step in which the object recognition unit 20 recognizes one or more objects from image data input through the input unit 10.
  • Step S300 is a step in which the behavior recognition unit 30 recognizes a plurality of unit actions for each object recognized by the object recognition unit 20.
  • In step S400, the classification unit 40 classifies the behavior category of each of the plurality of unit actions recognized by the behavior recognition unit 30 as normal or abnormal.
  • Step S500 is a step in which the generation unit 50 arranges the plurality of unit actions in order to generate sequence data.
  • In step S600, the determination unit 60 determines whether abnormal behavior has occurred based on the sequence data generated by the generation unit 50.
  • FIG. 3 is a detailed flowchart illustrating a step of recognizing an object according to an embodiment of the present invention.
  • In the step S200 of recognizing an object, a YOLO algorithm may be applied; the step may include dividing the image frame into grids of equal size (S210), forming one or more bounding boxes around the objects included in the image frame (S220), and extracting the grid cell containing the center point of each bounding box (S230).
  • Step S210 is a step of dividing the image frame into a plurality of grids of equal size.
  • Step S220 is a step of forming, for each object, a bounding box sized to surround the object's image in the frame.
  • a bounding box is formed for each object.
  • The bounding boxes may be formed by predicting, for each grid cell, the number of bounding boxes (anchor boxes) needed for each object, based on preset shapes centered on the center of the cell.
  • The number of anchor boxes can be determined from the data using the k-means algorithm.
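The anchor selection mentioned above can be sketched with a tiny k-means over ground-truth box dimensions. The Euclidean distance and the sample boxes are simplifying assumptions (YOLO's own anchor selection uses an IoU-based distance); this is not the patented procedure.

```python
def kmeans_boxes(dims, k=2, iters=10):
    """Tiny k-means sketch for deriving anchor-box shapes from data.

    dims -- list of (width, height) pairs of ground-truth bounding boxes
    k    -- number of anchor shapes to produce
    """
    centers = dims[:k]  # naive initialization from the first k boxes
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for w, h in dims:
            # Assign each box to the nearest center (Euclidean, simplified).
            i = min(range(k), key=lambda c: (w - centers[c][0]) ** 2
                                            + (h - centers[c][1]) ** 2)
            clusters[i].append((w, h))
        # Recompute each center as the mean of its cluster.
        centers = [
            (sum(w for w, _ in cl) / len(cl), sum(h for _, h in cl) / len(cl))
            if cl else centers[i]
            for i, cl in enumerate(clusters)
        ]
    return centers

# Tall person-shaped boxes and wide boxes separate into two anchor shapes:
boxes = [(30, 90), (32, 100), (28, 95), (80, 40), (85, 42), (78, 38)]
anchors = kmeans_boxes(boxes, k=2)
```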
  • Step S230 is a step of extracting the grid cell containing the center point of the bounding box, thereby determining the cell that identifies the recognized object.
  • Because each object recognized through step S230 is matched to a single grid cell smaller than its bounding box, multiple objects can be distinguished and recognized more accurately. That is, when there are a plurality of objects, even if the bounding boxes surrounding them overlap, the center cells matched to the objects do not overlap, so the recognition rate improves.
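Steps S210 to S230 can be sketched as follows. The box format (x_min, y_min, x_max, y_max), the 416-pixel frame, and the 13x13 grid are illustrative assumptions, not values from the patent.

```python
def center_grid_cell(box, grid_size, frame_w, frame_h):
    """Map a bounding box to the grid cell containing its center point
    (sketch of steps S210-S230)."""
    x_min, y_min, x_max, y_max = box
    cx = (x_min + x_max) / 2.0   # center point of the bounding box
    cy = (y_min + y_max) / 2.0
    col = min(int(cx * grid_size / frame_w), grid_size - 1)
    row = min(int(cy * grid_size / frame_h), grid_size - 1)
    return row, col

# Two people whose bounding boxes overlap can still map to distinct cells:
cell_a = center_grid_cell((100, 80, 220, 400), grid_size=13, frame_w=416, frame_h=416)
cell_b = center_grid_cell((180, 90, 300, 410), grid_size=13, frame_w=416, frame_h=416)
```

Even though the two boxes overlap horizontally, their center cells differ, which is exactly the property the passage credits with improving the recognition rate.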
  • FIG. 4 is an exemplary view showing a state of recognizing a plurality of objects according to an embodiment of the present invention.
  • FIG. 4(a) is an exemplary diagram showing a plurality of objects (people) entering a store.
  • FIG. 4(b) is an exemplary diagram showing the image frame divided into a plurality of grids of equal size, with a bounding box formed for each object.
  • FIG. 4(c) is an exemplary diagram showing the grid cell corresponding to the center point of each bounding box being matched.
  • In this example, the bounding boxes formed for the plurality of objects overlap.
  • The position of each object is recognized based on the coordinate value corresponding to the center point of its bounding box.
  • The coordinate values corresponding to the center points of the bounding boxes formed for the objects are (x1, y1), (x2, y2), and (x3, y3), respectively.
  • The objects are recognized based on the grid cells corresponding to these coordinate values (see FIG. 4(c)).
  • FIG. 5 is a flowchart further including analyzing the similarity of the object in the step of recognizing an object according to an embodiment of the present invention.
  • In the step S200 of recognizing an object, a Siamese algorithm may be applied, and the step may further include a step S240 of analyzing the similarity of the objects.
  • Step S240 is a step of analyzing the similarity between the object recognized in the first image frame and the object recognized in the second image frame adjacent to the first image frame.
  • The Siamese algorithm recognizes and classifies objects and analyzes the similarity of the recognized objects frame by frame. That is, a vector value is computed for each recognized object in each frame, and by analyzing the similarity of these vectors, objects recognized in different frames are matched, producing a clustering effect for the same object. This enables more accurate and effective object tracking.
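The per-frame matching of step S240 might be sketched as follows, assuming a Siamese network has already produced an embedding vector for each detected object. The cosine measure, the 0.9 threshold, and the greedy matching are illustrative assumptions.

```python
import math

def cosine_similarity(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def match_objects(prev_frame, curr_frame, threshold=0.9):
    """Greedily match each object in the current frame to the most similar
    object in the previous frame (sketch of step S240)."""
    matches = {}
    for curr_id, curr_vec in curr_frame.items():
        best_id, best_sim = None, threshold
        for prev_id, prev_vec in prev_frame.items():
            sim = cosine_similarity(curr_vec, prev_vec)
            if sim >= best_sim:
                best_id, best_sim = prev_id, sim
        matches[curr_id] = best_id   # None means a newly appeared object
    return matches

# Hypothetical embeddings for two people across two adjacent frames:
prev = {"person_1": [1.0, 0.1, 0.0], "person_2": [0.0, 1.0, 0.2]}
curr = {"a": [0.98, 0.12, 0.01], "b": [0.05, 0.97, 0.22]}
links = match_objects(prev, curr)
```

Matching by embedding similarity rather than by box position is what lets tracking survive the overlapping-bounding-box situation shown in FIG. 4.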
  • FIG. 6 is a detailed flowchart illustrating a step of recognizing a unit action according to an embodiment of the present invention.
  • In the step S300 of recognizing a unit action, a two-stream model may be applied; the step may include extracting an image included in an image frame (S310), extracting a motion vector from that frame and the frames adjacent to it (S320), and recognizing the behavior based on the extracted image and motion vector (S330).
  • Step S310 is a step of extracting the image 31 included in the image frame of the spatial stream.
  • Step S320 is a step of extracting the motion vector 32 from the temporal stream, using the image frame corresponding to the one from which the image 31 was extracted together with the frames adjacent to it.
  • Step S320 includes applying optical flow to the images in the frames and vectorizing the object's behavior from the result. That is, optical flow is computed over the image frames, and a vector value characterizing the object's behavior is calculated from it.
  • Step S330 is a step of recognizing the object's behavior based on the image 31 and the motion vector 32 extracted from the frames. That is, the behavior is recognized based on a score obtained by combining the extracted image 31 and motion vector 32.
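The score combination of step S330 can be sketched as a late fusion of per-class scores from the two streams. The action names, the example scores, and the equal weighting are assumptions for illustration; the patent only says the two results are combined into a score.

```python
def fuse_two_stream(spatial_scores, temporal_scores, w_spatial=0.5):
    """Late fusion sketch for step S330: weighted average of per-action
    scores from the spatial stream (image 31) and the temporal stream
    (motion vector 32), returning the top-scoring action."""
    fused = {
        action: w_spatial * spatial_scores[action]
                + (1 - w_spatial) * temporal_scores[action]
        for action in spatial_scores
    }
    return max(fused, key=fused.get), fused

# Hypothetical per-action scores from each stream:
spatial = {"walking": 0.6, "picking_up": 0.3, "standing": 0.1}   # image 31
temporal = {"walking": 0.2, "picking_up": 0.7, "standing": 0.1}  # motion vector 32
best, fused = fuse_two_stream(spatial, temporal)
```

Here the appearance alone suggests walking, but the motion information tips the fused score toward picking something up, which is the point of combining the two streams.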
  • FIG. 7 is an exemplary view showing an image and a motion vector extracted from an image frame according to an embodiment of the present invention.
  • The image data is separated into a spatial stream and a temporal stream; the image 31 is extracted from the spatial stream, and the motion vector 32 from the temporal stream.
  • FIG. 8 is an exemplary diagram illustrating a pre-stored database in which behavior categories are matched for each unit behavior according to an embodiment of the present invention.
  • Identification numbers may be assigned to a plurality of preset unit actions, and a matching behavior category may be stored and managed for each.
  • The plurality of unit actions may include: entering the store, walking, scanning the store, watching CCTVs, picking up things, putting things in a pocket, putting things in a bag, putting things in a shopping basket, putting down things, and standing.
  • identification numbers 1 to 10 may be assigned to each unit action.
  • Each unit action is matched with a behavior category determined in advance as normal or abnormal.
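The pre-stored database of FIG. 8 might look like the following table in code. The identification numbers follow the listing order above, and the abnormal ids (3, 4, 6, 7) follow FIG. 9; the remaining normal/abnormal assignments are assumptions for illustration.

```python
# Sketch of the FIG. 8 database: id -> (unit action, behavior category).
UNIT_ACTIONS = {
    1:  ("entering the store",                "normal"),
    2:  ("walking",                           "normal"),
    3:  ("scanning the store",                "abnormal"),
    4:  ("watching CCTVs",                    "abnormal"),
    5:  ("picking up things",                 "normal"),
    6:  ("putting things in a pocket",        "abnormal"),
    7:  ("putting things in a bag",           "abnormal"),
    8:  ("putting things in a shopping basket", "normal"),
    9:  ("putting down things",               "normal"),
    10: ("standing",                          "normal"),
}

def behavior_category(action_id):
    """Look up the behavior category matched to a unit action (step S400)."""
    return UNIT_ACTIONS[action_id][1]
```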
  • FIG. 9 is an exemplary view showing sequence data generated by arranging a plurality of unit actions in order, according to an embodiment of the present invention.
  • the generation unit 50 of the system 1000 that determines whether an abnormal behavior is based on the behavior sequence is arranged in order to generate behavior sequence data for each object by placing a plurality of unit behaviors recognized for each object.
  • In FIG. 9, the sequence data for each object is shown in a separate row.
  • The sequence data may include information on the behavior category of the arranged unit actions. That is, the sequence data may be generated while discriminating whether the behavior category of each arranged unit action is normal or abnormal.
  • In the example of FIG. 8, the unit behaviors with identification numbers 3, 4, 6, and 7 are classified under the abnormal behavior category. Accordingly, the sequence data may include information on the probability of abnormal behavior according to the number or ratio of unit behaviors 3, 4, 6, and 7 (shown shaded in FIG. 9) that it contains.
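Sequence generation as described above can be sketched as follows; the dictionary layout of the resulting record is an assumption for illustration, with the abnormal identification numbers taken from the example in the text:

```python
def make_sequence(unit_action_ids, abnormal_ids=frozenset({3, 4, 6, 7})):
    """Arrange the recognized unit actions in order and attach the
    number and ratio of actions whose pre-determined category is
    abnormal, as the sequence data described in the text."""
    n_abnormal = sum(1 for a in unit_action_ids if a in abnormal_ids)
    n_total = len(unit_action_ids)
    return {
        "sequence": list(unit_action_ids),
        "abnormal_count": n_abnormal,
        "abnormal_ratio": n_abnormal / n_total if n_total else 0.0,
    }

# Hypothetical per-object sequence: actions 3 and 6 are abnormal.
seq = make_sequence([1, 2, 3, 5, 6, 9])
```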
  • FIG. 10 is a flowchart schematically illustrating a process of determining whether an abnormal behavior has occurred according to an embodiment of the present invention.
  • As shown in FIG. 10, the input image data proceeds, in order, through an object recognition model (shown as model 1), a behavior recognition model (shown as model 2), and a sequence classification model (shown as model 3).
  • model 1: object recognition model
  • model 2: behavior recognition model
  • model 3: sequence classification model
  • The first input data is the image data input through the input unit 10.
  • The image data includes images of an object's movement over time.
  • The object recognition model recognizes one or more objects from the input image data.
  • The behavior recognition model recognizes the behavior of each recognized object.
  • The generation unit 50 generates sequence data based on the recognized unit behaviors.
  • The sequence classification model finally determines whether the motion of the recognized object corresponds to an abnormal behavior, based on the generated sequence data.
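The three-model pipeline above can be sketched with stub functions standing in for the trained models; the stub outputs and the 0.5 threshold are illustrative assumptions, not values from the patent:

```python
def object_recognition_model(image_data):           # model 1 (stub)
    """Recognize one or more objects from the input image data."""
    return ["object-1"]

def behavior_recognition_model(image_data, obj):    # model 2 (stub)
    """Recognize the sequence of unit action ids for one object."""
    return [2, 3, 6]

def sequence_classification_model(sequence):        # model 3 (stub)
    """Decide abnormality from the generated sequence data."""
    abnormal = {3, 4, 6, 7}                         # ids from the FIG. 8 example
    ratio = sum(a in abnormal for a in sequence) / len(sequence)
    return ratio >= 0.5                             # assumed preset value

def detect_abnormal(image_data):
    """Run the FIG. 10 pipeline: model 1 -> model 2 -> model 3."""
    results = {}
    for obj in object_recognition_model(image_data):
        sequence = behavior_recognition_model(image_data, obj)
        results[obj] = sequence_classification_model(sequence)
    return results
```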
  • FIG. 11 is an exemplary view showing determination of whether a behavior is abnormal based on the ratio occupied by unit behaviors classified as abnormal, according to an embodiment of the present invention.
  • FIG. 11 shows the sequence data for each object in a separate row.
  • For each object, the total number of recognized unit actions and the count of each unit action (unit actions 1 to 10 are shown as examples in FIG. 11) are shown.
  • Also shown are the ratio of each unit action's count to the total number of recognized unit actions, and the result of determining, based on that ratio, whether the behavior is abnormal.
  • The determination unit 60 may determine whether a behavior is abnormal based on the ratio occupied in the sequence data by unit behaviors classified as abnormal. As a specific example, if the ratio occupied by one or more unit actions classified as abnormal is equal to or greater than a preset value, the behavior may be determined to be abnormal. As another specific example, the determination is still based on that ratio, but a weight is assigned to each unit behavior, and the result calculated by reflecting the weights is compared with a preset value to determine whether the behavior is abnormal.
  • FIG. 12 is a flowchart illustrating a process of determining whether a behavior is abnormal based on the ratio occupied by unit behaviors classified as abnormal, according to an embodiment of the present invention.
  • Step S610 is a step of determining whether the ratio of one or more unit actions classified as abnormal to the total unit actions in the sequence data is equal to or greater than a preset value.
  • In step S620, if that ratio is greater than or equal to the preset value, the behavior is determined to be abnormal.
  • In step S630, if that ratio is less than the preset value, the behavior is determined to be normal.
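Steps S610 to S630, including the weighted variant mentioned earlier, can be sketched as follows. The preset value 0.3 and the example weights are illustrative assumptions, not values from the patent:

```python
def judge_abnormal(sequence, weights=None,
                   abnormal_ids=frozenset({3, 4, 6, 7}), preset=0.3):
    """S610: compute the ratio of unit actions classified as abnormal
    to all unit actions (each abnormal action contributes its weight,
    1.0 by default). S620/S630: compare the result with a preset value."""
    weights = weights or {}
    score = sum(weights.get(a, 1.0) for a in sequence if a in abnormal_ids)
    ratio = score / len(sequence)                       # S610
    return "abnormal" if ratio >= preset else "normal"  # S620 / S630

print(judge_abnormal([1, 2, 3, 4, 5]))    # 2/5 = 0.4  -> abnormal
print(judge_abnormal([1, 2, 5, 9, 10]))   # 0/5 = 0.0  -> normal

# Weighted variant: e.g. pocketing items (id 6) counts double.
print(judge_abnormal([1, 2, 6], weights={6: 2.0}))      # 2/3 -> abnormal
```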
  • FIG. 13 is a flowchart of a method for determining whether a behavior is abnormal based on a behavior sequence according to an embodiment of the present invention, further including a step of classifying an object category.
  • Compared with FIG. 2, a step (S250) of classifying an object category for each recognized object is further included.
  • The classification unit 40 designates the object category to which each recognized object belongs.
  • The classification unit 40 may classify the object category of a recognized object as background, person, or animal.
  • The classified object category can be used so that the object tracking and behavior recognition processes proceed only for objects of interest that require behavior recognition according to the purpose.
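Step S250 and the subsequent filtering can be sketched as follows. The category labels follow the text (background, person, animal); the detection record layout and the policy of tracking only persons are assumptions for illustration:

```python
def filter_objects_of_interest(detections, interest=frozenset({"person"})):
    """Keep only recognized objects whose classified category is an
    object of interest, so that tracking and behavior recognition run
    only for them (step S250 plus the filtering described above)."""
    return [d for d in detections if d["category"] in interest]

# Hypothetical output of the classification unit 40.
detections = [
    {"id": 1, "category": "person"},
    {"id": 2, "category": "background"},
    {"id": 3, "category": "animal"},
]
tracked = filter_objects_of_interest(detections)
```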
  • The method may be implemented as instructions residing in RAM (Random Access Memory), ROM (Read Only Memory), EPROM (Erasable Programmable ROM), EEPROM (Electrically Erasable Programmable ROM), flash memory, a hard disk, a removable disk, a CD-ROM, or any other type of computer-readable recording medium well known in the art to which the present invention pertains.


Abstract

Disclosed are a method, a program, and a system for determining whether abnormal behavior occurs, based on a behavior sequence. The method for determining whether abnormal behavior occurs based on a behavior sequence may comprise the steps, performed by a computer, of: receiving input of image data; recognizing one or more objects in the image data; recognizing a plurality of unit behaviors for each recognized object; categorizing the plurality of unit behaviors as normal behaviors or abnormal behaviors; generating sequence data by sequentially arranging the plurality of unit behaviors; and determining whether abnormal behavior occurs, based on the ratio of unit behaviors categorized as abnormal in the sequence data.
PCT/KR2019/016068 2019-11-21 2019-11-21 Procédé, programme et système pour déterminer si un comportement anormal se produit, sur la base d'une séquence de comportement WO2021100919A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR10-2019-0150252 2019-11-21
KR1020190150252A KR20210062256A (ko) 2019-11-21 2019-11-21 Method, program, and system for determining whether abnormal behavior occurs based on a behavior sequence

Publications (1)

Publication Number Publication Date
WO2021100919A1 true WO2021100919A1 (fr) 2021-05-27

Family

ID=75980607

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2019/016068 WO2021100919A1 (fr) 2019-11-21 2019-11-21 Procédé, programme et système pour déterminer si un comportement anormal se produit, sur la base d'une séquence de comportement

Country Status (2)

Country Link
KR (1) KR20210062256A (fr)
WO (1) WO2021100919A1 (fr)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR102570126B1 (ko) * 2021-07-26 2023-08-22 세종대학교산학협력단 Method and apparatus for generating video synopsis based on abnormal object detection
KR20230027479A (ko) * 2021-08-19 2023-02-28 주식회사 유니유니 Abnormal behavior detection system using deep-learning-based de-identified data analysis
KR102484412B1 (ko) * 2022-07-26 2023-01-03 주식회사 월드씨앤에스 Taxi dispatch system using artificial intelligence
KR102484407B1 (ko) * 2022-07-26 2023-01-03 주식회사 월드씨앤에스 Taxi boarding confirmation system using artificial intelligence
KR102662251B1 (ko) * 2023-07-24 2024-04-30 주식회사 이투온 Artificial-intelligence-based dementia patient tracking method

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8619135B2 (en) * 2009-11-30 2013-12-31 Canon Kabushiki Kaisha Detection of abnormal behaviour in video objects
KR20140076815A (ko) * 2012-12-13 2014-06-23 한국전자통신연구원 픽셀 기반 비정상 움직임 검출 방법 및 장치
KR20150100141A (ko) * 2014-02-24 2015-09-02 주식회사 케이티 행동패턴 분석 장치 및 방법

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
KR20190051128A (ko) 2017-11-06 2019-05-15 전자부품연구원 Method and system for detecting vulnerable pedestrians based on behavior recognition using machine learning techniques

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAMAL KANT VERMA, BRIJ MOHAN SINGH & AMIT DIXIT: "A review of supervised and unsupervised machine learning techniques for suspicious behavior recognition in intelligent surveillance system", ORIGINAL RESEARCH, INTERNATIONAL JOURNAL OF INFORMATION TECHNOLOGY, 20 September 2019 (2019-09-20), pages 1 - 14 *
POPOOLA ET AL.: "Video-Based Abnormal Human Behavior Recognition-A Review", IEEE TRANSACTIONS ON SYSTEMS, MAN, AND CYBERNETICS-PART C: APPLICATIONS AND REVIEWS, vol. 42, no. 6, 30 November 2012 (2012-11-30), pages 865 - 878, XP011483369, DOI: 10.1109/TSMCC.2011.2178594 *

Also Published As

Publication number Publication date
KR20210062256A (ko) 2021-05-31

Similar Documents

Publication Publication Date Title
WO2021100919A1 (fr) Procédé, programme et système pour déterminer si un comportement anormal se produit, sur la base d'une séquence de comportement
WO2020040391A1 (fr) Système combiné basé sur un réseau de couches profondes pour une reconnaissance de piétons et une extraction d'attributs
JP6018674B2 (ja) 被写体再識別のためのシステム及び方法
KR100831122B1 (ko) 얼굴 인증 장치, 얼굴 인증 방법, 및 출입 관리 장치
WO2020130309A1 (fr) Dispositif de masquage d'image et procédé de masquage d'image
CN111832457A (zh) 基于云边协同的陌生人入侵检测方法
WO2020196985A1 (fr) Appareil et procédé de reconnaissance d'action vidéo et de détection de section d'action
US20200125923A1 (en) System and Method for Detecting Anomalies in Video using a Similarity Function Trained by Machine Learning
JP4667508B2 (ja) 移動体情報検出装置、移動体情報検出方法および移動体情報検出プログラム
WO2022055023A1 (fr) Système de plateforme d'analyse d'image intelligent intégré ido capable de reconnaître des objets intelligents
WO2022114895A1 (fr) Système et procédé de fourniture de service de contenu personnalisé à l'aide d'informations d'image
KR102511287B1 (ko) 영상 기반 자세 예측 및 행동 검출 방법 및 장치
WO2020032506A1 (fr) Système de détection de vision et procédé de détection de vision l'utilisant
KR101879444B1 (ko) Cctv 구동 방법 및 시스템
KR20200059643A (ko) 영상 분석 기반의 금융자동화기기 보안 시스템 및 그 방법
WO2019035544A1 (fr) Appareil et procédé de reconnaissance faciale par apprentissage
KR101547255B1 (ko) 지능형 감시 시스템의 객체기반 검색방법
WO2023128186A1 (fr) Système et procédé de sécurité d'image basés sur un sous-titrage vidéo multimodal
WO2022019601A1 (fr) Extraction d'un point caractéristique d'un objet à partir d'une image ainsi que système et procédé de recherche d'image l'utilisant
WO2021125539A1 (fr) Dispositif, procédé et programme informatique de classification d'objets présents dans une image
KR102612422B1 (ko) 네트워크를 이용한 고정형 폐쇄회로 영상 엣지 단말의 적응형 객체 인식 장치 및 방법
KR102481215B1 (ko) 통계적 인구 관제를 위한 공간 내 객체감지시스템
Costache et al. Target audience response analysis in out-of-home advertising using computer vision
CN113297976A (zh) 一种基于深度学习的基站入侵检测方法和系统
CN112395922A (zh) 面部动作检测方法、装置及系统

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 19953570

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 19953570

Country of ref document: EP

Kind code of ref document: A1