CN111079694A - Counter assistant job function monitoring device and method - Google Patents
- Publication number
- CN111079694A CN111079694A CN201911384076.8A CN201911384076A CN111079694A CN 111079694 A CN111079694 A CN 111079694A CN 201911384076 A CN201911384076 A CN 201911384076A CN 111079694 A CN111079694 A CN 111079694A
- Authority
- CN
- China
- Prior art keywords
- subsystem
- target
- judging
- person
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/40—Scenes; Scene-specific elements in video content
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/24—Classification techniques
- G06F18/241—Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/20—Image preprocessing
- G06V10/25—Determination of region of interest [ROI] or a volume of interest [VOI]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/10—Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V40/00—Recognition of biometric, human-related or animal-related patterns in image or video data
- G06V40/20—Movements or behaviour, e.g. gesture recognition
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V2201/00—Indexing scheme relating to image or video recognition or understanding
- G06V2201/07—Target detection
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Multimedia (AREA)
- Data Mining & Analysis (AREA)
- Human Computer Interaction (AREA)
- Computer Vision & Pattern Recognition (AREA)
- General Engineering & Computer Science (AREA)
- Evolutionary Computation (AREA)
- Evolutionary Biology (AREA)
- Bioinformatics & Computational Biology (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Health & Medical Sciences (AREA)
- General Health & Medical Sciences (AREA)
- Psychiatry (AREA)
- Social Psychology (AREA)
- Artificial Intelligence (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a counter assistant job-performance monitoring device and method comprising a video frame subsystem, and is characterized in that: the video frame subsystem is electrically connected with the target detection subsystem, the target detection subsystem is electrically connected with the target filtering subsystem, the target filtering subsystem is electrically connected with the sleep discrimination subsystem, the mobile-phone-playing discrimination subsystem, the off-duty discrimination subsystem and the chat discrimination subsystem respectively, and the sleep discrimination subsystem, the mobile-phone-playing discrimination subsystem, the off-duty discrimination subsystem and the chat discrimination subsystem are each electrically connected with the abnormality alarm subsystem. The invention relates to the field of counter assistant equipment, in particular to a counter assistant job monitoring device and method. The invention can accurately locate the position of abnormal behavior and provides a good evaluation standard for behavior judgment.
Description
Technical Field
The invention relates to the field of counter assistant equipment, in particular to a counter assistant job monitoring device and method.
Background
At present, most government service centers in China rely on monitoring cameras in halls for judging and evaluating the job-performing behaviors of office workers, and when abnormal behaviors occur, the abnormal behaviors can only be searched for in time periods from corresponding hard disk video recorders, so that the efficiency is extremely low.
The job monitoring system analyzes whether abnormal behaviors occur and raises alarms by analyzing text data or video data acquired by monitoring; in essence, it uses the acquired data to identify behavior categories and to locate where those behaviors occur by means of a behavior recognition method. The actual application scenes are wide. Existing behavior recognition methods based on traditional machine learning have low recognition precision; those based on deep neural networks have high recognition precision but a large computational load, cannot be deployed to run on embedded terminal equipment, and cannot meet real-time requirements.
The patent "An organization job-performance monitoring system" (application publication No. CN106682797A) discloses a method whose advantages are: providing a quantitative measure for job-performance indexes and automatic system evaluation of the job-performance situation, verifying the current job-performance situation, constructing a unified, fair and reasonable organizational job-performance evaluation index system and a judgment method with little manual intervention, and standardizing and guiding the normal development of the business work of organization personnel. Its disadvantage is that the data acquisition unit of the device only acquires text data and does not use video data as a basis for judging abnormal behavior; it cannot recognize the behavior of office staff and is limited to judging, from the degree of work completion, whether the staff are in a normal working state.
The patent "Behavior recognition system" (application publication No. CN101622652A) provides a method and system for analyzing and learning behavior based on an acquired video stream. Objects depicted in the stream are determined from analysis of the video frames, and each object may have a corresponding search model used to track its motion frame by frame. A classification of the object is determined and a semantic representation of the object is generated; the semantic representation is used to determine the behavior of the object and to learn the behaviors occurring in the environment depicted by the video stream. The system thus learns normal and abnormal behaviors quickly and in real time by analyzing the movement, activity or absence of objects in the environment, and identifies and predicts abnormal and suspicious behavior based on what it has learned. The advantage of this patent is a machine-learning-based behavior recognition system able to learn behaviors from information obtained over time, distinguishing normal from abnormal behavior within a scene by analyzing motion or activity. Its shortcomings are that the system is based on traditional machine learning, so recognition precision is low and cannot meet current application requirements; the computational load is large, placing high demands on hardware computing capacity; and it currently runs only on a computer platform and cannot localize specific abnormal actions within a scene.
The patent "Training method of behavior recognition model, behavior recognition method, apparatus and device" (application publication No. CN109815881A) provides a behavior recognition model method in the technical field of artificial intelligence, comprising: acquiring a training set; detecting a target area in each image; calculating the average brightness information of the target areas over the training set; dividing the target area of each image into foreground and background and marking the background black; adjusting the brightness of the foreground of each image according to the average brightness information to obtain preprocessed images; and inputting the preprocessed images into a neural network for training until the network converges, taking the converged network as the behavior recognition model. The method can eliminate the influence of the background, enhance robustness in complex scenes, be applied to scenes with complex real lighting, improve recognition accuracy, and avoid the accuracy drop caused by overexposure or insufficient light. Its disadvantages are that a large amount of data must be collected in advance for the application scene, the sample data must be preprocessed before being input into the neural network, the trained model can only be used after the network converges, and the detection area is limited to the face region rather than the whole image, so the recognizable actions are limited to face-related actions such as smoking and making phone calls.
Most conventional behavior recognition methods belong to the field of artificial intelligence: a large number of videos must be collected in advance for the actual application scene and then processed (for example, video segmentation and background removal), and finally the videos are input into a neural network for training until it converges. Behavior segments can be identified accurately under ideal conditions, but practical application is limited by the scene, with serious false alarms and missed alarms, so these methods do not deploy well in practice. Moreover, existing action recognition algorithms cannot perform action localization and action recognition simultaneously and place high demands on embedded device performance, making real-time detection of abnormal actions and sending of alarm information difficult to realize. This is a disadvantage of the prior art.
Disclosure of Invention
The invention aims to solve the technical problem of providing a counter assistant job-performance monitoring device and method that accurately locate abnormal behavior during recognition and analysis of job-performing behaviors, and that detect abnormal behaviors and send alarm information when they occur.
The invention adopts the following technical scheme to realize the purpose of the invention:
a counter assistant job-performing monitoring device comprises a video frame subsystem and is characterized in that: the video frame subsystem is electrically connected with the target detection subsystem, the target detection subsystem is electrically connected with the target filtering subsystem, the target filtering subsystem is electrically connected with the sleep discrimination subsystem, the mobile phone playing discrimination subsystem, the off-duty discrimination subsystem and the chat discrimination subsystem respectively, and the sleep discrimination subsystem, the mobile phone playing discrimination subsystem, the off-duty discrimination subsystem and the chat discrimination subsystem are electrically connected with the abnormality alarm subsystem respectively.
A use method of a counter assistant job-performing monitoring device is characterized by comprising the following steps:
the method comprises the following steps: the video frame subsystem processes video frame data into a JPG data set, the video frame subsystem inputs the JPG data set to the target detection subsystem;
step two: the target detection subsystem generates target sets in the picture according to the JPG data set, and each target set comprises the following information: the coordinates (x1, y1) of the upper left corner and the coordinates (x2, y2) of the lower right corner of each target, and a target class designation;
step three: the target detection subsystem inputs the target set into the target filtering subsystem, which filters the target set to output a new target set. The area formula (y2 − y1) * (x2 − x1) is used to calculate the area of each target box. Because the device mainly detects abnormal behaviors of close-range targets, distant targets appearing in the picture background must be filtered out to reduce their influence on the detection result; according to the picture resolution, a threshold is set for each target class, for example one threshold for the class "person" and another for the class "head";
step four: the target filtering subsystem inputs the new target set into the sleep discrimination subsystem. For the class "person", the sleep discrimination subsystem uses the upper-left coordinates (x1person, y1person) and lower-right coordinates (x2person, y2person) from the target set to calculate the width and height of the person box: wperson = x2person − x1person, hperson = y2person − y1person. From the ratio of wperson to hperson it judges whether the target is in a sitting state or in a possibly prone, sleep-like state; if sitting, the image is judged to show no sleeping behavior, and if sleep-like, whether the target is actually sleeping is judged further by the following rule: divide the width wperson of the class "person" into ten equal parts and compute the x coordinates at the two-tenths and eight-tenths points, xLeftperson = x1person + 0.2 * wperson and xRightperson = x1person + 0.8 * wperson; compute xLeftHead and xRightHead of the class "head" in the same way; then compare xLeftHead with xLeftperson, and xRightHead with xRightperson: when x1person < xLeftHead < xLeftperson or xRightperson < xRightHead < x2person, the target is judged to be in a sleeping state;
step five: the target filtering subsystem inputs the new target set into the mobile-phone-playing discrimination subsystem, which first judges, from the target classes in the target set, whether a person, a hand and a mobile phone are present simultaneously; if all three are present, the following rule applies: let the two corner coordinates of the person be (x1person, y1person), (x2person, y2person) and the two corner coordinates of the mobile phone be (x1phone, y1phone), (x2phone, y2phone);
Step 5.1: calculate the area of the mobile phone, Areaphone = (y2phone − y1phone) * (x2phone − x1phone);
Step 5.2: calculate the intersection area of the class "person" and the mobile phone. The upper-left and lower-right coordinates (x1, y1), (x2, y2) of the intersection region are x1 = max(x1person, x1phone), y1 = max(y1person, y1phone), x2 = min(x2person, x2phone), y2 = min(y2person, y2phone). Judge the signs of x2 − x1 and y2 − y1: if either value is negative, the person and the mobile phone do not intersect and the state is judged normal; otherwise calculate the intersection area Areaiou = (y2 − y1) * (x2 − x1) and the proportion p = Areaiou / Areaphone of the intersection area to the mobile phone area. Set a threshold thresh = 0.2; when p > thresh, the target is judged to be in a phone-playing state;
step six: the target filtering subsystem inputs the new target set into the off-duty discrimination subsystem, which first judges, from the classes in the target set, whether a target of class "person" is detected. If no person is present, the following rule applies: detect n consecutive times; if the no-person state occurs more than n/2 times, the person is judged to be off duty, otherwise the state is judged normal;
step seven: the target filtering subsystem inputs the new target set into the chat discrimination subsystem, which judges, from the target classes in the target set, whether targets of class "person" and "head" are detected. If several persons and heads appear simultaneously, the image features of the detected heads and persons must be compared for similarity before judgment, to screen whether two persons really appear in the picture foreground rather than one of them appearing in the background; a quantity representing the image-feature similarity result is defined for this purpose, then:
if two persons are present in the picture foreground, judgment proceeds by the following rule: detect n consecutive times; if the multiple-persons-and-heads state occurs more than n/2 times, chatting is judged, otherwise the state is judged normal;
step eight: if any of the sleep discrimination subsystem, the mobile-phone-playing discrimination subsystem, the off-duty discrimination subsystem or the chat discrimination subsystem detects an abnormal behavior state, the abnormality alarm subsystem transmits alarm information to the upper application end; the upper application end transmits the alarm information to a web server, where an administrator logs in to view it.
As a further limitation of the technical solution, the target detection subsystem needs to detect, classify and locate a target in a whole picture, and the process thereof is as follows:
the principle of target positioning: first, define the set of input pictures as {(Pi, Gi)}, i = 1, ..., N,
wherein Pi = (Px, Py, Pw, Ph) represents the i-th candidate target detection box, i.e. region proposal;
in the present target detection algorithm, Pi is obtained by applying the K-means algorithm to all ground-truth boxes of the real training set;
in Pi, Px represents the x coordinate of the center point of the candidate target box in the original image, and Py represents the y coordinate of the center point of the candidate target box in the original image;
Gi represents the four-dimensional ground-truth feature, whose meaning is analogous to Pi; the mapping relation between Pi and Gi is then f(Pi) = Ĝi ≈ Gi;
the mapping relation indicates that a mapping function f is to be found such that, for an input Pi, the obtained Ĝi approaches Gi as closely as possible;
The regression of the bounding box is mapped using a translation transformation and a scale transformation. The translation transformation is calculated as:
Ĝx = Pw * dx(P) + Px, Ĝy = Ph * dy(P) + Py;
the scale transformation is calculated as:
Ĝw = Pw * exp(dw(P)), Ĝh = Ph * exp(dh(P));
wherein d*(P) stands for the four transformations dx(P), dy(P), dw(P), dh(P); the features of the image are input into a linear function d*(P) = w*ᵀφ(P) to solve these 4 transformations.
The weights are solved by the least-squares method or a gradient-descent algorithm, with the formula:
w* = argmin over ŵ* of Σi (t*i − ŵ*ᵀφ(Pi))² + λ‖ŵ*‖²,
wherein: tx = (Gx − Px)/Pw, ty = (Gy − Py)/Ph, tw = log(Gw/Pw), th = log(Gh/Ph).
For the predicted bounding box, (cx, cy) is the offset of the grid cell from the upper-left corner of the picture, each cell having side length 1, and σ(tx), σ(ty) are the offsets between 0 and 1 output by the sigmoid, so the predicted (x, y, w, h) of the target box is:
bx = σ(tx) + cx, by = σ(ty) + cy, bw = pw * exp(tw), bh = ph * exp(th).
The target set in the picture is then generated, containing the following information: the coordinates (x1, y1) of the upper left corner and (x2, y2) of the lower right corner of each target, calculated as:
x1 = bx − bw/2, y1 = by − bh/2, x2 = bx + bw/2, y2 = by + bh/2.
The target set information further indicates the target class of each target box.
Compared with the prior art, the invention has the advantages and positive effects that:
the method for identifying the position of the employment behavior combines the prior advanced target detection algorithm and combines the target detection algorithm and the behavior judgment algorithm by means of the embedded mobile platform, so that the method is high in precision and speed, can accurately position the position of the abnormal behavior, and provides a good evaluation standard for behavior judgment.
The behavior recognition method provided by the invention needs neither neural network training on sample data collected for a specific behavior nor preprocessing of the image background; it can take the whole image as the target area and, after detecting the targets in an image, judges whether abnormal behavior occurs from the relations between the targets' detection boxes, accurately locating the abnormal behavior and sending an alarm when it occurs.
Drawings
FIG. 1 is a schematic structural diagram of the present invention.
In the figure: 1. video frame subsystem; 2. JPG data set; 3. target detection subsystem; 4. target set; 5. target filtering subsystem; 6. new target set; 7. sleep discrimination subsystem; 8. mobile-phone-playing discrimination subsystem; 9. off-duty discrimination subsystem; 10. chat discrimination subsystem; 11. abnormality alarm subsystem.
Detailed Description
An embodiment of the present invention will be described in detail below with reference to the accompanying drawings, but it should be understood that the scope of the present invention is not limited to the embodiment.
As shown in fig. 1, the present invention includes a video frame subsystem 1, wherein the video frame subsystem 1 is electrically connected to a target detection subsystem 3, the target detection subsystem 3 is electrically connected to a target filtering subsystem 5, the target filtering subsystem 5 is electrically connected to a sleep discrimination subsystem 7, a play mobile phone discrimination subsystem 8, an off-duty discrimination subsystem 9 and a chat discrimination subsystem 10, respectively, and the sleep discrimination subsystem 7, the play mobile phone discrimination subsystem 8, the off-duty discrimination subsystem 9 and the chat discrimination subsystem 10 are electrically connected to an abnormality alarm subsystem 11, respectively.
A use method of a counter assistant job monitoring device comprises the following steps:
the method comprises the following steps: the video frame subsystem 1 processes video frame data into a JPG data set 2, and the video frame subsystem 1 inputs the JPG data set 2 into the target detection subsystem 3;
step two: the target detection subsystem 3 generates target sets 4 in the picture according to the JPG data set 2, and each target set 4 comprises the following information: the coordinates (x1, y1) of the upper left corner and the coordinates (x2, y2) of the lower right corner of each target, and a target class designation;
step three: the target detection subsystem 3 inputs the target set 4 into the target filtering subsystem 5, which filters the target set 4 to output a new target set 6. The area formula (y2 − y1) * (x2 − x1) is used to calculate the area of each target box. Because the device mainly detects abnormal behaviors of close-range targets, distant targets appearing in the picture background must be filtered out to reduce their influence on the detection result; according to the picture resolution of 640x480, a threshold is set for each target class, for example 7000 for the class "person" and 5000 for the class "head";
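The step-three filtering rule can be sketched in Python; the dictionary layout, helper names, and sample boxes below are illustrative assumptions, not part of the patent, while the 7000/5000 thresholds are the example values from the text:

```python
# Area-based filtering of distant targets (step three). Thresholds are the
# example values given for a 640x480 frame.
AREA_THRESHOLDS = {"person": 7000, "head": 5000}

def box_area(target):
    """Area of a target box given corners (x1, y1), (x2, y2)."""
    x1, y1, x2, y2 = target["coords"]
    return (y2 - y1) * (x2 - x1)

def filter_targets(targets):
    """Keep only targets whose box area meets the per-class threshold."""
    kept = []
    for t in targets:
        thresh = AREA_THRESHOLDS.get(t["cls"], 0)
        if box_area(t) >= thresh:
            kept.append(t)
    return kept

targets = [
    {"cls": "person", "coords": (100, 50, 260, 300)},  # area 40000 -> kept
    {"cls": "person", "coords": (500, 10, 540, 60)},   # area 2000 -> filtered out
]
new_target_set = filter_targets(targets)
```

A small background box of class "person" (2000 px²) is dropped while the close-range one (40000 px²) survives into the new target set.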
step four: the target filtering subsystem 5 inputs the new target set 6 into the sleep discrimination subsystem 7. For the class "person", the sleep discrimination subsystem 7 uses the upper-left coordinates (x1person, y1person) and lower-right coordinates (x2person, y2person) from the target set 6 to calculate the width and height of the person box: wperson = x2person − x1person, hperson = y2person − y1person. From the ratio of wperson to hperson it judges whether the target is in a sitting state or in a possibly prone, sleep-like state; if sitting, the image is judged to show no sleeping behavior, and if sleep-like, whether the target is actually sleeping is judged further by the following rule: divide the width wperson of the class "person" into ten equal parts and compute the x coordinates at the two-tenths and eight-tenths points, xLeftperson = x1person + 0.2 * wperson and xRightperson = x1person + 0.8 * wperson; compute xLeftHead and xRightHead of the class "head" in the same way; then compare xLeftHead with xLeftperson, and xRightHead with xRightperson: when x1person < xLeftHead < xLeftperson or xRightperson < xRightHead < x2person, the target is judged to be in a sleeping state;
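A minimal Python sketch of the step-four sleep rule. The width-to-height ratio that separates sitting from a prone posture is not given numerically in the text, so the 1.0 threshold below is an assumed placeholder; function and variable names are ours:

```python
def is_sleeping(person, head, ratio_thresh=1.0):
    """Sleep rule: a wide, short person box suggests a prone posture; the
    head's 0.2/0.8 width points are then checked against the person box.
    Boxes are (x1, y1, x2, y2) tuples."""
    x1p, y1p, x2p, y2p = person
    x1h, _, x2h, _ = head
    w, h = x2p - x1p, y2p - y1p
    # Taller than wide -> sitting -> no sleeping behavior.
    # ratio_thresh = 1.0 is an assumption, not a value from the patent.
    if w / h < ratio_thresh:
        return False
    x_left_p = x1p + 0.2 * w
    x_right_p = x1p + 0.8 * w
    wh = x2h - x1h
    x_left_h = x1h + 0.2 * wh
    x_right_h = x1h + 0.8 * wh
    # Head shifted toward either edge of the person box -> sleeping state.
    return (x1p < x_left_h < x_left_p) or (x_right_p < x_right_h < x2p)
```

For a lying person box (0, 0, 200, 80) with the head near the left edge, the rule fires; with the head centred, it does not.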
step five: the target filtering subsystem 5 inputs the new target set 6 into the mobile-phone-playing discrimination subsystem 8, which first judges, from the target classes in the target set 6, whether a person, a hand and a mobile phone are present simultaneously; if so, the following rule applies: let the two corner coordinates of the person be (x1person, y1person), (x2person, y2person) and the two corner coordinates of the mobile phone be (x1phone, y1phone), (x2phone, y2phone);
Step 5.1: calculate the area of the mobile phone, Areaphone = (y2phone − y1phone) * (x2phone − x1phone);
Step 5.2: calculate the intersection area of the class "person" and the mobile phone. The upper-left and lower-right coordinates (x1, y1), (x2, y2) of the intersection region are x1 = max(x1person, x1phone), y1 = max(y1person, y1phone), x2 = min(x2person, x2phone), y2 = min(y2person, y2phone). Judge the signs of x2 − x1 and y2 − y1: if either value is negative, the class "person" and the mobile phone do not intersect and the state is judged normal; otherwise calculate the intersection area Areaiou = (y2 − y1) * (x2 − x1) and the proportion p = Areaiou / Areaphone of the intersection area to the mobile phone area. Set the threshold thresh = 0.2; when p > thresh, the target is judged to be in a phone-playing state;
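Steps 5.1 and 5.2 amount to computing the fraction of the phone box covered by its intersection with the person box. A hedged Python sketch (function name and tuple layout are assumptions; the 0.2 threshold is from the text):

```python
def playing_phone(person, phone, thresh=0.2):
    """Phone-playing rule: fraction of the phone box that lies inside the
    person box must exceed thresh. Boxes are (x1, y1, x2, y2) tuples."""
    x1p, y1p, x2p, y2p = person
    x1q, y1q, x2q, y2q = phone
    area_phone = (y2q - y1q) * (x2q - x1q)
    # Corners of the intersection rectangle.
    x1 = max(x1p, x1q); y1 = max(y1p, y1q)
    x2 = min(x2p, x2q); y2 = min(y2p, y2q)
    if x2 - x1 <= 0 or y2 - y1 <= 0:
        return False  # no intersection: normal state
    p = (y2 - y1) * (x2 - x1) / area_phone
    return p > thresh
```

With person (0, 0, 100, 100) and phone (50, 50, 150, 150) the covered fraction is 0.25, exceeding the 0.2 threshold.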
step six: the target filtering subsystem 5 inputs the new target set 6 into the off-duty discrimination subsystem 9, which first judges, from the classes in the target set 6, whether a target of class "person" is detected. If no person is present, the following rule applies: detect n consecutive times; if the no-person state occurs more than n/2 times, the person is judged to be off duty, otherwise the state is judged normal;
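The n-detection majority vote of step six can be sketched as follows (names are illustrative; the same pattern also fits the chat rule of step seven):

```python
def off_duty(person_seen_per_frame, n):
    """Off-duty rule: over n consecutive detections, declare off-duty if the
    no-person state occurs more than n/2 times.
    person_seen_per_frame: booleans, True = a class "person" was detected."""
    absent = sum(1 for seen in person_seen_per_frame[:n] if not seen)
    return absent > n / 2
```

Seven empty frames out of ten trigger the off-duty judgment; four do not.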
step seven: the target filtering subsystem 5 inputs the new target set 6 into the chat discrimination subsystem 10, which judges, from the target classes in the target set 6, whether targets of class "person" and "head" are detected. If several persons and heads appear simultaneously, the image features of the detected heads and persons must be compared for similarity before judgment, to screen whether two persons really appear in the picture foreground rather than one of them appearing in the background; a quantity representing the image-feature similarity result is defined for this purpose, then:
if two persons are present in the picture foreground, judgment proceeds by the following rule: detect n consecutive times; if the multiple-persons-and-heads state occurs more than n/2 times, chatting is judged, otherwise the state is judged normal;
step eight: if any of the sleep discrimination subsystem 7, the mobile-phone-playing discrimination subsystem 8, the off-duty discrimination subsystem 9 or the chat discrimination subsystem 10 detects an abnormal behavior state, the abnormality alarm subsystem 11 transmits alarm information to the upper application end; the upper application end transmits the alarm information to a web server, where an administrator logs in to view it.
The target detection subsystem 3 needs to detect the whole picture, classify and locate the targets in the picture, and the process is as follows:
the principle of target positioning: first, define the set of input pictures as {(Pi, Gi)}, i = 1, ..., N,
wherein Pi = (Px, Py, Pw, Ph) represents the i-th candidate target detection box, i.e. region proposal;
in the present target detection algorithm, Pi is obtained by applying the K-means algorithm to all ground-truth boxes of the real training set;
in Pi, Px represents the x coordinate of the center point of the candidate target box in the original image, and Py represents the y coordinate of the center point of the candidate target box in the original image;
Gi represents the four-dimensional ground-truth feature, whose meaning is analogous to Pi; the mapping relation between Pi and Gi is then f(Pi) = Ĝi ≈ Gi;
the mapping relation indicates that a mapping function f is to be found such that, for an input Pi, the obtained Ĝi approaches Gi as closely as possible;
The regression of the bounding box is mapped using a translation transformation and a scale transformation. The translation transformation is calculated as:
Ĝx = Pw * dx(P) + Px, Ĝy = Ph * dy(P) + Py;
the scale transformation is calculated as:
Ĝw = Pw * exp(dw(P)), Ĝh = Ph * exp(dh(P));
wherein d*(P) stands for the four transformations dx(P), dy(P), dw(P), dh(P); the features of the image are input into a linear function d*(P) = w*ᵀφ(P) to solve these 4 transformations.
The weights are solved by the least-squares method or a gradient-descent algorithm, with the formula:
w* = argmin over ŵ* of Σi (t*i − ŵ*ᵀφ(Pi))² + λ‖ŵ*‖²,
wherein: tx = (Gx − Px)/Pw, ty = (Gy − Py)/Ph, tw = log(Gw/Pw), th = log(Gh/Ph).
For the predicted bounding box, (cx, cy) is the offset of the grid cell from the upper-left corner of the picture, each cell having side length 1, and σ(tx), σ(ty) are the offsets between 0 and 1 output by the sigmoid, so the predicted (x, y, w, h) of the target box is:
bx = σ(tx) + cx, by = σ(ty) + cy, bw = pw * exp(tw), bh = ph * exp(th).
The target set 4 in the picture is then generated, containing the following information: the coordinates (x1, y1) of the upper left corner and (x2, y2) of the lower right corner of each target, calculated as:
x1 = bx − bw/2, y1 = by − bh/2, x2 = bx + bw/2, y2 = by + bh/2.
further the object set information can indicate the object class of each object box.
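The box decoding step described above can be sketched as follows. This is an illustration only, not part of the patent; the function name and argument order are hypothetical.

```python
import math

def decode_box(tx, ty, tw, th, cx, cy, pw, ph):
    """Decode raw outputs (tx, ty, tw, th) into corner coordinates using the
    sigmoid-offset / exponential-scale scheme described above. (cx, cy) is the
    grid cell's upper-left corner; (pw, ph) is the anchor prior size."""
    sigmoid = lambda v: 1.0 / (1.0 + math.exp(-v))
    bx = sigmoid(tx) + cx          # predicted center x within the picture grid
    by = sigmoid(ty) + cy          # predicted center y
    bw = pw * math.exp(tw)         # width scaled from the anchor prior
    bh = ph * math.exp(th)         # height scaled from the anchor prior
    # convert center/size form to upper-left and lower-right corners
    return (bx - bw / 2, by - bh / 2, bx + bw / 2, by + bh / 2)
```

With zero offsets the center lands in the middle of the cell and the box keeps the anchor size, e.g. `decode_box(0, 0, 0, 0, 3, 4, 2, 2)` yields corners (2.5, 3.5) and (4.5, 5.5).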
The working process of the invention is as follows: the video frame subsystem 1 processes the video frame data into a JPG data set 2 and inputs the JPG data set 2 to the target detection subsystem 3.
The target detection subsystem 3 generates the target sets 4 in the picture from the JPG data set 2; each target set 4 contains the following information: the upper-left corner coordinates (x1, y1) and the lower-right corner coordinates (x2, y2) of each target, and a target class designation.
The target detection subsystem 3 detects, classifies and locates the targets in the whole picture according to the target positioning principle described above.
The target detection subsystem 3 inputs the target set 4 into the target filtering subsystem 5, and the target filtering subsystem 5 filters the target set 4 to output a new target set 6. The area of each target box is calculated with the area formula (y2 - y1) * (x2 - x1). Because the device mainly detects abnormal behaviors of close-range targets, distant targets appearing in the picture background must be filtered out to reduce their influence on the detection result. Based on the picture resolution of 640x480, a threshold is set for each target class: for example, 7000 for the class person and 5000 for the class head.
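The area-based filtering step can be sketched as below; this is an illustrative sketch, not the patent's implementation, and the target-tuple layout is an assumption.

```python
def filter_targets(targets, thresholds):
    """Drop distant background detections whose box area falls below the
    per-class minimum. Each target is a tuple (cls, x1, y1, x2, y2);
    thresholds maps a class name to its minimum area in pixels."""
    kept = []
    for cls, x1, y1, x2, y2 in targets:
        area = (y2 - y1) * (x2 - x1)       # area formula from the text
        if area >= thresholds.get(cls, 0): # unknown classes pass through
            kept.append((cls, x1, y1, x2, y2))
    return kept
```

With the thresholds from the text, `filter_targets(boxes, {"person": 7000, "head": 5000})` keeps a 100x100 person box (area 10000) and discards a 50x50 one (area 2500).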
The target filtering subsystem 5 inputs the new target set 6 into the sleep discrimination subsystem 7. From the upper-left coordinates (x1person, y1person) and the lower-right coordinates (x2person, y2person) of each class-person box in the target set 6, the sleep discrimination subsystem 7 calculates the width and height of the person box: w_person = x2person - x1person, h_person = y2person - y1person. According to the proportional relation between w_person and h_person, it judges whether the target is in a sitting state or a possibly prone sleeping posture; if sitting, the image is judged as non-sleeping behavior; if prone, the sleeping state is further judged by the following rule: divide the width w_person of the class-person box into ten equal parts and compute the x coordinates of the two-tenths and eight-tenths points, letting x_Leftperson = x1person + 0.2 * w_person and x_Rightperson = x1person + 0.8 * w_person; calculate x_LeftHead and x_RightHead for the class-head box by the same method; then compare x_LeftHead with x_Leftperson, and x_RightHead with x_Rightperson: when x1person < x_LeftHead < x_Leftperson or x_Rightperson < x_RightHead < x2person, the target is judged to be in a sleeping state.
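The head-position rule above can be written out as a short predicate. This is a minimal sketch under the stated rule only; the box format and function name are assumptions, and the preliminary sitting/prone ratio check is omitted.

```python
def head_in_outer_fifth(person_box, head_box):
    """Sleep-state rule from the text: compute the 0.2/0.8 width points of
    the person box and the head box, and check whether the head's points
    fall into the outer fifths of the person box. Boxes are (x1, y1, x2, y2)."""
    x1p, _, x2p, _ = person_box
    x1h, _, x2h, _ = head_box
    wp, wh = x2p - x1p, x2h - x1h
    x_left_p, x_right_p = x1p + 0.2 * wp, x1p + 0.8 * wp   # person 2/10 and 8/10 points
    x_left_h, x_right_h = x1h + 0.2 * wh, x1h + 0.8 * wh   # head 2/10 and 8/10 points
    # sleeping if the head sits in the left or right outer fifth of the person box
    return (x1p < x_left_h < x_left_p) or (x_right_p < x_right_h < x2p)
```

For a person box (0, 0, 100, 50), a head at the left edge (0, 0, 20, 20) satisfies the rule, while a centered head (40, 0, 60, 20) does not.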
The target filtering subsystem 5 inputs the new target set 6 into the mobile phone playing judging subsystem 8. The mobile phone playing judging subsystem 8 first checks, from the target classes in the target set 6, whether a person, a hand and a mobile phone appear simultaneously; if all three classes are present, the judgment rule is as follows: denote the two coordinates of the person as (x1person, y1person), (x2person, y2person) and the two coordinates of the mobile phone as (x1phone, y1phone), (x2phone, y2phone). Calculate the area of the mobile phone, Area_phone = (y2phone - y1phone) * (x2phone - x1phone). Then calculate the intersection area of the class person and the mobile phone: the upper-left and lower-right coordinates (x1, y1), (x2, y2) of the intersection region are x1 = max(x1person, x1phone), y1 = max(y1person, y1phone), x2 = min(x2person, x2phone), y2 = min(y2person, y2phone). Check the signs of x2 - x1 and y2 - y1: if either value is negative, the person and the mobile phone do not intersect and the state is judged as normal; otherwise calculate the intersection area Area_iou = (y2 - y1) * (x2 - x1) and the proportion p = Area_iou / Area_phone of the intersection area to the phone area. If p > thresh, the target is judged to be playing with the mobile phone.
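The intersection-ratio test can be sketched as follows; this is an illustration of the rule as stated, not the patent's code, and the default `thresh=0.2` is taken from the claims.

```python
def phone_in_use(person_box, phone_box, thresh=0.2):
    """Judge phone playing: intersect the person and phone boxes and compare
    the overlap to the phone's own area. Boxes are (x1, y1, x2, y2)."""
    x1 = max(person_box[0], phone_box[0])
    y1 = max(person_box[1], phone_box[1])
    x2 = min(person_box[2], phone_box[2])
    y2 = min(person_box[3], phone_box[3])
    if x2 - x1 < 0 or y2 - y1 < 0:
        return False                      # boxes do not intersect: normal state
    area_phone = (phone_box[2] - phone_box[0]) * (phone_box[3] - phone_box[1])
    area_inter = (x2 - x1) * (y2 - y1)
    return area_inter / area_phone > thresh
```

A phone box fully inside the person box gives a ratio of 1.0 and is flagged; a disjoint phone box is judged normal.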
The target filtering subsystem 5 inputs the new target set 6 into the off-duty judging subsystem 9. The off-duty judging subsystem 9 first checks, from the target classes in the target set 6, whether any target of class person is detected; if no person is present, the judgment rule is applied: detect continuously n times, and if the no-person state occurs more than n/2 times, judge the state as off-duty; otherwise judge it as normal.
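The n-detection majority rule can be sketched as a simple vote over recent frames; the boolean-history representation is an assumption made for illustration.

```python
def off_duty(presence_history):
    """Majority vote over n consecutive detections, as described above:
    off-duty if the no-person state occurs in more than n/2 of the frames.
    presence_history is a list of booleans (True = a person was detected)."""
    n = len(presence_history)
    absent = sum(1 for present in presence_history if not present)
    return absent > n / 2
```

For n = 10, six absent frames trigger the off-duty judgment, four do not.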
The target filtering subsystem 5 inputs the new target set 6 into the chat judging subsystem 10. The chat judging subsystem 10 first checks, from the target classes in the target set 6, whether targets of class person and class head are detected. If several people and heads appear at the same time, the image features of the detected heads and people are compared for similarity before judging, and the image feature similarity result is used to screen whether two people truly appear in the picture foreground rather than in the background.
If two people are present in the picture foreground, the judgment rule is applied: detect continuously n times, and if the multi-person, multi-head state occurs more than n/2 times, judge the behavior as chatting; otherwise judge the state as normal.
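The similarity screening plus majority vote can be sketched as below. The patent does not specify its similarity measure; cosine similarity over externally extracted feature vectors and the 0.9 threshold are assumptions for illustration.

```python
import math

def cosine_similarity(f1, f2):
    """Compare two image feature vectors. Feature extraction is assumed to
    happen elsewhere; identical background duplicates score near 1.0."""
    dot = sum(a * b for a, b in zip(f1, f2))
    n1 = math.sqrt(sum(a * a for a in f1))
    n2 = math.sqrt(sum(b * b for b in f2))
    return dot / (n1 * n2)

def is_chatting(similarity, multi_person_count, n, sim_thresh=0.9):
    """Chat rule sketch: the two people are distinct (low feature similarity,
    i.e. genuinely in the foreground) and the multi-person state occurred in
    more than n/2 of n consecutive detections."""
    distinct = similarity < sim_thresh
    return distinct and multi_person_count > n / 2
```

Orthogonal feature vectors (similarity 0.0) with six multi-person frames out of ten would be judged as chatting under these assumptions.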
If any of the sleep judging subsystem 7, the mobile phone playing judging subsystem 8, the off-duty judging subsystem 9 and the chat judging subsystem 10 detects an abnormal behavior state, the abnormal alarm subsystem 11 transmits alarm information to the upper application end; the upper application end transmits the alarm information to a web server, where an administrator logs in to view it.
The above disclosure describes only specific embodiments of the present invention, but the invention is not limited thereto; any variation conceivable to those skilled in the art shall fall within the protection scope of the present invention.
Claims (3)
1. A counter assistant job-performing monitoring device comprises a video frame subsystem (1), and is characterized in that:
the video frame subsystem (1) is electrically connected with the target detection subsystem (3);
the target detection subsystem (3) is electrically connected with the target filtering subsystem (5);
the target filtering subsystem (5) is electrically connected with the sleeping discrimination subsystem (7), the mobile phone playing discrimination subsystem (8), the off-duty discrimination subsystem (9) and the chatting discrimination subsystem (10) respectively;
the sleep judging subsystem (7), the mobile phone playing judging subsystem (8), the off-duty judging subsystem (9) and the chat judging subsystem (10) are each electrically connected with the abnormal alarm subsystem (11).
2. A method of using a counter assistant job-performing monitoring device, characterized by comprising the following steps:
the method comprises the following steps: the video frame subsystem (1) processing video frame data into a JPG data set (2), the video frame subsystem (1) inputting the JPG data set (2) to the object detection subsystem (3);
step two: the target detection subsystem (3) generates the target sets (4) in the picture from the JPG data set (2); each target set (4) contains the following information: the upper-left corner coordinates (x1, y1) and the lower-right corner coordinates (x2, y2) of each target, and a target class designation;
step three: the target detection subsystem (3) inputs the target set (4) into the target filtering subsystem (5), and the target filtering subsystem (5) filters the target set (4) to output a new target set (6). The area of each target box is calculated with the area formula (y2 - y1) * (x2 - x1). Because the device mainly detects abnormal behaviors of close-range targets, distant targets appearing in the picture background must be filtered out to reduce their influence on the detection result. Based on the picture resolution of 640x480, a threshold is set for each target class: for example, 7000 for the class person and 5000 for the class head;
Step four: the target filtering subsystem (5) inputs the new target set (6) into the sleep discrimination subsystem (7). From the upper-left coordinates (x1person, y1person) and the lower-right coordinates (x2person, y2person) of each class-person box in the target set (6), the sleep discrimination subsystem (7) calculates the width and height of the person box: w_person = x2person - x1person, h_person = y2person - y1person. According to the proportional relation between w_person and h_person, it judges whether the target is in a sitting state or a possibly prone sleeping posture; if sitting, the image is judged as non-sleeping behavior; if prone, the sleeping state is further judged by the following rule: divide the width w_person of the class-person box into ten equal parts and compute the x coordinates of the two-tenths and eight-tenths points, letting x_Leftperson = x1person + 0.2 * w_person and x_Rightperson = x1person + 0.8 * w_person; calculate x_LeftHead and x_RightHead for the class-head box by the same method; then compare x_LeftHead with x_Leftperson, and x_RightHead with x_Rightperson: when x1person < x_LeftHead < x_Leftperson or x_Rightperson < x_RightHead < x2person, the target is judged to be in a sleeping state;
step five: the target filtering subsystem (5) inputs the new target set (6) into the mobile phone playing judging subsystem (8). The mobile phone playing judging subsystem (8) first checks, from the target classes in the target set (6), whether a person, a hand and a mobile phone appear simultaneously; if all three classes are present, the judgment rule is as follows: denote the two coordinates of the person as (x1person, y1person), (x2person, y2person) and the two coordinates of the mobile phone as (x1phone, y1phone), (x2phone, y2phone);
step five-one: calculate the area of the mobile phone, Area_phone = (y2phone - y1phone) * (x2phone - x1phone);
step five-two: calculate the intersection area of the class person and the mobile phone. The upper-left and lower-right coordinates (x1, y1), (x2, y2) of the intersection region are x1 = max(x1person, x1phone), y1 = max(y1person, y1phone), x2 = min(x2person, x2phone), y2 = min(y2person, y2phone). Check the signs of x2 - x1 and y2 - y1: if either value is negative, the person and the mobile phone do not intersect and the state is judged as normal; otherwise calculate the intersection area Area_iou = (y2 - y1) * (x2 - x1) and the proportion p = Area_iou / Area_phone of the intersection area to the phone area. Set the threshold thresh to 0.2; when p > thresh, the target is judged to be playing with the mobile phone;
step six: the target filtering subsystem (5) inputs the new target set (6) into the off-duty judging subsystem (9). The off-duty judging subsystem (9) first checks, from the target classes in the target set (6), whether any target of class person is detected; if no person is present, the judgment rule is applied: detect continuously n times, and if the no-person state occurs more than n/2 times, judge the state as off-duty; otherwise judge it as normal;
step seven: the target filtering subsystem (5) inputs the new target set (6) into the chat judging subsystem (10). The chat judging subsystem (10) first checks, from the target classes in the target set (6), whether targets of class person and class head are detected. If several people and heads appear at the same time, the image features of the detected heads and people are compared for similarity before judging, and the image feature similarity result is used to screen whether two people truly appear in the picture foreground rather than in the background;
if two people are present in the picture foreground, the judgment rule is applied: detect continuously n times, and if the multi-person, multi-head state occurs more than n/2 times, judge the behavior as chatting; otherwise judge the state as normal;
step eight: if any of the sleep judging subsystem (7), the mobile phone playing judging subsystem (8), the off-duty judging subsystem (9) and the chat judging subsystem (10) detects an abnormal behavior state, the abnormal alarm subsystem (11) transmits alarm information to the upper application end; the upper application end transmits the alarm information to a web server, where an administrator logs in to view it.
3. The method of using the counter assistant job-performing monitoring device according to claim 2, wherein: the target detection subsystem (3) needs to detect the whole picture, classifying and locating the targets in it, and the process is as follows:
the principle of target positioning: first, define the set of input pictures as {(P_i, G_i)}, i = 1, ..., N,
wherein P_i = (P_x, P_y, P_w, P_h) represents the i-th predicted candidate target detection box, namely the region proposal;
in the present target detection algorithm, P_i is obtained by applying the K-means algorithm to all ground-truths of the real training set;
in P_i, P_x represents the x coordinate of the center point of the candidate target box in the original image, and P_y represents the y coordinate of that center point; P_w and P_h are the width and height of the box;
G_i = (G_x, G_y, G_w, G_h) represents the four-dimensional feature of the ground-truth, with the same meaning as P_i; the mapping relation between P_i and G_i is then:

    G^_i = f(P_i)

the mapping relation indicates that a mapping function f is to be found such that, for an input P_i, the obtained G^_i = f(P_i) approaches G_i as closely as possible;
the regression of the bounding box is mapped using a translation transformation and a scale transformation, the translation transformation being calculated as follows:

    t_x = (G_x - P_x) / P_w,  t_y = (G_y - P_y) / P_h

the scale transformation being calculated as follows:

    t_w = log(G_w / P_w),  t_h = log(G_h / P_h)

wherein the * in d_*(P) stands for one of x, y, w, h; the features φ(P) of the image are then input into a linear function d_*(P) = w_*^T φ(P) to solve for these 4 transformations;
the solution uses the least squares method or a gradient descent algorithm, with the formula:

    w_* = argmin_ŵ Σ_{i=1..N} ( t_*^i - ŵ^T φ(P_i) )² + λ‖ŵ‖²

wherein t_*^i is the regression target of box i defined by the two transformations above;
when defining the predicted bounding box, (c_x, c_y) is the position of the upper-left corner of the grid cell relative to the picture, the side length of each cell is 1, and σ(t_x), σ(t_y) are offsets between 0 and 1 output by the sigmoid function, so the predicted (x, y, w, h) of the target box are:

    b_x = σ(t_x) + c_x,  b_y = σ(t_y) + c_y,  b_w = p_w · e^{t_w},  b_h = p_h · e^{t_h}

the target set (4) in the picture is then generated, containing the following information: the upper-left corner coordinates (x1, y1) and the lower-right corner coordinates (x2, y2) of each target, calculated as follows:

    x1 = b_x - b_w/2,  y1 = b_y - b_h/2,  x2 = b_x + b_w/2,  y2 = b_y + b_h/2

further, the target set information indicates the target class of each target box.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201911384076.8A CN111079694A (en) | 2019-12-28 | 2019-12-28 | Counter assistant job function monitoring device and method |
Publications (1)
Publication Number | Publication Date |
---|---|
CN111079694A true CN111079694A (en) | 2020-04-28 |
Family
ID=70319174
Country Status (1)
Country | Link |
---|---|
CN (1) | CN111079694A (en) |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010055205A1 (en) * | 2008-11-11 | 2010-05-20 | Reijo Kortesalmi | Method, system and computer program for monitoring a person |
CN104021653A (en) * | 2014-06-12 | 2014-09-03 | 孔秀梅 | Crossing on-duty status video analysis and warning system and method |
CN205899776U (en) * | 2016-07-05 | 2017-01-18 | 梅震 | Monitored control system with correct function |
CN106941602A (en) * | 2017-03-07 | 2017-07-11 | 中国铁道科学研究院 | Trainman's Activity recognition method, apparatus and system |
US20170345181A1 (en) * | 2016-05-27 | 2017-11-30 | Beijing Kuangshi Technology Co., Ltd. | Video monitoring method and video monitoring system |
CN107527045A (en) * | 2017-09-19 | 2017-12-29 | 桂林安维科技有限公司 | A kind of human body behavior event real-time analysis method towards multi-channel video |
CN107657626A (en) * | 2016-07-25 | 2018-02-02 | 浙江宇视科技有限公司 | The detection method and device of a kind of moving target |
CN108304802A (en) * | 2018-01-30 | 2018-07-20 | 华中科技大学 | A kind of Quick filter system towards extensive video analysis |
CN109657626A (en) * | 2018-12-23 | 2019-04-19 | 广东腾晟信息科技有限公司 | A kind of analysis method by procedure identification human body behavior |
CN109726652A (en) * | 2018-12-19 | 2019-05-07 | 杭州叙简科技股份有限公司 | A method of based on convolutional neural networks detection operator on duty's sleep behavior |
CN110008867A (en) * | 2019-03-25 | 2019-07-12 | 五邑大学 | A kind of method for early warning based on personage's abnormal behaviour, device and storage medium |
CN110084831A (en) * | 2019-04-23 | 2019-08-02 | 江南大学 | Based on the more Bernoulli Jacob's video multi-target detecting and tracking methods of YOLOv3 |
Non-Patent Citations (2)
Title |
---|
于江江;夏锋;: "基于监控视频的前景目标提取" * |
陆芳;魏李婷;: "课堂学习状态智能分析系统的构建及其应用" * |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112529851A (en) * | 2020-11-27 | 2021-03-19 | 中冶赛迪重庆信息技术有限公司 | Method, system, terminal and medium for determining state of hydraulic pipe |
CN112529851B (en) * | 2020-11-27 | 2023-07-18 | 中冶赛迪信息技术(重庆)有限公司 | Hydraulic pipe state determining method, system, terminal and medium |
CN113139530A (en) * | 2021-06-21 | 2021-07-20 | 城云科技(中国)有限公司 | Method and device for detecting sleep post behavior and electronic equipment thereof |
CN113139530B (en) * | 2021-06-21 | 2021-09-03 | 城云科技(中国)有限公司 | Method and device for detecting sleep post behavior and electronic equipment thereof |
CN113807216A (en) * | 2021-09-01 | 2021-12-17 | 重庆中科云从科技有限公司 | Monitoring early warning method, equipment and computer storage medium |
CN113792688A (en) * | 2021-09-18 | 2021-12-14 | 北京市商汤科技开发有限公司 | Business state analysis method and device, electronic equipment and storage medium |
WO2023040233A1 (en) * | 2021-09-18 | 2023-03-23 | 上海商汤智能科技有限公司 | Service state analysis method and apparatus, and electronic device, storage medium and computer program product |
CN113989499A (en) * | 2021-12-27 | 2022-01-28 | 智洋创新科技股份有限公司 | Intelligent alarm method in bank scene based on artificial intelligence |
CN113989499B (en) * | 2021-12-27 | 2022-03-29 | 智洋创新科技股份有限公司 | Intelligent alarm method in bank scene based on artificial intelligence |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |