CN113628251B - Smart hotel terminal monitoring method - Google Patents

Smart hotel terminal monitoring method Download PDF

Info

Publication number
CN113628251B
CN113628251B (application CN202111180117.9A)
Authority
CN
China
Prior art keywords
target object
moving target
moving
pixel point
monitoring area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111180117.9A
Other languages
Chinese (zh)
Other versions
CN113628251A (en)
Inventor
方兴
杨永斌
闫振宇
饶翔
苏东华
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Zhongke Goldhorse Technology Co ltd
Original Assignee
Beijing Zhongke Goldhorse Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Zhongke Goldhorse Technology Co ltd filed Critical Beijing Zhongke Goldhorse Technology Co ltd
Priority to CN202111180117.9A priority Critical patent/CN113628251B/en
Publication of CN113628251A publication Critical patent/CN113628251A/en
Application granted granted Critical
Publication of CN113628251B publication Critical patent/CN113628251B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • G06T7/246Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/248Analysis of motion using feature-based methods, e.g. the tracking of corners or segments involving reference images or patches
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10016Video; Image sequence
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30241Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Multimedia (AREA)
  • Closed-Circuit Television Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention provides a smart hotel terminal monitoring method. Video is collected at different angles from a monitoring area by a plurality of cameras arranged at different terminal positions in the same monitoring area; moving target object detection is performed on the collected video images, and the foreground pixel points of the moving target object are extracted. A boundary curve of the moving target object is then obtained. A moving target object template is established from the boundary curve, the template is updated according to the cameras at different viewing angles and different terminal positions, the moving target object is matched and tracked, and a fusion track MI of the moving target object is generated; the moving direction is predicted, and a plurality of cameras in adjacent areas along the moving direction are allocated in preparation for tracking. Finally, the behavior of the moving target object is judged from its moving route in each monitoring area; if the behavior is abnormal, a video picture of the abnormal behavior is stored as evidence, and an alarm is simultaneously sent to an administrator in the background.

Description

Smart hotel terminal monitoring method
Technical Field
The invention relates to the field of business management, in particular to a smart hotel terminal monitoring method.
Background
As a comprehensive system with strong precaution capability, terminal monitoring has always occupied an important position in the field of smart hotel monitoring. Social development and technological progress have steadily widened the range of applications for terminal monitoring video, and the continuous cross-fertilization of terminal video monitoring networks with other disciplines keeps giving video monitoring a new definition.
The functions of a video monitoring system depend to a great extent on three technologies: communication, embedded systems, and image processing. In recent years, with the continuous development of these three technologies, the functions of video monitoring systems have shown a diversified trend. As more and more industries pursue low-cost, refined management, a video monitoring system must not only meet functional and technical requirements but also treat economy, practicability, and stability as important measures of its quality. In the prior art, smart hotel terminal monitoring systems still have the following problems: the front-end processor of the video monitoring system has poor real-time performance and excessive power consumption; in video information processing, when the moving target is extracted and segmented, illumination can change the background so that the disturbed background is classified as a moving target; during tracking, if the moving target moves only a slight distance, traditional methods easily lose the target; and feature extraction for behavior analysis of the moving target is not distinctive, which is a bottleneck of behavior recognition.
For example, patent document CN101179707A proposes a wireless-network video image multi-view cooperative target tracking and measuring method, which realizes target measurement in a single wireless video image monitoring node by a dynamic background construction method to obtain the minimum rectangular boundary containing only the target; through cooperation among nodes, a progressive distributed data fusion method fuses the measurement results of all wireless video image monitoring nodes, realizing cooperative positioning measurement of the moving target. However, the cooperation process of that scheme adopts a multi-parameter evaluation method based on energy entropy and Mahalanobis distance (covering energy consumption, residual energy, information effectiveness, node characteristics, information feedback, and the like), and while it satisfies function and technology it neglects economy, practicability, and stability in application.
For another example, patent document US2009324010A1 proposes an automatic tracking and recognition system and method controlled by a neural network, comprising a fixed-field-of-view acquisition module, a full-function variable-field-of-view acquisition module, a video image recognition algorithm module, a neural network control module, a suspicious target tracking module, a database comparison and alarm judgment module, a monitoring feature recording and rule setting module, a light monitoring module, a backlight module, an alarm output/display/storage module, and a safety monitoring sensor. However, that scheme relies on neural network control, is suited to coarse tracking and recognition over large flows of people and vehicles, and its feature extraction for behavior analysis of moving targets is not distinctive.
Disclosure of Invention
In order to solve the technical problem, the invention provides a smart hotel terminal monitoring method, which comprises the following steps:
the method comprises the following steps that firstly, video collection is carried out on a monitoring area through a plurality of cameras with different visual angles, which are arranged at different terminal positions in the same monitoring area, moving target object detection is carried out on collected video images, and foreground pixel points of the moving target object are extracted;
combining all foreground pixel points of the moving target object to obtain a boundary curve of the moving target object;
establishing a moving target object template by using the boundary curve, updating the moving target object template according to cameras with different viewing angles and different terminal positions, matching and tracking the moving target object, and performing information fusion by using video image information acquired by a plurality of cameras to generate a fusion track MI of the moving target object;
step four, predicting the moving direction according to the fusion track MI of the moving target object, and allocating a plurality of cameras of adjacent areas in the moving direction to perform tracking preparation;
and step five, finally judging the behaviors of the moving target object according to the moving route of the moving target object in each monitoring area, comparing the behaviors with the defined behavior modes of the behavior definition library, determining whether the behaviors are normal behaviors, if the behaviors are abnormal behaviors, storing a video picture of the abnormal behaviors of the moving target object as evidence, and simultaneously sending an alarm to an administrator in a background.
Further, in the first step, a moving target object detection algorithm based on mixed Gaussian foreground modeling is adopted: the absolute value of the difference between the pixel value I_t of the current pixel point and the mean μ_{i,t−1} of each background Gaussian distribution is compared with D times the standard deviation σ_{i,t−1} of that distribution, and the judgment formula for foreground pixel points is:

|I_t − μ_{i,t−1}| > D · σ_{i,t−1} (1);

wherein t represents the current frame, t−1 represents the previous frame, and i represents the current pixel point;

if the absolute value is larger than D times the distribution standard deviation, the pixel point is a foreground pixel point of the moving target object; otherwise, it is a background pixel point.
Further, for Gaussian distributions where color is present, the foreground pixel points are determined according to formula (2) or formula (3) (the original equation images are not reproduced here), wherein T_1 and T_2 are thresholds; if the pixel value I_t of the current pixel point satisfies either formula (2) or formula (3), the current pixel point is judged to be a motion foreground pixel point.
Further, in the second step, the foreground pixel points of the moving target object extracted in the first step are combined to obtain the outline of the moving target object, and the outline is subjected to polygon fitting to obtain a boundary curve.
Further, the polygon fitting specifically comprises:

assigning a weight to each pixel point P(i) on the contour, the weight being the chord height C(P(i)) of the pixel point P(i); the pixel points P(i) whose chord height is greater than a threshold T_C are retained, and the resulting point set is P = {P_1, P_2, …, P_m}, where m is the number of pixel points after polygon fitting.
Further, in step three, the process of updating the moving target object template is as follows:

in the same monitoring area, at time k−1, the moving target object template of the current camera is established and the state vector of the moving target object is set as X_{k−1}; at time k, the moving target object moves to the next camera, and its state vector is X_k. The motion state of the moving target object is then calculated according to the following formula:

X_k = A·X_{k−1} + B·U_{k−1} + W_{k−1} (4);

wherein A is the state transition matrix, B is the control matrix, and U_{k−1} and W_{k−1} are the change in distance and the change in angle between the next camera and the current camera; the moving target object template is updated according to formula (4).
Further, in the third step, after the moving target object leaves the monitoring area, in the monitoring area it has just left, the image coordinates in the monitoring pictures captured by the cameras at multiple positions are converted into three-dimensional coordinates in the world coordinate system, so as to obtain the fusion track MI of the moving target object in that monitoring area.
Further, in the fourth step, the fusion trajectory MI is divided into a plurality of short line segments, the moving direction of the moving target object on each short line segment is calculated, Dx and Dy are respectively set as the difference between the x coordinate and the y coordinate of the two end points of each short line segment, and the calculation formula of the moving direction orientation (x, y) of the moving target object is as follows:
orientation(x,y)=arctan(Dy(x,y)/Dx(x,y)) (5);
orientation(x, y) is the moving direction of the moving target object on a short line segment; the moving direction of the moving target object on each short line segment is predicted through orientation(x, y), and the moving directions on all the short line segments are combined to form the overall moving route of the moving target object along the fusion track MI, so that the plurality of cameras in the adjacent monitoring areas along the moving direction are linked to continue monitoring the moving route of the moving target object in multiple dimensions.
Further, the gradient correlation at the boundary of the image in the monitoring area is calculated, and the moving direction of the moving target object is accurately predicted by limiting the gradient amplitude and filtering abnormal values;

the gradient correlation is calculated according to formula (6) (the original equation image is not reproduced here), wherein C(u) represents the gradient correlation, g_1(u) and g_2(u) represent the gradient functions of the gray levels of two adjacent image blocks at the boundary of the image, u represents the gray value of an image block, and * represents the complex conjugate.
Further, in the fifth step, if the behavior is manually determined to be abnormal and has not been defined, the abnormal behavior is saved as a sample in the behavior definition library.
Advantageous effects:
according to the intelligent hotel terminal monitoring method, the monitoring area is subjected to video acquisition through the plurality of cameras with different visual angles arranged at different terminal positions in the monitoring area, the acquired video image is subjected to moving target object detection, and finally the behavior of the moving target object is judged according to the moving route of the moving target object in each monitoring area, so that the intelligent hotel terminal system can timely, accurately and quickly identify abnormal conditions at the terminal positions, and the security of hotel management is improved.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present invention, the drawings used in the description of the embodiments will be briefly introduced below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and it is obvious for those skilled in the art that other drawings can be obtained according to these drawings without inventive labor.
Fig. 1 is a schematic flow chart of the smart hotel terminal monitoring method of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are some embodiments of the present application, but not all embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
Referring to fig. 1, which is a schematic flow chart of the smart hotel terminal monitoring method of the present invention: a plurality of cameras with different viewing angles are arranged at different terminal positions in the various monitored areas of a hotel, video is collected from the monitoring areas by the cameras arranged at the terminals, and a processor detects the collected video files to find the change characteristics of each frame of the video. The specific monitoring method comprises the following steps:
First, a plurality of cameras arranged at terminal positions with different viewing angles in the same monitoring area collect video of the monitoring area from different angles, and a background server detects the collected video files in real time to find the change characteristics of each frame picture of the video.
Specifically, the video file acquired by the camera at the terminal position where the moving target object appears is obtained, and a static background picture in the video file is extracted for modeling, so that the background model can serve as a theoretical standard for comparison with the real-time video image. During comparison, a detection algorithm computes the difference between two frames (the frame that captures the moving target object and the frame from the previous moment, before the moving target object entered the acquisition area), performs thresholding, rapidly captures the pixel points of the image change area (namely the contour of the moving target object and the area inside the contour), and combines and marks the discrete pixel points describing the edge and interior of the change area.
In a preferred embodiment, the method adopts a moving target object detection algorithm based on mixed Gaussian foreground modeling, using the parameter learning mechanism of a Gaussian mixture: Gaussian functions with larger weights describe high-frequency background pixel values, and Gaussian functions with smaller weights describe foreground pixel values. In general, at least three Gaussian functions are used to model an image with a Gaussian mixture foreground model; in the Gaussian distributions of background and foreground, the background of a pixel point is described by at least two Gaussian functions and the foreground by at least one. The preferred embodiment selects a mixture of six Gaussian functions for foreground modeling, so that the moving target object can be distinguished more accurately.
The absolute value of the difference between the current pixel value I_t and the mean μ_{i,t−1} of each background Gaussian distribution is compared with D times the standard deviation σ_{i,t−1} of that distribution, and the judgment formula for foreground pixel points is:

|I_t − μ_{i,t−1}| > D · σ_{i,t−1} (1);

where t represents the current frame, t−1 represents the previous frame, and i represents the current pixel point.

If the absolute value is larger than D times the distribution standard deviation, the pixel point is a foreground pixel point of the moving target object; otherwise it is a background pixel point. As long as the pixel value I_t matches any one of the background Gaussian distributions, I_t is the pixel value of a background pixel point. For the parameter D, this embodiment uses the value 3. A minimal sketch of this per-pixel test is given below.
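The following sketch illustrates the per-pixel test of formula (1), assuming the background model stores, for each pixel, the means and standard deviations of its K background Gaussian components; the array layout and the helper name is_foreground are illustrative assumptions, not taken from the patent.

import numpy as np

D = 3  # multiple of the standard deviation, as chosen in this embodiment

def is_foreground(frame, means, stds):
    # frame: (H, W) gray image; means, stds: (H, W, K) per-pixel parameters
    # of the K background Gaussian components (K = 6 in this embodiment).
    diff = np.abs(frame[..., None] - means)   # |I_t - mu_{i,t-1}| per component
    within = diff <= D * stds                 # within D sigma of a background Gaussian
    # A pixel is background as soon as it matches any background component;
    # otherwise it is a foreground pixel of the moving target object.
    return ~within.any(axis=-1)

For a full adaptive implementation with parameter learning, a library routine such as OpenCV's createBackgroundSubtractorMOG2 provides a comparable mixed-Gaussian background model.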
In a preferred embodiment, for Gaussian distributions where color is present, if the standard deviation σ of the Gaussian distribution is too large or too small, some pixel points will be missed when the judgment continues according to formula (1). To extract the foreground more completely, this embodiment judges according to formula (2) or formula (3) (the original equation images are not reproduced here), where T_1 and T_2 are thresholds.

Formulas (2) and (3) are combined by an OR operation: if the pixel value I_t satisfies either formula (2) or formula (3), the pixel point corresponding to I_t is judged to be a motion foreground pixel point.
Second, in step two, all the motion foreground pixel points marked in the previous step are combined to obtain the approximate contour of the moving target object, and polygon fitting is performed on the contour curve to obtain the precise contour curve of the moving target object, namely the boundary curve.
Specifically, after digitization, the approximate contour C of the moving target object is represented as a sequence of points in the plane: C = {P(i) = (x_i, y_i) | i = 1, 2, …, n}, where n is the number of pixel points on the image boundary curve. Let P(i−1), P(i), and P(i+1) be three adjacent pixel points; the chord height of a pixel point P(i) is defined as the perpendicular distance C(P(i)) from P(i) to the line segment connecting P(i−1) and P(i+1), taking that connecting line as the base. The initial threshold of the chord height is set as T_C.
A weight is assigned to each pixel point P(i) on the approximate contour; the weight is the chord height C(P(i)) defined above, and the contribution of each pixel point to the shape of the boundary curve is analyzed through its chord height. Pixel points with small contribution values are deleted and pixel points with larger influence on the shape of the boundary curve are retained, until the fitting requirement given by the threshold T_C is reached: if the minimum weight is larger than the threshold, the algorithm ends; otherwise the pixel point with the minimum weight is deleted and the step is repeated. The set of pixel points remaining on the boundary curve is P = {P_1, P_2, …, P_m}, where m is the number of pixel points after polygon fitting. A sketch of this procedure is given below.
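The following is a minimal sketch of this chord-height polygon fitting, assuming the approximate contour is given as a closed list of (x, y) points; the function names are illustrative.

import numpy as np

def chord_height(p_prev, p, p_next):
    # Perpendicular distance from P(i) to the segment P(i-1)--P(i+1).
    base = np.asarray(p_next, dtype=float) - np.asarray(p_prev, dtype=float)
    v = np.asarray(p, dtype=float) - np.asarray(p_prev, dtype=float)
    n = np.linalg.norm(base)
    if n == 0.0:
        return float(np.linalg.norm(v))
    return float(abs(base[0] * v[1] - base[1] * v[0]) / n)  # |cross| / base length

def fit_polygon(contour, t_c):
    # Delete the point of minimum chord height until every remaining
    # point's chord height (its weight) exceeds the threshold T_C.
    pts = list(contour)
    while len(pts) > 3:
        heights = [chord_height(pts[i - 1], pts[i], pts[(i + 1) % len(pts)])
                   for i in range(len(pts))]
        i_min = int(np.argmin(heights))
        if heights[i_min] > t_c:
            break                 # fitting requirement reached; algorithm ends
        del pts[i_min]            # drop the least significant point and repeat
    return pts                    # the retained set P = {P_1, ..., P_m}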
And thirdly, taking the boundary curve fitted in the second step as a moving target object template, after the moving target object template is established, continuously updating the moving target object template in real time according to different visual angles of a plurality of cameras, matching and tracking the moving target object in real time, and performing information fusion by using image frame information acquired by the plurality of cameras, thereby achieving the purpose of estimating the motion track of the moving target object.
Specifically, in the same monitoring area, at time k−1, the moving target object template of the current camera is established and the state vector of the moving target object is set as X_{k−1}; at time k, the moving target object moves to the next camera, and its state vector is X_k. The motion state of the moving target object is then calculated according to the following formula:

X_k = A·X_{k−1} + B·U_{k−1} + W_{k−1} (4);

where A is the state transition matrix, B is the control matrix, and U_{k−1} and W_{k−1} are the change in distance and the change in angle between the next camera and the current camera. The moving target object template is updated according to formula (4).

An X-Y coordinate system is established by taking the picture frame of the camera that captures the moving target object at time k−1 or k as a two-dimensional plane. The state vector of the moving target object at time k−1 is the 4-dimensional vector X_{k−1} = (S_{x,k−1}, S_{y,k−1}, V_{x,k−1}, V_{y,k−1})^T, where S_{x,k−1} and S_{y,k−1} are the positions of the moving target object on the X axis and Y axis of the previous monitoring area picture at time k−1, and V_{x,k−1} and V_{y,k−1} are its velocity components along the X-axis and Y-axis directions at that time.
Updating the moving target object template helps the moving target object to be matched and tracked better when it is captured by cameras at different angles and positions. A minimal sketch of the state update of formula (4) follows.
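The sketch below instantiates formula (4) with a constant-velocity model for the state X = (S_x, S_y, V_x, V_y)^T. The concrete matrices A and B and the encoding of the distance and angle changes U_{k−1} and W_{k−1} as 4-vectors are assumptions for illustration; the patent fixes only the form of the update.

import numpy as np

dt = 1.0  # assumed time step between k-1 and k

A = np.array([[1.0, 0.0, dt,  0.0],   # state transition: position advances by velocity
              [0.0, 1.0, 0.0, dt ],
              [0.0, 0.0, 1.0, 0.0],
              [0.0, 0.0, 0.0, 1.0]])

B = np.eye(4)  # control matrix (identity here, for simplicity)

def predict_state(x_prev, u_prev, w_prev):
    # x_prev: state X_{k-1}; u_prev: control input derived from the distance
    # change to the next camera; w_prev: term derived from the angle change.
    return A @ x_prev + B @ u_prev + w_prev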
When the moving target object leaves the current monitoring area, that is, after it has been captured in an adjacent monitoring area, information fusion is performed on the image information captured by the cameras at multiple positions in the monitoring area the object has just left, so as to estimate the motion track of the moving target object.
Specifically, the plurality of camera positions in the same monitoring area form a camera network structure, and each camera can convert the image coordinates in its monitoring picture into three-dimensional coordinates in the world coordinate system according to its calibration information for the detected moving target object. Each camera position can therefore be regarded as a three-dimensional position sensor, and the multi-camera fusion track MI of the target is generated on this basis; a sketch of the coordinate conversion is given below.
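As an illustration of treating each calibrated camera as a position sensor, the sketch below maps an image point onto the ground plane through a per-camera homography H and naively averages the estimates from several cameras. The homography formulation and the averaging step are assumptions; the patent states only that camera calibration information is used for the conversion.

import numpy as np

def image_to_world(h_matrix, u, v):
    # Map pixel (u, v) through the 3x3 ground-plane homography H,
    # then dehomogenize to (X, Y) world coordinates on the plane.
    p = h_matrix @ np.array([u, v, 1.0])
    return p[:2] / p[2]

def fuse_positions(points):
    # Naive fusion: average the world-plane estimates from several cameras.
    return np.mean(np.asarray(points, dtype=float), axis=0)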
Then, the moving direction of the behavior is predicted from the fusion track MI of the target, and a plurality of cameras in the adjacent areas along the moving direction are allocated in advance in preparation for tracking.
Specifically, according to step three, the fused moving track MI of the moving target object is formed in the monitoring area constituted by the camera network structure, so the movement information of the whole monitoring area can be obtained. Because very large gradient outliers are easily produced at the boundary of the monitoring area, in order to accurately predict the moving direction of the behavior, the gradient correlation at the boundary of the monitoring area image is calculated, and the moving direction of the moving target object is predicted by limiting the gradient amplitude and filtering outliers.
In a preferred embodiment, the gradient correlation is calculated according to formula (6) (the original equation image is not reproduced here), where C(u) represents the gradient correlation, g_1(u) and g_2(u) represent the gradient functions of the gray levels of two adjacent image blocks at the boundary of the image, u represents the gray value of an image block, and * represents the complex conjugate. The larger the gradient correlation value, the more similar the two image blocks at the boundary are; a gradient threshold is set, and image blocks whose gradient correlation value is smaller than the gradient threshold are filtered out. An image block here refers to one of a number of small areas, called "image blocks", into which an image window at the edge of the image is divided. An illustrative computation follows.
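Because the original image for formula (6) is not reproduced, the sketch below uses a normalized cross-correlation of gray-level gradients between two adjacent image blocks; this matches the stated ingredients (gradient functions of two adjacent blocks, with a larger value indicating greater similarity), but the exact formula and the threshold value are assumptions.

import numpy as np

def gradient_correlation(block1, block2):
    # Normalized cross-correlation of the gray-level gradients of two blocks.
    g1 = np.gradient(block1.astype(float))   # (d/dy, d/dx) of block 1
    g2 = np.gradient(block2.astype(float))
    num = sum((a * b).sum() for a, b in zip(g1, g2))
    den = np.sqrt(sum((a * a).sum() for a in g1) * sum((b * b).sum() for b in g2))
    return num / den if den > 0 else 0.0

GRADIENT_THRESHOLD = 0.5  # assumed value

def keep_block(block1, block2):
    # Blocks whose correlation falls below the threshold are filtered out.
    return gradient_correlation(block1, block2) >= GRADIENT_THRESHOLD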
Based on the fusion track MI after the image blocks with abnormal gradients have been filtered out, the fusion track MI is divided into a number of short line segments; the more (and hence shorter) the segments, the larger the amount of calculation but the more accurate the result. The moving direction of the moving target object on each short line segment is calculated separately. Let Dx and Dy be the differences between the x coordinates and the y coordinates of the two end points of each short line segment; the calculation formula for the moving direction orientation(x, y) of the moving target object is:
orientation(x,y)=arctan(Dy(x,y)/Dx(x,y)) (5);
orientation(x, y) is the moving direction of the moving target object on a short line segment, and the signs of Dx and Dy are taken into account in the calculation. The moving direction of the moving target object on each short line segment is predicted through orientation(x, y), and the moving directions on all the short line segments are combined to form the overall moving route of the moving target object along the fusion track MI, so that the plurality of cameras in the adjacent monitoring areas along the moving direction are linked to continue monitoring the moving route of the moving target object in multiple dimensions. A sketch of the direction computation is given below.
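A minimal sketch of the direction computation of formula (5); because the signs of Dx and Dy must be considered, atan2 is used here so that the angle covers the full circle rather than only (−π/2, π/2). The representation of the fusion track MI as a list of (x, y) points is an assumption.

import math

def segment_directions(track):
    # track: the fusion track MI as a list of (x, y) points; consecutive
    # point pairs form the short line segments.
    directions = []
    for (x0, y0), (x1, y1) in zip(track, track[1:]):
        dx, dy = x1 - x0, y1 - y0              # Dx, Dy of the segment endpoints
        directions.append(math.atan2(dy, dx))  # orientation(x, y)
    return directions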
In step five, the behavior of the moving target object is finally judged from its track in each monitoring area and compared with the behavior patterns defined in the behavior definition library to determine whether it is normal. If the behavior is manually judged to be abnormal and has not yet been defined, it is saved as a sample into the behavior definition library; the video picture of the abnormal behavior of the moving target object is stored as evidence, and an alarm is simultaneously sent to an administrator in the background.
According to the smart hotel terminal monitoring method, video is collected from the monitoring area by a plurality of cameras with different viewing angles arranged at different terminal positions in the monitoring area, moving target object detection is performed on the collected video images, and the behavior of the moving target object is finally judged from its moving route in each monitoring area, so that the smart hotel terminal system can identify abnormal conditions at terminal positions in a timely, accurate, and rapid manner, improving the security of hotel management.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents, improvements and the like made within the spirit and principle of the present invention should be included in the scope of the present invention.

Claims (6)

1. A smart hotel terminal monitoring method is characterized by comprising the following steps:
the method comprises the following steps that firstly, video collection is carried out on a monitoring area through a plurality of cameras with different visual angles, which are arranged at different terminal positions in the same monitoring area, moving target object detection is carried out on collected video images, and foreground pixel points of the moving target object are extracted;
combining all foreground pixel points of the moving target object to obtain a boundary curve of the moving target object;
combining the foreground pixel points of the moving target object extracted in the step one to obtain the outline of the moving target object, and performing polygon fitting on the outline to obtain a boundary curve; the polygon fitting specifically comprises:
assigning a weight to each pixel point P(i) on the contour, the weight being the chord height C(P(i)) of the pixel point P(i); the pixel points P(i) whose chord height is greater than a threshold T_C are retained, and the resulting point set is P = {P_1, P_2, …, P_m}, where m is the number of pixel points after polygon fitting;
establishing a moving target object template by using the boundary curve, updating the moving target object template according to cameras with different viewing angles and different terminal positions, matching and tracking the moving target object, and performing information fusion by using video image information acquired by a plurality of cameras to generate a fusion track MI of the moving target object;
step four, predicting the moving direction according to the fusion track MI of the moving target object, and allocating a plurality of cameras of adjacent areas in the moving direction to perform tracking preparation;
dividing the fusion track MI into a plurality of short line segments, respectively calculating the moving direction of the moving target object on each short line segment, setting Dx and Dy as the difference value of the x coordinate and the y coordinate of two end points of each short line segment, and calculating the moving direction orientation (x, y) of the moving target object according to the following formula:
orientation(x,y)=arctan(Dy(x,y)/Dx(x,y)) (5);
wherein orientation(x, y) is the moving direction of the moving target object on a short line segment; the moving direction of the moving target object on each short line segment is predicted through orientation(x, y), and the moving directions on all the short line segments are combined to form the overall moving route of the moving target object along the fusion track MI, so that a plurality of cameras in the adjacent monitoring areas along the moving direction are linked to continue monitoring the moving route of the moving target object in multiple dimensions;
calculating the gradient correlation at the boundary of the image in the monitoring area, and accurately predicting the moving direction of the moving target object by limiting the gradient amplitude and filtering abnormal values;

the gradient correlation is calculated according to formula (6) (the original equation image is not reproduced here), wherein C(u) represents the gradient correlation, g_1(u) and g_2(u) respectively represent the gradient functions of the gray levels of two adjacent image blocks at the boundary of the image, u represents the gray value of an image block, and * represents the complex conjugate;
and step five, finally judging the behaviors of the moving target object according to the moving route of the moving target object in each monitoring area, comparing the behaviors with the defined behavior modes of the behavior definition library, determining whether the behaviors are normal behaviors, if the behaviors are abnormal behaviors, storing a video picture of the abnormal behaviors of the moving target object as evidence, and simultaneously sending an alarm to an administrator in a background.
2. The smart hotel terminal monitoring method as claimed in claim 1, wherein in step one, a moving target object detection algorithm based on mixed Gaussian foreground modeling is adopted: the absolute value of the difference between the pixel value I_t of the current pixel point and the mean μ_{i,t−1} of each background Gaussian distribution is compared with D times the standard deviation σ_{i,t−1} of that distribution, and the judgment formula for foreground pixel points is:

|I_t − μ_{i,t−1}| > D · σ_{i,t−1} (1);

wherein t represents the current frame, t−1 represents the previous frame, and i represents the current pixel point;

if the absolute value is larger than D times the distribution standard deviation, the pixel point is a foreground pixel point of the moving target object; otherwise, it is a background pixel point.
3. The smart hotel terminal monitoring method as claimed in claim 1, wherein for Gaussian distributions where color is present, the foreground pixel points are determined according to formula (2) or formula (3) (the original equation images are not reproduced here), wherein T_1 and T_2 are thresholds; if the pixel value I_t of the current pixel point satisfies either formula (2) or formula (3), the current pixel point is judged to be a motion foreground pixel point.
4. The smart hotel terminal monitoring method as claimed in claim 1, wherein in step three, the process of updating the moving target object template is as follows:

in the same monitoring area, at time k−1, the moving target object template of the current camera is established and the state vector of the moving target object is set as X_{k−1}; at time k, the moving target object moves to the next camera, and its state vector is X_k; the motion state of the moving target object is then calculated according to the following formula:

X_k = A·X_{k−1} + B·U_{k−1} + W_{k−1} (4);

wherein A is the state transition matrix, B is the control matrix, and U_{k−1} and W_{k−1} are the change in distance and the change in angle between the next camera and the current camera; the moving target object template is updated according to formula (4).
5. The smart hotel terminal monitoring method as claimed in claim 4, wherein in step three, after the moving target object leaves the monitoring area, in the monitoring area it has just left, the image coordinates in the monitoring pictures captured by the cameras at multiple positions are converted into three-dimensional coordinates in the world coordinate system, so as to obtain the fusion track MI of the moving target object in that monitoring area.
6. The smart hotel terminal monitoring method as claimed in claim 1, wherein in step five, if the behavior is manually determined to be abnormal and has not been defined, the abnormal behavior is stored as a sample in the behavior definition library.
CN202111180117.9A 2021-10-11 2021-10-11 Smart hotel terminal monitoring method Active CN113628251B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111180117.9A CN113628251B (en) 2021-10-11 2021-10-11 Smart hotel terminal monitoring method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111180117.9A CN113628251B (en) 2021-10-11 2021-10-11 Smart hotel terminal monitoring method

Publications (2)

Publication Number Publication Date
CN113628251A CN113628251A (en) 2021-11-09
CN113628251B true CN113628251B (en) 2022-02-01

Family

ID=78390886

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111180117.9A Active CN113628251B (en) 2021-10-11 2021-10-11 Smart hotel terminal monitoring method

Country Status (1)

Country Link
CN (1) CN113628251B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN116887057B (en) * 2023-09-06 2023-11-14 北京立同新元科技有限公司 Intelligent video monitoring system

Citations (16)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101144716A (en) * 2007-10-15 2008-03-19 清华大学 Multiple angle movement target detection, positioning and aligning method
CN102004922A (en) * 2010-12-01 2011-04-06 南京大学 High-resolution remote sensing image plane extraction method based on skeleton characteristic
CN202736000U (en) * 2012-03-01 2013-02-13 桂林电子科技大学 Multipoint touch screen system device based on computer visual technique
CN103116875A (en) * 2013-02-05 2013-05-22 浙江大学 Adaptive bilateral filtering de-noising method for images
CN104020751A (en) * 2014-06-23 2014-09-03 河海大学常州校区 Campus safety monitoring system and method based on Internet of Things
CN104125433A (en) * 2014-07-30 2014-10-29 西安冉科信息技术有限公司 Moving object video surveillance method based on multi-PTZ (pan-tilt-zoom)-camera linkage structure
CN104244802A (en) * 2012-04-23 2014-12-24 奥林巴斯株式会社 Image processing device, image processing method, and image processing program
CN105338248A (en) * 2015-11-20 2016-02-17 成都因纳伟盛科技股份有限公司 Intelligent multi-target active tracking monitoring method and system
CN107273866A (en) * 2017-06-26 2017-10-20 国家电网公司 A kind of human body abnormal behaviour recognition methods based on monitoring system
CN108846335A (en) * 2018-05-31 2018-11-20 武汉市蓝领英才科技有限公司 Wisdom building site district management and intrusion detection method, system based on video image
CN110517293A (en) * 2019-08-29 2019-11-29 京东方科技集团股份有限公司 Method for tracking target, device, system and computer readable storage medium
CN111640104A (en) * 2020-05-29 2020-09-08 研祥智慧物联科技有限公司 Visual detection method for screw assembly
CN111754540A (en) * 2020-06-29 2020-10-09 中国水利水电科学研究院 Slope particle displacement trajectory monitoring real-time tracking method and system
CN112001948A (en) * 2020-07-30 2020-11-27 浙江大华技术股份有限公司 Target tracking processing method and device
CN113326719A (en) * 2020-02-28 2021-08-31 华为技术有限公司 Method, equipment and system for target tracking
CN113362374A (en) * 2021-06-07 2021-09-07 浙江工业大学 High-altitude parabolic detection method and system based on target tracking network

Family Cites Families (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101655357B (en) * 2009-09-11 2011-05-04 南京大学 Method for acquiring phase gradient correlated quality diagram for two-dimensional phase unwrapping
CN103310427B (en) * 2013-06-24 2015-11-18 中国科学院长春光学精密机械与物理研究所 Image super-resolution and quality enhancement method
CN104376564B (en) * 2014-11-24 2018-04-24 西安工程大学 Method based on anisotropic Gaussian directional derivative wave filter extraction image thick edge
CN108038833B (en) * 2017-12-28 2020-10-13 瑞芯微电子股份有限公司 Image self-adaptive sharpening method for gradient correlation detection and storage medium
CN108280838A (en) * 2018-01-31 2018-07-13 桂林电子科技大学 A kind of intermediate plate tooth form defect inspection method based on edge detection
US11138768B2 (en) * 2018-04-06 2021-10-05 Medtronic Navigation, Inc. System and method for artifact reduction in an image
US20200193662A1 (en) * 2018-12-18 2020-06-18 Genvis Pty Ltd Video tracking system and data processing
CN110797034A (en) * 2019-09-23 2020-02-14 重庆特斯联智慧科技股份有限公司 Automatic voice and video recognition intercom system for caring old people and patients
TWI736083B (en) * 2019-12-27 2021-08-11 財團法人工業技術研究院 Method and system for motion prediction


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Multi-UAV trajectory planning using gradient-based sequence minimal optimization;QiaoyangXia 等;《Robotics and Autonomous Systems》;20210331;第1-11页 *
基于多层深度卷积特征的抗遮挡实时跟踪算法;崔洲涓 等;《光学学报》;20190731;第0715002-1至0715002-14页 *

Also Published As

Publication number Publication date
CN113628251A (en) 2021-11-09

Similar Documents

Publication Publication Date Title
CN112101433B (en) Automatic lane-dividing vehicle counting method based on YOLO V4 and DeepSORT
US9323991B2 (en) Method and system for video-based vehicle tracking adaptable to traffic conditions
CN111724439B (en) Visual positioning method and device under dynamic scene
CN110400352B (en) Camera calibration with feature recognition
CN104303193B (en) Target classification based on cluster
CN109598794B (en) Construction method of three-dimensional GIS dynamic model
JP4429298B2 (en) Object number detection device and object number detection method
US9224211B2 (en) Method and system for motion detection in an image
Bedruz et al. Real-time vehicle detection and tracking using a mean-shift based blob analysis and tracking approach
Peng et al. Drone-based vacant parking space detection
CN104378582A (en) Intelligent video analysis system and method based on PTZ video camera cruising
CN105160649A (en) Multi-target tracking method and system based on kernel function unsupervised clustering
CN111199556A (en) Indoor pedestrian detection and tracking method based on camera
CN112528974B (en) Distance measuring method and device, electronic equipment and readable storage medium
CN112381132A (en) Target object tracking method and system based on fusion of multiple cameras
CN113469201A (en) Image acquisition equipment offset detection method, image matching method, system and equipment
CN107045630B (en) RGBD-based pedestrian detection and identity recognition method and system
CN113628251B (en) Smart hotel terminal monitoring method
Ghahremannezhad et al. Automatic road detection in traffic videos
CN110636248B (en) Target tracking method and device
CN116824641A (en) Gesture classification method, device, equipment and computer storage medium
JP4918615B2 (en) Object number detection device and object number detection method
CN116664851A (en) Automatic driving data extraction method based on artificial intelligence
Börcs et al. Dynamic 3D environment perception and reconstruction using a mobile rotating multi-beam Lidar scanner
CN115909219A (en) Scene change detection method and system based on video analysis

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant