CN112560610B - Video monitoring object analysis method, device, equipment and readable storage medium - Google Patents


Info

Publication number
CN112560610B
CN112560610B (application CN202011411338.8A)
Authority
CN
China
Prior art keywords
sub
coordinate
target
coordinates
line segment
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202011411338.8A
Other languages
Chinese (zh)
Other versions
CN112560610A
Inventor
刘恒
谢姗
杨柳
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Jiaoda Big Data Technology Co ltd
Southwest Jiaotong University
Original Assignee
Chengdu Jiaoda Big Data Technology Co ltd
Southwest Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Jiaoda Big Data Technology Co ltd, Southwest Jiaotong University filed Critical Chengdu Jiaoda Big Data Technology Co ltd
Priority to CN202011411338.8A priority Critical patent/CN112560610B/en
Publication of CN112560610A publication Critical patent/CN112560610A/en
Application granted granted Critical
Publication of CN112560610B publication Critical patent/CN112560610B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/10Segmentation; Edge detection
    • G06T7/13Edge detection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/20Analysis of motion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/70Determining position or orientation of objects or cameras
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects

Abstract

The invention relates to the technical field of video content analysis, and in particular to a video monitoring object analysis method, device, equipment, and readable storage medium. The motion state and position of the third sub-object are monitored in real time through a Hough line detection algorithm, so that the state of the object to be analyzed in the video content can be analyzed clearly. The method is particularly suitable for identifying and analyzing the position and motion state of a moving object in video, achieves high identification accuracy, and provides strong support for industrial control.

Description

Video monitoring object analysis method, device, equipment and readable storage medium
Technical Field
The invention relates to the technical field of video content analysis, and in particular to a video monitoring object analysis method, device, equipment, and readable storage medium.
Background
In recent years, with the popularization of video shooting equipment, the demand for identifying and analyzing captured video content has grown, and its applications have broadened. However, there is still no effective method for analyzing the specific state of the content captured in a video, especially when that content is a moving object.
Disclosure of Invention
The invention aims to provide a video monitoring object analysis method, device, equipment, and readable storage medium to solve the above problems.
In order to achieve the above object, the embodiments of the present application provide the following technical solutions:
in one aspect, an embodiment of the present application provides a video monitoring object analysis method, where the method includes: acquiring a first video containing an object to be analyzed; establishing a coordinate system, and acquiring a first coordinate of a first sub-object and a second coordinate of a second sub-object of the object to be analyzed; the first coordinates comprise a first sub-coordinate and a second sub-coordinate, the first sub-coordinate is a coordinate of the top of the first sub-object, and the second sub-coordinate is a coordinate of the bottom of the first sub-object; the second coordinates comprise a third sub-coordinate and a fourth sub-coordinate, the third sub-coordinate is a coordinate of the top of the second sub-object, and the fourth sub-coordinate is a coordinate of the bottom of the second sub-object; identifying a target line segment through a Hough line detection algorithm, wherein the target line segment is a straight line representing a third sub-object of the object to be analyzed, the third sub-object is arranged between the first sub-object and the second sub-object, and the third sub-object can reciprocate up and down; determining the coordinates of the center point of the third sub-object through the target line segment; determining a first position of the third sub-object relative to the first and second sub-objects by the center point coordinates.
Optionally, after the obtaining the first video including the object to be analyzed, the method further includes: and cutting the first video, and cutting off the content except the object to be analyzed.
Optionally, after the obtaining the first coordinate of the first sub-object and the second coordinate of the second sub-object of the object to be analyzed, the method further includes: sequentially performing a first preprocessing step, a second preprocessing step and a third preprocessing step on the first video to obtain a preprocessed second video; the first preprocessing step comprises graying processing of the first video, the second preprocessing step comprises binarization processing of the first video and Gaussian filtering of the first video, and the third preprocessing step comprises edge detection of the first video.
Optionally, after the target line segment is identified by the Hough line detection algorithm, the method further includes: detecting the current second position of the third sub-object by a motion detection algorithm, comparing the second position with the candidate line segments, and excluding candidate segments that differ significantly from the second position.
Optionally, the determining, by the target line segment, center point coordinates of the third sub-object includes: constructing a first reference straight line and a second reference straight line, wherein the first reference straight line is a straight line passing through the first sub-coordinate and the second sub-coordinate, and the second reference straight line is a straight line passing through the third sub-coordinate and the fourth sub-coordinate; judging whether the target line segment intersects with both the first reference straight line and the second reference straight line; if the target line segment does not intersect with both the first reference straight line and the second reference straight line, an extension line is made to one end of the reference straight line which does not intersect with the target line segment until the target line segment intersects with both the first reference straight line and the second reference straight line; respectively finding out coordinates of a first target point, a second target point, a third target point and a fourth target point, wherein the first target point is an intersection point of the target line segment and the first reference straight line, the second target point is an intersection point of the target line segment and the second reference straight line, and the third target point and the fourth target point are two end points of the target line segment respectively; finding out two target points in the middle through the coordinates of the first target point, the coordinates of the second target point, the coordinates of the third target point and the coordinates of the fourth target point; and averaging the coordinates of the two target points in the middle to obtain the coordinate of the center point of the third sub-object.
Optionally, after the coordinates of the center point of the third sub-object are determined by the target line segment, the method further includes: calculating the coordinates of the center point of the third sub-object in each frame; sequentially performing first-order difference operation and second-order difference operation on the central point coordinate according to a preset step length to obtain a first-order difference value and a second-order difference value; judging whether the coordinate of the center point of the third sub-object in the current frame is effective or not according to the variable quantity of the first-order differential value and the variable quantity of the second-order differential value; and if the variation of the first-order difference value exceeds a first preset value or the variation of the second-order difference value exceeds a second preset value, determining that the coordinate of the center point of the third sub-object in the current frame is invalid.
Optionally, after determining the first position of the third sub-object relative to the first sub-object and the second sub-object by the center point coordinate, the method further includes: and sending a control command to a motion control device of the third sub-object according to the first position of the third sub-object.
In a second aspect, an embodiment of the present application provides a video monitoring object analysis system, where the system includes a data acquisition module, a first calculation module, a second calculation module, and a third calculation module; the data acquisition module is used for acquiring a first video containing an object to be analyzed; the first calculation module is used for establishing a coordinate system and acquiring a first coordinate of a first sub-object and a second coordinate of a second sub-object of the object to be analyzed; the first coordinates comprise a first sub-coordinate and a second sub-coordinate, the first sub-coordinate is a coordinate of the top of the first sub-object, and the second sub-coordinate is a coordinate of the bottom of the first sub-object; the second coordinates comprise a third sub-coordinate and a fourth sub-coordinate, the third sub-coordinate is a coordinate of the top of the second sub-object, and the fourth sub-coordinate is a coordinate of the bottom of the second sub-object; the second calculation module is used for identifying a target line segment through a Hough line detection algorithm, wherein the target line segment is a straight line representing a third sub-object of the object to be analyzed, the third sub-object is arranged between the first sub-object and the second sub-object, and the third sub-object can reciprocate up and down; the third calculation module is used for determining the coordinates of the center point of the third sub-object through the target line segment; determining a first position of the third sub-object relative to the first and second sub-objects by the center point coordinates.
In a third aspect, an embodiment of the present application provides a video surveillance object analysis apparatus, which includes a memory and a processor. The memory is used for storing a computer program; the processor is used for realizing the steps of the video monitoring object analysis method when executing the computer program.
In a fourth aspect, the present application provides a readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the steps of the video surveillance object analysis method are implemented.
The invention has the beneficial effects that:
The motion state and position of the third sub-object are monitored in real time through a Hough line detection algorithm, so that the state of the object to be analyzed in the video content can be analyzed clearly. The method is particularly suitable for identifying and analyzing the position and motion state of a moving object in video, achieves high identification accuracy, and provides strong support for industrial control.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by the practice of the embodiments of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims hereof as well as the appended drawings.
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present invention and therefore should not be considered as limiting its scope; those skilled in the art can obtain other related drawings from them without inventive effort.
Fig. 1 is a schematic flow chart of a video monitoring object analysis method according to an embodiment of the present invention;
fig. 2 is a schematic structural diagram of a video monitoring object analysis system according to an embodiment of the present invention;
fig. 3 is a schematic structural diagram of a video monitoring object analysis device according to an embodiment of the present invention;
fig. 4 is a schematic flow chart illustrating filtering of the ordinate of the center point according to the embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
It should be noted that: like reference numbers or letters refer to like items in the following figures, and thus, once an item is defined in one figure, it need not be further defined or explained in subsequent figures. Meanwhile, in the description of the present invention, the terms "first", "second", and the like are used only for distinguishing the description, and are not to be construed as indicating or implying relative importance.
Example 1
As shown in fig. 1, the present embodiment provides a video monitoring object analysis method including steps S1, S2, S3, and S4.
S1, acquiring a first video containing an object to be analyzed;
S2, establishing a coordinate system, and acquiring a first coordinate of a first sub-object and a second coordinate of a second sub-object of the object to be analyzed; the first coordinates comprise a first sub-coordinate and a second sub-coordinate, the first sub-coordinate is a coordinate of the top of the first sub-object, and the second sub-coordinate is a coordinate of the bottom of the first sub-object; the second coordinates comprise a third sub-coordinate and a fourth sub-coordinate, the third sub-coordinate is a coordinate of the top of the second sub-object, and the fourth sub-coordinate is a coordinate of the bottom of the second sub-object;
the first sub-object may be a left frame of the object to be analyzed, the second sub-object may be a right frame of the object to be analyzed, the first sub-object and the second sub-object are parallel to each other, and the first sub-object, the second sub-object and the third sub-object are all rectangles with clear boundaries. Two ends of the third sub-object are respectively in contact with the first sub-object and the second sub-object, the first sub-object and the second sub-object are both vertical to the ground, and the third sub-object is parallel to the ground. The coordinate system is a rectangular coordinate system, and the origin of the coordinate system can be coincided with the first sub-coordinate or arranged above the left side of the first sub-coordinate; or the origin of the coordinate system may coincide with the third sub-coordinate, or may be disposed at the upper right of the third sub-coordinate; in addition, the origin of the coordinate system may also be set at another suitable place.
S3, identifying a target line segment through a Hough line detection algorithm, wherein the target line segment is a straight line representing a third sub-object of the object to be analyzed, the third sub-object is arranged between the first sub-object and the second sub-object, and the third sub-object can reciprocate up and down;
s4, determining the coordinates of the central point of the third sub-object through the target line segment; determining a first position of the third sub-object relative to the first and second sub-objects by the center point coordinates.
Optionally, after the step S1, a step S11 may be further included.
And S11, cutting the first video, and cutting off the content except the object to be analyzed.
The cutting method can be as follows: and performing frame selection on the object to be analyzed through a mouse frame function.
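As a sketch, the cut itself is an array slice over the selected box. The exact frame-selection interface is not named in the description, so the `(x, y, w, h)` box format below (the tuple returned by selectors such as OpenCV's `cv2.selectROI`) is an assumption:

```python
import numpy as np

def crop_to_roi(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Cut off all content except the object to be analyzed.

    `box` is (x, y, w, h) — e.g. the rectangle an interactive mouse
    frame-selection tool such as cv2.selectROI would return (assumed
    interface, not specified by the patent).
    """
    x, y, w, h = box
    return frame[y:y + h, x:x + w]
```

All later processing then operates on the cropped frame only, which also reduces the number of spurious Hough candidates.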
Optionally, after the step S2, a step S21 may be further included.
S21, sequentially performing a first preprocessing step, a second preprocessing step and a third preprocessing step on the first video to obtain a preprocessed second video; the first preprocessing step comprises graying processing of the first video, the second preprocessing step comprises binarization processing of the first video and Gaussian filtering of the first video, and the third preprocessing step comprises edge detection of the first video.
Preprocessing suppresses the noise area within the target area: the RGB image introduces interference into the identification, so the target area is grayed and binarized before line detection.
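The three preprocessing steps can be sketched as follows. This is a numpy-only illustration, not the patented implementation: a production pipeline would more likely call OpenCV's `cvtColor`, `threshold`, `GaussianBlur`, and `Canny`, and the threshold values here are assumed.

```python
import numpy as np

def preprocess_frame(frame: np.ndarray, thresh: int = 128) -> np.ndarray:
    """First, second, and third preprocessing steps as a minimal sketch:
    graying -> binarization + Gaussian filtering -> edge detection."""
    # 1. Graying: standard luminance weights for an RGB frame.
    gray = frame @ np.array([0.299, 0.587, 0.114])
    # 2. Binarization (threshold is an assumed value) ...
    binary = np.where(gray >= thresh, 255.0, 0.0)
    # ... plus Gaussian filtering with a separable 3x3 kernel.
    k = np.array([1.0, 2.0, 1.0])
    kernel = np.outer(k, k) / 16.0
    padded = np.pad(binary, 1, mode="edge")
    h, w = binary.shape
    smooth = sum(kernel[i, j] * padded[i:i + h, j:j + w]
                 for i in range(3) for j in range(3))
    # 3. Edge detection: gradient magnitude thresholded to a 0/255 map
    #    (a stand-in for a proper Canny detector).
    gy, gx = np.gradient(smooth)
    return (np.hypot(gx, gy) > 50).astype(np.uint8) * 255
```

The resulting binary edge map is what the Hough line detection of step S3 consumes.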
Optionally, after the step S3, a step S31 may be further included.
S31, detecting the current second position of the third sub-object through a motion detection algorithm, comparing the second position with the candidate line segments, and excluding candidates that differ significantly from the second position. The Hough line detection algorithm may return many line segments; combined with the motion detection algorithm, the target line segment can be determined quickly, which improves speed.
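For readers unfamiliar with Hough line detection, the following is a minimal vote-accumulation sketch of the standard transform: each edge pixel votes for every (ρ, θ) line passing through it, and the strongest bin wins. In practice a detector such as OpenCV's `HoughLinesP` would be used; nothing below is specific to the patented method.

```python
import numpy as np

def hough_peak_line(edges: np.ndarray, n_theta: int = 180):
    """Return (rho, theta) of the strongest straight line in a binary
    edge map, via a plain Hough accumulator. Minimal sketch only."""
    ys, xs = np.nonzero(edges)
    thetas = np.linspace(0.0, np.pi, n_theta, endpoint=False)
    diag = int(np.ceil(np.hypot(*edges.shape)))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    # Each edge pixel (x, y) votes along rho = x*cos(theta) + y*sin(theta).
    rhos = np.round(xs[:, None] * np.cos(thetas)
                    + ys[:, None] * np.sin(thetas)).astype(int)
    for t in range(n_theta):
        np.add.at(acc[:, t], rhos[:, t] + diag, 1)
    r, t = np.unravel_index(np.argmax(acc), acc.shape)
    return r - diag, thetas[t]
```

For the lower border of the embodiment, θ ≈ π/2 (a near-horizontal line), which is exactly the property step S31's motion gating and the slope check below exploit.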
Optionally, the step S4 may further include step S411, step S412, step S413, step S414, and step S415.
Step S411, constructing a first reference straight line and a second reference straight line, wherein the first reference straight line is a straight line passing through the first sub-coordinate and the second sub-coordinate, and the second reference straight line is a straight line passing through the third sub-coordinate and the fourth sub-coordinate;
the straight line where the first sub-object is located is taken as a first reference straight line, and the straight line where the second sub-object is located is taken as a second reference straight line;
step S412, judging whether the target line segment intersects with both the first reference straight line and the second reference straight line; if the target line segment does not intersect with both the first reference straight line and the second reference straight line, an extension line is made to one end of the reference straight line which does not intersect with the target line segment until the target line segment intersects with both the first reference straight line and the second reference straight line;
the target line segment may be an inclined line segment, and the target line segment may not intersect with both the first reference line and the second reference line, or may intersect with only one of the first reference line and the second reference line, and the target line segment may intersect with both the first reference line and the second reference line by means of an extension line.
Step S413, respectively finding out coordinates of a first target point, a second target point, a third target point and a fourth target point, wherein the first target point is an intersection point of the target line segment and the first reference straight line, the second target point is an intersection point of the target line segment and the second reference straight line, and the third target point and the fourth target point are two end points of the target line segment;
s414, finding out two target points in the middle through the coordinates of the first target point, the coordinates of the second target point, the coordinates of the third target point and the coordinates of the fourth target point;
and S415, averaging the coordinates of the two target points in the middle to obtain the coordinate of the center point of the third sub-object.
The Y coordinates of the middle two target points are typically averaged.
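Steps S411 to S415 can be sketched directly, assuming (as in the embodiment) vertical reference lines and a non-vertical target segment; the function name and interface are illustrative.

```python
def center_point(segment, x_left, x_right):
    """Steps S411-S415: intersect the (extended) target segment with the
    two vertical reference lines x = x_left and x = x_right, sort the four
    target points by x, and average the middle two.

    Assumes the segment is not vertical — the lower border of the
    embodiment is near-horizontal, so this holds there.
    """
    (x3, y3), (x4, y4) = segment
    slope = (y4 - y3) / (x4 - x3)
    p1 = (x_left, y3 + slope * (x_left - x3))    # first target point
    p2 = (x_right, y3 + slope * (x_right - x3))  # second target point
    pts = sorted([p1, p2, (x3, y3), (x4, y4)])   # sort by x-coordinate
    (xa, ya), (xb, yb) = pts[1], pts[2]          # the middle two points
    return ((xa + xb) / 2, (ya + yb) / 2)
```

Using the line through the segment means no explicit "extension line" step is needed: the intersections with the reference lines always exist for a non-vertical segment.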
Optionally, the step S4 may further include a step S421, a step S422, and a step S423.
Step S421, calculating the coordinates of the central point of the third sub-object in each frame;
s422, sequentially performing first-order difference operation and second-order difference operation on the center point coordinate according to a preset step length to obtain a first-order difference value and a second-order difference value;
step 423, judging whether the coordinate of the center point of the third sub-object in the current frame is valid according to the variation of the first-order difference value and the variation of the second-order difference value; and if the variation of the first-order difference value exceeds a first preset value or the variation of the second-order difference value exceeds a second preset value, determining that the coordinate of the center point of the third sub-object in the current frame is invalid.
Optionally, after the step S4, S5 may be further included.
And S5, sending a control command to a motion control device of the third sub-object according to the position of the third sub-object.
When the third sub-object needs to be moved to a specific position in case of emergency, the purpose of controlling the position of the third sub-object can be achieved by sending a control command to the module for controlling the movement of the third sub-object.
In a specific embodiment of this disclosure, consider existing automatic roller shutter doors. Because the current degree of opening cannot be judged against actual demand, such a door may, to some extent, consume more power and resources than necessary. Moreover, when the door fails, the failure cannot be judged remotely in real time, leading to misjudgment: the door may be believed open when it has in fact failed. This not only wastes resources but can also cause greater trouble when an unexpected situation occurs, since the problem cannot be resolved in time.
When the video monitoring object analysis method is applied to monitoring the state of an automatic roller shutter door, the object to be analyzed is the automatic roller shutter door, the first video is a video containing the automatic roller shutter door, the first sub-object is the left door frame, the second sub-object is the right door frame, and the third sub-object is the lower border of the automatic roller shutter door.
After the video containing the automatic roller shutter door is obtained, the image is cut via a mouse frame-selection function to obtain the door-frame portion of the image and the door frame coordinates. A rectangular coordinate system is established with the top of the left frame as the origin: the top of the left frame is (0, 0), the bottom of the left frame is (0, y1), the top of the right frame is (x1, 0), and the bottom of the right frame is (x1, y1).
the lower border is identified through Hough line detection algorithm, the lower border of the roller shutter door is a straight line with the slope close to zero, the coordinates (x2 and y2) of the center point of the straight line are found, and the position of the lower border of the roller shutter door can be known through the numerical values of y2 and y 1.
In obtaining the coordinate value of the center point of the lower border, filtering is required to reach a final stable value. Because the center point's coordinate changes linearly, abnormal values are filtered out and the lower border of the door is approximated by a linear function. The first-order difference of the center point ordinate represents its linear rate of change; this value cannot change suddenly, so the first-order difference should not vary much. The second-order difference represents the rate of change of that rate; since the ordinate changes linearly, the second-order difference stays essentially constant. Difference calculations are therefore performed on the center coordinates between frames, and it is judged whether the differences vary stably. When a difference is judged unstable, a predicted value derived from the stable trend is used instead; because the center point moves at a constant speed, this predicted value is easy to obtain.
During filtering of the lower border's center point ordinate, a statistic of the second-order difference is calculated; this statistic reveals how the second-order difference changes. Since the second-order difference should remain essentially constant, a larger statistic means the ordinate of the lower border's center point has changed sharply in that frame, so that value cannot be used.
When no second-order difference statistic is available yet, the current center point coordinate is taken as the result value. Once the statistic is available, it is compared with a set fixed value: if smaller, the coordinate value in the current range is reliable and can be output as the result; if larger, the current coordinate value is unstable, cannot be used for output, and a prediction based on previous values is used instead.
As shown in fig. 4, the ordinate y2 of the center point is filtered to remove abnormal values. When the number of frames is greater than 2, a first-order difference operation is performed; when greater than 3, a second-order difference operation is performed to track how the first-order difference changes. When the number of frames exceeds a certain parameter (generally at least 7), the statistic of the second-order difference for the current frame is calculated and statistics collection can begin. Since the ordinate of the center point varies linearly, the first-order difference does not vary much and the second-order difference remains essentially unchanged; thus the larger the change in the statistic, the more sharply y2 has changed at that moment, and it cannot be used. When the statistic is 0, y2 is stable and can be output as a result value. When it is not 0, it is compared with the fixed value: if smaller, the coordinate value in the current range can still be output, the stable change value is recorded, and the predicted value is corrected. When the statistic is unstable, the current result cannot be used as output; a prediction from previous values is made and output as the result value.
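The filtering flow of fig. 4 can be sketched as a small stateful filter. The second-order statistic is modelled here simply as the magnitude of the latest second-order difference, and the threshold and warm-up length (7 frames, as suggested above) are illustrative assumptions.

```python
class OrdinateFilter:
    """Fig. 4 sketch: per-frame filtering of the center point ordinate y2.

    Raw values pass through during warm-up; afterwards, frames whose
    second-order statistic exceeds the fixed threshold are replaced by a
    linear prediction from the last stable rate of change."""

    def __init__(self, threshold=1.0, warmup=7):
        self.history = []
        self.threshold = threshold  # the "set fixed value" (assumed)
        self.warmup = warmup        # frames before statistics start
        self.rate = 0.0             # last stable first-order difference
        self.last_out = None

    def update(self, y2):
        self.history.append(float(y2))
        h = self.history
        if len(h) < self.warmup:
            self.last_out = h[-1]   # too few frames: output raw value
            return self.last_out
        d1 = [b - a for a, b in zip(h, h[1:])]    # first-order difference
        d2 = [b - a for a, b in zip(d1, d1[1:])]  # second-order difference
        stat = abs(d2[-1])          # illustrative second-order statistic
        if stat <= self.threshold:
            self.rate = d1[-1]      # record the stable change value
            self.last_out = h[-1]
        else:
            # unstable frame: predict from the previous output at the
            # stable (constant-speed) rate, and correct the history so
            # later differences are not poisoned by the outlier
            self.last_out = self.last_out + self.rate
            h[-1] = self.last_out
        return self.last_out
```

Feeding the filter one y2 per frame yields a sequence that tracks steady motion exactly and bridges outlier frames with the constant-speed prediction.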
The prior art mainly provides remote control of opening and closing the roller shutter door, and an automatic roller shutter door is basically unattended once powered on, so a failure cannot be noticed immediately: the specific open or closed state cannot be monitored in real time, and faults cannot be judged promptly. This embodiment can remotely detect the open/closed state of the roller shutter door in real time, so that when an unexpected situation occurs, the condition at the entrance is known. It cooperates better with equipment for remotely controlling the door, saves the power used to open the door according to demand, and can regulate the rising or falling speed when the door body approaches its upper or lower limit, which protects the motor and door body and extends the door's service life. It improves the safety and convenience of using the roller shutter door in daily life, promotes smart homes, and can play an important role in fire protection, burglary prevention, and similar areas.
Example 2
As shown in fig. 2, the present embodiment provides a video monitoring object analysis system, which includes a data acquisition module 701, a first calculation module 702, a second calculation module 703 and a third calculation module 704.
A data obtaining module 701, configured to obtain a first video including an object to be analyzed;
a first calculating module 702, configured to establish a coordinate system, and obtain a first coordinate of a first sub-object and a second coordinate of a second sub-object of the object to be analyzed; the first coordinates comprise a first sub-coordinate and a second sub-coordinate, the first sub-coordinate is a coordinate of the top of the first sub-object, and the second sub-coordinate is a coordinate of the bottom of the first sub-object; the second coordinates comprise a third sub-coordinate and a fourth sub-coordinate, the third sub-coordinate is a coordinate of the top of the second sub-object, and the fourth sub-coordinate is a coordinate of the bottom of the second sub-object;
a second calculating module 703, configured to identify a target line segment through a hough line detection algorithm, where the target line segment is a straight line representing a third sub-object of the object to be analyzed, the third sub-object is disposed between the first sub-object and the second sub-object, and the third sub-object may reciprocate up and down;
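A minimal version of the Hough line detection this module relies on can be written directly over the voting accumulator; a production system would more likely call an optimized routine such as OpenCV's `HoughLinesP`, so the implementation below is only an illustrative sketch:

```python
import numpy as np

def hough_strongest_line(edges, n_theta=180):
    """Return (rho, theta) of the strongest straight line in a binary
    edge image, via a plain Hough accumulator. rho is in pixels, theta
    in radians; a near-horizontal lower door frame gives theta near pi/2.
    """
    h, w = edges.shape
    diag = int(np.ceil(np.hypot(h, w)))            # max possible |rho|
    thetas = np.deg2rad(np.arange(n_theta))
    acc = np.zeros((2 * diag, n_theta), dtype=np.int32)
    ys, xs = np.nonzero(edges)                     # edge-pixel coordinates
    for t_idx, theta in enumerate(thetas):
        # each edge pixel votes for the line rho = x cos(theta) + y sin(theta)
        rhos = np.round(xs * np.cos(theta) + ys * np.sin(theta)).astype(int)
        np.add.at(acc, (rhos + diag, t_idx), 1)
    rho_idx, t_idx = np.unravel_index(acc.argmax(), acc.shape)
    return rho_idx - diag, thetas[t_idx]
```

A horizontal edge at row y accumulates all of its votes in the single bin (rho = y, theta = 90 degrees), which is why the lower frame of the door stands out as the strongest line.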
a third calculating module 704, configured to determine coordinates of a center point of the third sub-object through the target line segment; determining the position of the third sub-object relative to the first sub-object and the second sub-object by the center point coordinates.
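The center-point computation this module performs (detailed in claim 1: extend the segment to the two reference lines, collect the four candidate points, and average the two middle ones) can be sketched as follows, under the simplifying assumption that the reference lines are vertical at the door-frame abscissas; all names are illustrative:

```python
def center_point(seg, left_x, right_x):
    """Center point of the detected segment relative to two vertical
    reference lines at x = left_x and x = right_x (the door frames).

    seg -- ((x1, y1), (x2, y2)), the Hough-detected target line segment;
           assumed non-vertical, which holds for the door's lower frame.
    """
    (x1, y1), (x2, y2) = seg
    slope = (y2 - y1) / (x2 - x1)
    # intersections with the reference lines (extensions of the segment
    # when its endpoints fall short of the frames)
    p_left = (left_x, y1 + slope * (left_x - x1))
    p_right = (right_x, y1 + slope * (right_x - x1))
    pts = sorted([p_left, p_right, (x1, y1), (x2, y2)])   # sort by abscissa
    mid1, mid2 = pts[1], pts[2]                           # the two middle points
    return ((mid1[0] + mid2[0]) / 2, (mid1[1] + mid2[1]) / 2)
```

Sorting by abscissa keeps the computation the same whether the detected segment undershoots or overshoots the frames: the two middle points always bound the portion of the lower frame between the door posts.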
In a specific embodiment of the present disclosure, the system may further include a clipping module 705 configured to crop the first video, cutting away the content other than the object to be analyzed.
In a specific embodiment of the present disclosure, the system may further include a preprocessing module 706, configured to sequentially perform a first preprocessing step, a second preprocessing step, and a third preprocessing step on the first video to obtain a preprocessed second video; the first preprocessing step comprises graying processing of the first video, the second preprocessing step comprises binarization processing of the first video and Gaussian filtering of the first video, and the third preprocessing step comprises edge detection of the first video.
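The three preprocessing steps can be sketched end to end. A real pipeline would typically call OpenCV's `cvtColor`, `threshold`, `GaussianBlur`, and `Canny`; the plain-NumPy version below, with illustrative parameter values, is only a sketch:

```python
import numpy as np

def preprocess(frame_bgr, thresh=128, sigma=1.0):
    """First step: graying; second step: binarization plus Gaussian
    filtering; third step: edge detection (here a simple gradient
    magnitude standing in for Canny). Returns an edge-strength image.
    """
    # step 1: graying (ITU-R BT.601 weights, BGR channel order assumed)
    gray = (0.114 * frame_bgr[..., 0] + 0.587 * frame_bgr[..., 1]
            + 0.299 * frame_bgr[..., 2])
    # step 2a: binarization with a fixed threshold
    binary = (gray >= thresh).astype(np.float64)
    # step 2b: Gaussian filtering with a separable 1-D kernel
    radius = int(3 * sigma)
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2 * sigma**2))
    k /= k.sum()
    blurred = np.apply_along_axis(lambda r: np.convolve(r, k, 'same'), 1, binary)
    blurred = np.apply_along_axis(lambda c: np.convolve(c, k, 'same'), 0, blurred)
    # step 3: edge detection via gradient magnitude
    gy, gx = np.gradient(blurred)
    return np.hypot(gx, gy)
```

The output feeds the Hough line detection of the second calculation module: the horizontal lower frame of the door shows up as a row of strong edge responses.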
In an embodiment of the present disclosure, the system may further include a fourth calculating module 707, configured to detect a current second position of the third sub-object through a motion detection algorithm, compare the second position with the target line segment, and exclude a target curve with a larger difference from the second position.
In a specific embodiment of the present disclosure, the third calculation module 704 may be further configured to perform step S411, step S412, step S413, step S414, and step S415 in the method embodiment.
In a specific embodiment of the present disclosure, the third calculation module 704 may be further configured to perform step S421, step S422, step S423, step S424, and step S425 in the method embodiment.
In a specific embodiment of the present disclosure, the system may further include a fifth calculation module 708, configured to send a control command to the motion control apparatus of the third sub-object according to the first position of the third sub-object.
In a specific embodiment of the present disclosure, the system may further include a control module 709 for controlling the movement of the third sub-object according to the control command sent by the fifth calculation module 708.
It should be noted that, regarding the apparatus in the above embodiment, the specific manner in which each module performs the operation has been described in detail in the embodiment related to the method, and will not be elaborated herein.
Example 3
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a video monitoring object analysis device; the video monitoring object analysis device described below and the video monitoring object analysis method described above may be referred to in correspondence.
Fig. 3 is a block diagram illustrating a video surveillance object analyzing apparatus 800 according to an exemplary embodiment. As shown in fig. 3, the electronic device 800 may include: a processor 801 and a memory 802. The electronic device 800 may also include one or more of a multimedia component 803, an input/output (I/O) interface 804, and a communication component 805.
The processor 801 is configured to control the overall operation of the electronic device 800, so as to complete all or part of the steps in the video monitoring object analysis method. The memory 802 is used to store various types of data to support operation of the electronic device 800, such as instructions for any application or method operating on the electronic device 800 and application-related data, such as contact data, transmitted and received messages, pictures, audio, video, and so forth. The memory 802 may be implemented by any type of volatile or non-volatile memory device or combination thereof, such as Static Random Access Memory (SRAM), Electrically Erasable Programmable Read-Only Memory (EEPROM), Erasable Programmable Read-Only Memory (EPROM), Programmable Read-Only Memory (PROM), Read-Only Memory (ROM), magnetic memory, flash memory, a magnetic disk or an optical disk. The multimedia component 803 may include a screen and an audio component. The screen may be, for example, a touch screen, and the audio component is used for outputting and/or inputting audio signals. For example, the audio component may include a microphone for receiving external audio signals. The received audio signal may further be stored in the memory 802 or transmitted through the communication component 805. The audio component also includes at least one speaker for outputting audio signals. The I/O interface 804 provides an interface between the processor 801 and other interface modules, such as a keyboard, a mouse, or buttons. These buttons may be virtual buttons or physical buttons. The communication component 805 is used for wired or wireless communication between the electronic device 800 and other devices. The wireless communication may be, for example, Wi-Fi, Bluetooth, Near Field Communication (NFC), 2G, 3G, or 4G, or a combination of one or more of them, so the corresponding communication component 805 may include a Wi-Fi module, a Bluetooth module, and an NFC module.
In an exemplary embodiment, the electronic Device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components for performing the above-described video monitoring object analysis method.
In another exemplary embodiment, there is also provided a computer readable storage medium including program instructions, which when executed by a processor, implement the steps of the video surveillance object analysis method described above. For example, the computer readable storage medium may be the memory 802 described above that includes program instructions executable by the processor 801 of the electronic device 800 to perform the video surveillance object analysis method described above.
Example 4
Corresponding to the above method embodiment, the embodiment of the present disclosure further provides a readable storage medium, and a readable storage medium described below and a video monitoring object analysis method described above may be referred to in correspondence.
A readable storage medium, on which a computer program is stored, which, when being executed by a processor, implements the steps of the video surveillance object analysis method of the above-mentioned method embodiment.
The readable storage medium may be a USB disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, an optical disk, or various other readable storage media capable of storing program code.
The above description is only a preferred embodiment of the present invention and is not intended to limit the present invention, and various modifications and changes may be made by those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (9)

1. A video monitoring object analysis method is characterized by comprising the following steps:
acquiring a first video containing an object to be analyzed;
establishing a coordinate system, and acquiring a first coordinate of a first sub-object and a second coordinate of a second sub-object of the object to be analyzed; the first coordinates comprise a first sub-coordinate and a second sub-coordinate, the first sub-coordinate is a coordinate of the top of the first sub-object, and the second sub-coordinate is a coordinate of the bottom of the first sub-object; the second coordinates comprise a third sub-coordinate and a fourth sub-coordinate, the third sub-coordinate is a coordinate of the top of the second sub-object, and the fourth sub-coordinate is a coordinate of the bottom of the second sub-object;
identifying a target line segment through a Hough line detection algorithm, wherein the target line segment is a line representing a third sub-object of the object to be analyzed;
determining the coordinates of the center point of the third sub-object through the target line segment;
determining a first position of the third sub-object relative to the first and second sub-objects by the center point coordinates;
wherein the determining the center point coordinates of the third sub-object by the target line segment includes:
constructing a first reference straight line and a second reference straight line, wherein the first reference straight line is a straight line passing through the first sub-coordinate and the second sub-coordinate, and the second reference straight line is a straight line passing through the third sub-coordinate and the fourth sub-coordinate;
judging whether the target line segment intersects with both the first reference straight line and the second reference straight line; if the target line segment does not intersect with both the first reference line and the second reference line, an extension line is made to one end of the reference line which does not intersect with the target line segment until the target line segment intersects with both the first reference line and the second reference line;
respectively finding out coordinates of a first target point, a second target point, a third target point and a fourth target point, wherein the first target point is an intersection point of the target line segment and the first reference straight line, the second target point is an intersection point of the target line segment and the second reference straight line, and the third target point and the fourth target point are two end points of the target line segment respectively;
finding out two target points in the middle through the coordinates of the first target point, the coordinates of the second target point, the coordinates of the third target point and the coordinates of the fourth target point;
averaging the coordinates of the two target points in the middle to obtain the coordinate of the center point of the third sub-object;
the first sub-object is a left door frame of the automatic rolling door, the second sub-object is a right door frame of the automatic rolling door, and the third sub-object is a lower frame of the automatic rolling door.
2. The video surveillance object analysis method according to claim 1, wherein after obtaining the first video containing the object to be analyzed, further comprising:
and cutting the first video, and cutting off the content except the object to be analyzed.
3. The video surveillance object analysis method according to claim 1, wherein after obtaining the first coordinates of the first sub-object and the second coordinates of the second sub-object of the object to be analyzed, further comprising:
sequentially performing a first preprocessing step, a second preprocessing step and a third preprocessing step on the first video to obtain a preprocessed second video; the first preprocessing step comprises graying processing of the first video, the second preprocessing step comprises binarization processing of the first video and Gaussian filtering of the first video, and the third preprocessing step comprises edge detection of the first video.
4. The method for analyzing video surveillance objects according to claim 1, wherein after identifying the target line segment by the hough line detection algorithm, the method further comprises:
and detecting the current second position of the third sub-object by a motion detection algorithm, comparing the second position with the target line segment, and excluding a target curve with a larger difference from the second position.
5. The method for analyzing video surveillance objects according to claim 1, wherein after determining the coordinates of the center point of the third sub-object through the target line segment, the method further comprises:
calculating the coordinates of the center point of the third sub-object in each frame;
sequentially performing first-order difference operation and second-order difference operation on the central point coordinate according to a preset step length to obtain a first-order difference value and a second-order difference value;
judging whether the coordinate of the center point of the third sub-object in the current frame is effective or not according to the variable quantity of the first-order differential value and the variable quantity of the second-order differential value; and if the variation of the first-order difference value exceeds a first preset value or the variation of the second-order difference value exceeds a second preset value, determining that the coordinate of the center point of the third sub-object in the current frame is invalid.
6. The video surveillance object analysis method of claim 1, wherein determining the first position of the third sub-object relative to the first and second sub-objects from the center point coordinates further comprises:
and sending a control command to a motion control device of the third sub-object according to the first position of the third sub-object.
7. A video surveillance object analysis system, comprising:
the data acquisition module is used for acquiring a first video containing an object to be analyzed;
the first calculation module is used for establishing a coordinate system and acquiring a first coordinate of a first sub-object and a second coordinate of a second sub-object of the object to be analyzed; the first coordinates comprise a first sub-coordinate and a second sub-coordinate, the first sub-coordinate is a coordinate of the top of the first sub-object, and the second sub-coordinate is a coordinate of the bottom of the first sub-object; the second coordinates comprise a third sub-coordinate and a fourth sub-coordinate, the third sub-coordinate is a coordinate of the top of the second sub-object, and the fourth sub-coordinate is a coordinate of the bottom of the second sub-object;
the second calculation module is used for identifying a target line segment through a Hough line detection algorithm, wherein the target line segment is a straight line representing a third sub-object of the object to be analyzed, the third sub-object is arranged between the first sub-object and the second sub-object, and the third sub-object can reciprocate up and down;
the third calculation module is used for determining the coordinates of the center point of the third sub-object through the target line segment; determining a first position of the third sub-object relative to the first and second sub-objects by the center point coordinates;
wherein the third computing module is further configured to perform the following actions:
constructing a first reference straight line and a second reference straight line, wherein the first reference straight line is a straight line passing through the first sub-coordinate and the second sub-coordinate, and the second reference straight line is a straight line passing through the third sub-coordinate and the fourth sub-coordinate;
judging whether the target line segment intersects with both the first reference straight line and the second reference straight line; if the target line segment does not intersect with both the first reference line and the second reference line, an extension line is made to one end of the reference line which does not intersect with the target line segment until the target line segment intersects with both the first reference line and the second reference line;
respectively finding out coordinates of a first target point, a second target point, a third target point and a fourth target point, wherein the first target point is an intersection point of the target line segment and the first reference straight line, the second target point is an intersection point of the target line segment and the second reference straight line, and the third target point and the fourth target point are two end points of the target line segment respectively;
finding out two target points in the middle through the coordinates of the first target point, the coordinates of the second target point, the coordinates of the third target point and the coordinates of the fourth target point;
averaging the coordinates of the two target points in the middle to obtain the coordinate of the center point of the third sub-object;
the first sub-object is a left door frame of the automatic rolling door, the second sub-object is a right door frame of the automatic rolling door, and the third sub-object is a lower frame of the automatic rolling door.
8. A video surveillance object analyzing apparatus, comprising:
a memory for storing a computer program;
a processor for implementing the steps of the video surveillance object analysis method according to any of claims 1 to 6 when executing the computer program.
9. A readable storage medium, characterized by: the readable storage medium has stored thereon a computer program which, when being executed by a processor, carries out the steps of the video surveillance object analysis method according to any one of claims 1 to 6.
CN202011411338.8A 2020-12-03 2020-12-03 Video monitoring object analysis method, device, equipment and readable storage medium Active CN112560610B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011411338.8A CN112560610B (en) 2020-12-03 2020-12-03 Video monitoring object analysis method, device, equipment and readable storage medium


Publications (2)

Publication Number Publication Date
CN112560610A CN112560610A (en) 2021-03-26
CN112560610B true CN112560610B (en) 2021-09-28

Family

ID=75048724

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011411338.8A Active CN112560610B (en) 2020-12-03 2020-12-03 Video monitoring object analysis method, device, equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN112560610B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101930594A (en) * 2010-04-14 2010-12-29 山东山大鸥玛软件有限公司 Rapid correction method for scanning document image
CN104270612A (en) * 2014-10-14 2015-01-07 博慧电子科技(漳州)有限公司 Controller WIFI serial port control structure of electric roller shutter door
CN104778699A (en) * 2015-04-15 2015-07-15 西南交通大学 Adaptive object feature tracking method
CN105519102A (en) * 2015-03-26 2016-04-20 北京旷视科技有限公司 Video monitoring method, video monitoring system and computer program product
CN105957114A (en) * 2016-05-09 2016-09-21 北京小米移动软件有限公司 Method and device for detecting polygon in image
CN107264570A (en) * 2017-07-25 2017-10-20 西南交通大学 steel rail light band distribution detecting device and method
CN110409991A (en) * 2019-07-22 2019-11-05 杭州市地铁集团有限责任公司运营分公司 A kind of visualization collection control rolling door system
WO2020090250A1 (en) * 2018-10-31 2020-05-07 ソニー株式会社 Image processing apparatus, image processing method and program
CN111462374A (en) * 2019-01-18 2020-07-28 罗伯特·博世有限公司 Access control system including occupancy estimation
CN112001906A (en) * 2020-08-24 2020-11-27 江苏徐工信息技术股份有限公司 Steel plate weld image detection method combining non-maximum value inhibition

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN108222749B (en) * 2017-12-29 2020-10-02 安徽慧视金瞳科技有限公司 Intelligent automatic door control method based on image analysis
CN111325977A (en) * 2020-02-25 2020-06-23 创捷运维智能科技有限责任公司 Tunnel intelligence edge calculation management and control system


Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
《Real-Time Straight-Line Detection for XGA-Size Videos by Hough Transform with Parallelized Voting Procedures》;Jungang Guan等;《Sensors (Basel)》;20170228;第2卷(第17期);第1-14页 *
《复杂环境下结构光中心线提取算法》;杨镇豪 等;《计算机技术与发展》;20180930;第28卷(第09期);第7-10页 *
《轨道交通车站出入口控制系统分析与集成》;沈惠鑫 等;《电子测试》;20171031(第10期);第28-30页 *
《防火卷帘门的分类及其正确使用》;李家乾;《化工管理》;20130430(第04期);第40页 *

Also Published As

Publication number Publication date
CN112560610A (en) 2021-03-26

Similar Documents

Publication Publication Date Title
CN107679471B (en) Indoor personnel air post detection method based on video monitoring platform
CN110996060A (en) Industrial automation intelligent linkage system and method
CN101572803A (en) Customizable automatic tracking system based on video monitoring
KR102107334B1 (en) Method, device and system for determining whether pixel positions in an image frame belong to a background or a foreground
CN109867186B (en) Elevator trapping detection method and system based on intelligent video analysis technology
CN103870824A (en) Method and device for capturing face in face detecting and tracking process
CN114758300A (en) Method, device and equipment for judging malicious shielding of scene and readable storage medium
CN111223263A (en) Full-automatic comprehensive fire early warning response system
CN110569770A (en) Human body intrusion behavior recognition method and device, storage medium and electronic equipment
CN112560610B (en) Video monitoring object analysis method, device, equipment and readable storage medium
CN115691034A (en) Intelligent household abnormal condition warning method, system and storage medium
CN109488383B (en) Intelligent video analysis system and application method thereof
CN111462188A (en) Camera movement detection method and system
CN113158926B (en) High-altitude parabolic track distinguishing method, device, equipment and storage medium
CN111160187A (en) Method, device and system for detecting left-behind object
CN116013009A (en) Intelligent door and window alarm method and system based on Internet of things and readable storage medium
CN111814734B (en) Method for identifying state of disconnecting link
CN113438286B (en) Information pushing method and device, electronic equipment and storage medium
CN114663479A (en) Intelligent monitoring and early warning method and system based on computer vision
WO2020139071A1 (en) System and method for detecting aggressive behaviour activity
CN104168473B (en) A kind of detection method of shear wave interference of video image and device
JPH07129778A (en) Image monitoring device
US20220365673A1 (en) Non-intrusive shared processing method and system for pop-up window
CN117371996B (en) Electric power communication analysis method based on big data
CN116156149B (en) Detection method and device for detecting camera movement

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant