CN116612594A - Intelligent monitoring and outbound system and method based on big data


Info

Publication number
CN116612594A
Authority
CN
China
Prior art keywords
personnel
track
video stream
sample
behaviors
Legal status
Pending
Application number
CN202310529418.0A
Other languages
Chinese (zh)
Inventor
黄郭成
李海林
Current Assignee
Shenzhen Yunzhiyin Technology Co ltd
Original Assignee
Shenzhen Yunzhiyin Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Yunzhiyin Technology Co ltd
Priority to CN202310529418.0A
Publication of CN116612594A

Classifications

    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B13/00 - Burglar, theft or intruder alarms
    • G08B13/18 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength
    • G08B13/189 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems
    • G08B13/194 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems
    • G08B13/196 - Actuation by interference with heat, light, or radiation of shorter wavelength; Actuation by intruding sources of heat, light, or radiation of shorter wavelength using passive radiation detection systems using image scanning and comparing systems using television cameras
    • G08B13/19602 - Image analysis to detect motion of the intruder, e.g. by frame subtraction
    • G08B13/19608 - Tracking movement of a target, e.g. by detecting an object predefined as a target, using target direction and or velocity to predict its new position
    • G - PHYSICS
    • G08 - SIGNALLING
    • G08B - SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B3/00 - Audible signalling systems; Audible personal calling systems
    • G08B3/10 - Audible signalling systems; Audible personal calling systems using electric transmission; using electromagnetic transmission
    • G08B3/1008 - Personal calling arrangements or devices, i.e. paging systems
    • Y - GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 - TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02D - CLIMATE CHANGE MITIGATION TECHNOLOGIES IN INFORMATION AND COMMUNICATION TECHNOLOGIES [ICT], I.E. INFORMATION AND COMMUNICATION TECHNOLOGIES AIMING AT THE REDUCTION OF THEIR OWN ENERGY USE
    • Y02D10/00 - Energy efficient computing, e.g. low power processors, power management or thermal management

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Alarm Systems (AREA)

Abstract

The invention provides an intelligent monitoring and outbound call system and method based on big data. The system comprises an acquisition module, a parsing module, an analysis module and an outbound module. The acquisition module is used for acquiring a three-dimensional video stream of a target area; the parsing module is used for parsing the three-dimensional video stream to obtain the personnel track corresponding to each person in the three-dimensional video stream and transmitting each personnel track to the analysis module; the analysis module is used for comparing the personnel tracks with sample tracks in a database and determining the personnel behaviors of each person in different time periods; and the outbound module is used for controlling the calling device closest to the corresponding person to issue a call reminder when the person's behavior is an illegal behavior. By analyzing the video stream, the system judges whether people in the video exhibit illegal behaviors and, where necessary, uses outbound-call technology to remind and advise the offending people in the video, thereby reducing the occurrence of accidents.

Description

Intelligent monitoring and outbound system and method based on big data
Technical Field
The invention relates to the field of big data, in particular to an intelligent monitoring and outbound system and method based on big data.
Background
With the development of society, monitoring equipment has been deployed across streets, enterprises, companies and the like. The presence of monitoring can deter adverse social phenomena and crimes and can provide evidence when adverse events occur. However, existing intelligent monitoring only records video: when a man-made accident occurs in real life, people can only analyze the video after the accident has happened and then apportion responsibility accordingly, so although the cause of the accident can be determined, the accident itself cannot be prevented.
Therefore, the invention provides an intelligent monitoring and outbound system and method based on big data.
Disclosure of Invention
The intelligent monitoring and outbound system based on big data judges, through analysis of the video stream, whether people in the video exhibit illegal behaviors, and then, when necessary, reminds and advises the offending people in the video by using outbound-call technology, thereby reducing accidents.
The invention provides an intelligent monitoring and outbound system based on big data, which comprises:
the acquisition module is used for acquiring a three-dimensional video stream of the target area;
the parsing module is used for parsing the three-dimensional video stream to obtain a personnel track corresponding to each person in the three-dimensional video stream, and transmitting the personnel track corresponding to each person to the analysis module;
the analysis module is used for comparing the personnel track with the sample track in the database and determining the personnel behaviors of each personnel in different time periods;
and the outbound module is used for controlling a calling device closest to the corresponding person to carry out calling reminding when the person behavior belongs to the illegal behavior.
In one embodiment of the present invention, in one possible implementation,
the number of the calling devices is multiple, each calling device is connected with the outbound module, and the calling devices are respectively arranged at different area positions of the target area.
In one embodiment of the present invention, in one possible implementation,
the acquisition module comprises:
the first shooting unit is used for shooting a first angle video stream of the target area;
the second shooting unit is used for shooting a second angle video stream of the target area;
and the video fusion unit is used for fusing the first angle video stream and the second angle video stream to obtain the three-dimensional video stream of the target area.
In one embodiment of the present invention, in one possible implementation,
the parsing module comprises:
the video framing unit is used for dividing the three-dimensional video stream into a plurality of frames of three-dimensional images to obtain an image sequence;
the image analysis unit is used for traversing each frame of three-dimensional image by utilizing preset character outlines respectively to obtain a plurality of personnel outlines on each frame of three-dimensional image, acquiring outline features corresponding to each personnel outline, respectively establishing feature labels for each personnel outline to obtain feature sets corresponding to each three-dimensional image, and sequencing the feature sets according to the sequence of the three-dimensional images in the image sequence to obtain a feature set sequence;
the personnel analysis unit is used for inputting the feature set sequence into the cyclic extraction model, obtaining similar feature labels in different feature sets and obtaining a personnel label set;
and the track determining unit is used for searching the corresponding personnel position change in the three-dimensional video stream according to the personnel tag set and establishing a personnel track.
In one embodiment of the present invention, in one possible implementation,
the analysis module comprises:
the track classification unit is used for respectively acquiring the advancing routes corresponding to each personnel track, classifying the personnel tracks according to the end points of the advancing routes and obtaining a plurality of track classes;
the sample comparison unit is used for searching all sample tracks corresponding to each track class in the database respectively, comparing each personnel track in the same track class with the corresponding sample track, and obtaining the track similarity between each personnel track and the sample track according to the comparison result;
and the behavior confirmation unit is used for carrying out sample mutual adaptation training on a preset number of sample behaviors by combining the sample behaviors corresponding to each sample track according to the similarity between the personnel track and the different sample tracks, so as to obtain the personnel behaviors of the corresponding personnel in different time periods.
In one embodiment of the present invention, in one possible implementation,
the outbound module comprises:
the behavior judging unit is used for analyzing the behaviors of the personnel to obtain behavior key points of corresponding personnel, and judging whether the behaviors of the personnel belong to illegal behaviors or not according to the behavior key points;
the behavior deepening unit is used for acquiring a violation time period corresponding to the violation when the personnel behavior belongs to the violation, acquiring a violation three-dimensional video corresponding to the violation time period, acquiring a violation image of a corresponding personnel in the violation three-dimensional video, and analyzing the violation image to obtain the violation type of the corresponding personnel;
and the call reminding unit is used for matching corresponding reminding voice according to the violation type and controlling a calling device closest to the corresponding person to carry out call reminding.
In one embodiment of the present invention, in one possible implementation,
the personnel analysis unit includes:
the first analysis subunit is used for inputting the feature set sequence into a cyclic extraction model and respectively counting the first quantity corresponding to the feature labels in each feature set;
the second analysis subunit is used for acquiring a first feature set in the feature set sequence, extracting a first cyclic feature tag from the first feature set, traversing a second feature set by using the first cyclic feature tag, acquiring a first similar tag with the highest matching degree with the first cyclic feature tag from the second feature set, performing cyclic matching to obtain a plurality of similar tags, and establishing a personnel tag set;
and the third analysis subunit is used for marking the matched labels in each feature set respectively to obtain the second number of the matched labels in each feature set, judging whether the first number and the second number corresponding to each feature set are consistent or not, if not, acquiring the non-matched labels in each feature set, and circularly matching the non-matched labels to establish a personnel label set.
In one embodiment of the present invention, in one possible implementation,
the video fusion unit comprises:
the first fusion subunit is used for acquiring the video angle between the first shooting unit and the second shooting unit and establishing a projection domain with the same angle according to the video angle;
and the second fusion subunit is used for inputting the first angle video stream and the second angle video stream into the projection domain for video projection, and carrying out video correction on a projection result to obtain the three-dimensional video stream of the target area.
The invention provides an intelligent monitoring and outbound method based on big data, which is characterized by comprising the following steps:
step 1: collecting a three-dimensional video stream of a target area;
step 2: analyzing the three-dimensional video stream to obtain a personnel track corresponding to each personnel in the three-dimensional video stream;
step 3: acquiring a personnel track corresponding to each personnel, comparing the personnel track with a sample track in a database, and determining personnel behaviors corresponding to each personnel in different time periods;
step 4: and when the personnel behaviors belong to illegal behaviors, controlling a calling device closest to the corresponding personnel to carry out calling reminding.
In one embodiment of the present invention, in one possible implementation,
the step 3 includes:
step 31: respectively acquiring a corresponding advancing route of each personnel track, and classifying the personnel tracks according to the end points of the advancing routes to obtain a plurality of track classes;
step 32: searching all sample tracks corresponding to each track class in the database, respectively comparing each personnel track in the same track class with the corresponding sample track, and obtaining the track similarity between each personnel track and the sample track according to the comparison result;
step 33: according to the similarity between the personnel track and the different sample tracks, combining the sample behaviors corresponding to each sample track to carry out sample mutual adaptation training on a preset number of sample behaviors, and obtaining the personnel behaviors of corresponding personnel in different time periods.
The invention has the beneficial effects that: in order to realize real-time monitoring and outbound reminding, the personnel track corresponding to each person in the target area is determined by collecting and analyzing the three-dimensional video stream of the target area, and the personnel behavior of each person is then obtained by comparing the personnel tracks with the sample tracks in the big data.
Additional features and advantages of the invention will be set forth in the description which follows, and in part will be obvious from the description, or may be learned by practice of the invention. The objectives and other advantages of the invention will be realized and attained by the structure particularly pointed out in the written description and claims thereof as well as the appended drawings.
The technical scheme of the invention is further described in detail through the drawings and the embodiments.
Drawings
The accompanying drawings are included to provide a further understanding of the invention and are incorporated in and constitute a part of this specification, illustrate the invention and together with the embodiments of the invention, serve to explain the invention. In the drawings:
fig. 1 is a schematic diagram of an intelligent monitoring and outbound system based on big data in an embodiment of the present invention;
fig. 2 is a schematic diagram of the composition of the parsing module of an intelligent monitoring and outbound system based on big data in an embodiment of the present invention;
fig. 3 is a schematic workflow diagram of an intelligent monitoring and outbound method based on big data in an embodiment of the invention.
Detailed Description
The preferred embodiments of the present invention will be described below with reference to the accompanying drawings, it being understood that the preferred embodiments described herein are for illustration and explanation of the present invention only, and are not intended to limit the present invention.
Example 1
The embodiment provides an intelligent monitoring and outbound system based on big data, as shown in fig. 1, including:
the acquisition module is used for acquiring a three-dimensional video stream of the target area;
the parsing module is used for parsing the three-dimensional video stream to obtain a personnel track corresponding to each person in the three-dimensional video stream, and transmitting the personnel track corresponding to each person to the analysis module;
the analysis module is used for comparing the personnel track with the sample track in the database and determining the personnel behaviors of each personnel in different time periods;
and the outbound module is used for controlling a calling device closest to the corresponding person to carry out calling reminding when the person behavior belongs to the illegal behavior.
In this example, the target area may be any area that the user wants to monitor;
in this example, the person track represents a track generated when a corresponding person moves in a target area, and one target area may include a person track of one or more persons;
in this example, the database is derived from the network big data and updated in real time as the network big data is updated;
in this example, the illegal behaviors include both law-violating behaviors and bad behaviors, where a law-violating behavior means that the corresponding person's behavior breaks the law, such as robbery or arson, while a bad behavior means that the corresponding person's behavior violates public morality, such as spitting anywhere or climbing over railings;
in this example, the function of the call reminder is, on the one hand, to remind the offender not to continue the violation and, on the other hand, to alert surrounding personnel so that they can stop the offender's behavior in time.
The working principle and beneficial effects of this technical scheme are as follows: in order to realize real-time monitoring and outbound reminding, the personnel track corresponding to each person in the target area is determined by collecting and analyzing the three-dimensional video stream of the target area, and the personnel behavior of each person is then obtained by comparing the personnel tracks with the sample tracks in the big data.
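To make the data flow between the four modules concrete, the following minimal Python sketch wires an acquisition, parsing, analysis and outbound stage into one pipeline. Every class, method and field name here is a hypothetical placeholder chosen for illustration; the patent itself does not prescribe any particular data structures or implementation.

```python
from dataclasses import dataclass
from typing import List, Tuple

# Hypothetical data containers; the patent does not prescribe these structures.
@dataclass
class PersonTrack:
    person_id: str
    points: List[Tuple[float, float]]   # (x, y) positions over time

@dataclass
class BehaviorResult:
    person_id: str
    behavior: str
    is_violation: bool

class MonitoringPipeline:
    """Minimal sketch of the acquisition -> parsing -> analysis -> outbound flow."""

    def acquire(self) -> list:
        # Acquisition module: would fuse two camera angles into a 3D video stream.
        return []  # placeholder frame list

    def parse(self, frames: list) -> List[PersonTrack]:
        # Parsing module: would detect person outlines per frame and link them into tracks.
        return []

    def analyze(self, tracks: List[PersonTrack]) -> List[BehaviorResult]:
        # Analysis module: would compare tracks against sample tracks from the database.
        return []

    def call_out(self, results: List[BehaviorResult]) -> None:
        # Outbound module: would trigger the nearest calling device for violations.
        for r in results:
            if r.is_violation:
                print(f"Reminder issued for person {r.person_id}: {r.behavior}")

    def run(self) -> None:
        frames = self.acquire()
        tracks = self.parse(frames)
        results = self.analyze(tracks)
        self.call_out(results)

if __name__ == "__main__":
    MonitoringPipeline().run()
```

In a real deployment each stub would be replaced by the corresponding units described in the embodiments below.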
Example 2
Based on embodiment 1, the intelligent monitoring and outbound system based on big data:
the number of the calling devices is multiple, each calling device is connected with the outbound module, and the calling devices are respectively arranged at different area positions of the target area.
The working principle and beneficial effects of this technical scheme are as follows: a plurality of calling devices are arranged in the target area so that an outbound call can be made at any time, and the voice is played through the calling device, thereby realizing a voice reminder.
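One plausible way to choose the calling device closest to a given person is a simple nearest-neighbour search over the installed device positions, as in the sketch below. The device names, coordinates and the use of Euclidean distance are assumptions for illustration only; the patent merely requires that the closest device be selected.

```python
import math
from typing import Dict, Tuple

def nearest_calling_device(person_xy: Tuple[float, float],
                           devices: Dict[str, Tuple[float, float]]) -> str:
    """Return the id of the calling device closest to the person's position."""
    # Plain Euclidean distance; the patent only requires "closest", not a metric.
    return min(devices, key=lambda d: math.dist(person_xy, devices[d]))

# Example: three devices installed at different positions of the target area.
devices = {"gate": (0.0, 0.0), "corridor": (25.0, 10.0), "yard": (40.0, 35.0)}
print(nearest_calling_device((22.0, 12.0), devices))  # -> "corridor"
```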
Example 3
On the basis of embodiment 1, the intelligent monitoring and outbound system based on big data, the acquisition module comprises:
the first shooting unit is used for shooting a first angle video stream of the target area;
the second shooting unit is used for shooting a second angle video stream of the target area;
and the video fusion unit is used for fusing the first angle video stream and the second angle video stream to obtain the three-dimensional video stream of the target area.
The working principle and beneficial effects of this technical scheme are as follows: in order to improve monitoring accuracy and avoid errors when analyzing personnel behaviors, two shooting units are arranged for shooting, and the two angle video streams are then fused to obtain a three-dimensional video stream, laying the foundation for subsequent personnel analysis.
Example 4
On the basis of embodiment 1, the intelligent monitoring and outbound system based on big data, as shown in fig. 2, the parsing module includes:
the video framing unit is used for dividing the three-dimensional video stream into a plurality of frames of three-dimensional images to obtain an image sequence;
the image analysis unit is used for traversing each frame of three-dimensional image by utilizing preset character outlines respectively to obtain a plurality of personnel outlines on each frame of three-dimensional image, acquiring outline features corresponding to each personnel outline, respectively establishing feature labels for each personnel outline to obtain feature sets corresponding to each three-dimensional image, and sequencing the feature sets according to the sequence of the three-dimensional images in the image sequence to obtain a feature set sequence;
the personnel analysis unit is used for inputting the feature set sequence into the cyclic extraction model, obtaining similar feature labels in different feature sets and obtaining a personnel label set;
and the track determining unit is used for searching the corresponding personnel position change in the three-dimensional video stream according to the personnel tag set and establishing a personnel track.
In this example, the preset character outline represents a stick-figure template drawn with lines, in which the head of the figure is represented by a circle and the hands and feet are represented by straight lines;
in this example, the order of arrangement of the three-dimensional images in the image sequence is consistent with the progress of the video stream;
in this example, the profile features include profile height, profile color, profile width for each human profile;
in this example, the feature labels are labels for distinguishing contours established according to differences between different contour features;
in this example, the feature set represents a set of feature labels on a three-dimensional image;
in this example, the feature set sequence represents a sequence obtained by ordering corresponding feature sets on different three-dimensional images, wherein the order of arrangement of the feature sets is consistent with the order of the image sequence.
The working principle and beneficial effects of this technical scheme are as follows: the three-dimensional video stream is divided into frames, and personnel outlines are acquired on each three-dimensional image, so that it can be determined whether a person exists in the target area. Feature labels are then established from the personnel outlines of the same person on different three-dimensional images; by exploiting label consistency, the labels of the same person on different images can be gathered into a personnel label set. The corresponding personnel positions are then searched in the three-dimensional video stream and a personnel track is established, providing the basis for the subsequent determination of personnel behaviors.
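As an illustration of how per-frame feature labels might be organized into a feature set sequence, the sketch below reduces each detected personnel outline to the three profile features named above (height, width, color) and keeps one feature set per frame, in frame order. The dictionary keys and numeric values are made up for the example; the patent does not fix a feature representation.

```python
from dataclasses import dataclass
from typing import List

@dataclass
class FeatureLabel:
    """Assumed label built from the profile features named in the patent."""
    height: float        # profile height
    width: float         # profile width
    color: float         # profile color, here reduced to a single grey value

def build_feature_set(outlines: List[dict]) -> List[FeatureLabel]:
    # One feature label per detected personnel outline on a single 3D image.
    return [FeatureLabel(o["height"], o["width"], o["color"]) for o in outlines]

def build_feature_set_sequence(frames_outlines: List[List[dict]]) -> List[List[FeatureLabel]]:
    # Feature sets are kept in the same order as the frames in the image sequence.
    return [build_feature_set(outlines) for outlines in frames_outlines]

# Toy input: two frames, each with two detected outlines.
frames = [
    [{"height": 1.75, "width": 0.45, "color": 0.3}, {"height": 1.60, "width": 0.40, "color": 0.7}],
    [{"height": 1.74, "width": 0.46, "color": 0.3}, {"height": 1.61, "width": 0.41, "color": 0.7}],
]
sequence = build_feature_set_sequence(frames)
print(len(sequence), len(sequence[0]))  # 2 frames, 2 labels per frame
```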
Example 5
On the basis of embodiment 1, the intelligent monitoring and outbound system based on big data, the analysis module comprises:
the track classification unit is used for respectively acquiring the advancing routes corresponding to each personnel track, classifying the personnel tracks according to the end points of the advancing routes and obtaining a plurality of track classes;
the sample comparison unit is used for searching all sample tracks corresponding to each track class in the database respectively, comparing each personnel track in the same track class with the corresponding sample track, and obtaining the track similarity between each personnel track and the sample track according to the comparison result;
and the behavior confirmation unit is used for carrying out sample mutual adaptation training on a preset number of sample behaviors by combining the sample behaviors corresponding to each sample track according to the similarity between the personnel track and the different sample tracks, so as to obtain the personnel behaviors of the corresponding personnel in different time periods.
In this example, the forward route represents the forward direction of the person and the forward path;
in this example, the end point of the advancing route indicates which path the person takes within the target area; for example, person a walks a first path in the target area, while person b and person c walk a second path in the target area;
in this example, the track class represents the person track corresponding to each path;
in this example, a sample track represents a violation-free track for travelling along the corresponding path;
in this example, the trajectory similarity represents the degree of coincidence between the person trajectory and the sample trajectory;
in this example, the preset number may be 100.
The working principle and beneficial effects of this technical scheme are as follows: the advancing end point of each person is determined by analyzing the personnel tracks, and tracks with the same end point are grouped into a number of track classes. Sample comparison is then carried out for each track class, the similarity between each personnel track and each sample track is obtained from the comparison result, and mutual adaptation training is performed to obtain the behaviors of each person in different time periods. The positions of persons are also analyzed, because different facilities are arranged at different positions of the target area, and the same behavior performed at different positions does not necessarily have the same character.
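The sketch below illustrates the two mechanical parts of this module that the text fully specifies: grouping personnel tracks whose advancing routes end at the same place, and scoring how closely a personnel track follows a sample track. The grid-cell grouping and the distance-based similarity formula are assumptions for illustration; the patent does not define the similarity measure, and the sample mutual adaptation training step is not modelled here.

```python
import math
from collections import defaultdict
from typing import Dict, List, Tuple

Track = List[Tuple[float, float]]

def classify_by_endpoint(tracks: Dict[str, Track], cell: float = 5.0) -> Dict[Tuple[int, int], List[str]]:
    """Group personnel tracks whose advancing routes end in the same grid cell."""
    classes: Dict[Tuple[int, int], List[str]] = defaultdict(list)
    for pid, trk in tracks.items():
        end = trk[-1]
        key = (int(end[0] // cell), int(end[1] // cell))  # assumed discretisation of "same end point"
        classes[key].append(pid)
    return classes

def track_similarity(track: Track, sample: Track) -> float:
    """Assumed similarity: 1 / (1 + mean point-wise distance) over the shorter length."""
    n = min(len(track), len(sample))
    mean_dist = sum(math.dist(track[i], sample[i]) for i in range(n)) / n
    return 1.0 / (1.0 + mean_dist)

tracks = {"a": [(0, 0), (5, 5), (10, 10)], "b": [(1, 0), (6, 4), (11, 11)]}
sample = [(0, 0), (5, 4), (10, 9)]
print(classify_by_endpoint(tracks))                      # both tracks end in the same cell
print(round(track_similarity(tracks["a"], sample), 3))   # similarity of track "a" to the sample
```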
Example 6
On the basis of embodiment 1, the intelligent monitoring and outbound system based on big data, the outbound module comprises:
the behavior judging unit is used for analyzing the behaviors of the personnel to obtain behavior key points of corresponding personnel, and judging whether the behaviors of the personnel belong to illegal behaviors or not according to the behavior key points;
the behavior deepening unit is used for acquiring a violation time period corresponding to the violation when the personnel behavior belongs to the violation, acquiring a violation three-dimensional video corresponding to the violation time period, acquiring a violation image of a corresponding personnel in the violation three-dimensional video, and analyzing the violation image to obtain the violation type of the corresponding personnel;
and the call reminding unit is used for matching corresponding reminding voice according to the violation type and controlling a calling device closest to the corresponding person to carry out call reminding.
In this example, the behavior key points include: points where a person stops, points where a person moves rapidly, and points where the change speed of the person's hand movements is greater than a preset speed, the preset speed being 20 times/min;
in the example, the violation time period represents a corresponding time period when a violation person in the target area performs a violation;
in the example, the violation image is derived from a violation three-dimensional video and is obtained by intercepting an image of a violation person on the violation three-dimensional video when the violation person performs a violation;
in this example, the violation types include a general type and a specific type: the general type means that the person's behavior is a violation at any location within the target area, for example, person c's behavior is a violation anywhere in the target area; the specific type means that the person's behavior is a violation only at certain locations, for example, person d throwing garbage in front of a garbage can is not a violation, while person e throwing garbage in an open place is a violation;
in this example, the alert voices corresponding to different violation types are different;
in this example, controlling the calling device closest to the corresponding person to carry out call reminding means that the calling device closest to the person is automatically called through the computer, and the recorded voice is amplified and played through that calling device.
The working principle and beneficial effects of this technical scheme are as follows: to further analyze the behavior of each person in the target area, the personnel behaviors are analyzed to obtain their behavior key points, and it is then judged whether the behaviors are violations. Where necessary, violation images are intercepted from the violation three-dimensional video according to the violation time period, the violation type is analyzed, the corresponding reminding voice is matched, and the reminder is issued; during the reminder, the calling device closest to the offender is selected to play the voice. In this way, personnel behaviors can be assessed according to the particularities of different positions in the target area, and reminder errors that would undermine the user's trust in the system are avoided.
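A minimal sketch of the final dispatch step follows: looking up a reminding voice for the detected violation type and selecting the nearest calling device. The voice file paths, device layout and squared-distance comparison are all illustrative assumptions; the actual telephony or audio playback is only hinted at in a comment.

```python
from typing import Dict, Tuple

# Assumed mapping from violation type to a pre-recorded reminder voice file.
REMINDER_VOICES: Dict[str, str] = {
    "general": "voices/general_warning.wav",
    "specific": "voices/location_specific_warning.wav",
}

def issue_call_reminder(violation_type: str,
                        person_xy: Tuple[float, float],
                        devices: Dict[str, Tuple[float, float]]) -> Tuple[str, str]:
    """Match the reminder voice for the violation type and pick the nearest device."""
    voice = REMINDER_VOICES[violation_type]
    device_id = min(devices, key=lambda d: (devices[d][0] - person_xy[0]) ** 2
                                           + (devices[d][1] - person_xy[1]) ** 2)
    # In a real system this is where the computer would call the device and play the voice.
    return device_id, voice

devices = {"gate": (0.0, 0.0), "corridor": (25.0, 10.0)}
print(issue_call_reminder("specific", (23.0, 9.0), devices))  # -> ('corridor', '...')
```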
Example 7
On the basis of embodiment 4, the intelligent monitoring and outbound system based on big data, the personnel analysis unit includes:
the first analysis subunit is used for inputting the feature set sequence into a cyclic extraction model and respectively counting the first quantity corresponding to the feature labels in each feature set;
the second analysis subunit is used for acquiring a first feature set in the feature set sequence, extracting a first cyclic feature tag from the first feature set, traversing a second feature set by using the first cyclic feature tag, acquiring a first similar tag with the highest matching degree with the first cyclic feature tag from the second feature set, performing cyclic matching to obtain a plurality of similar tags, and establishing a personnel tag set;
and the third analysis subunit is used for marking the matched labels in each feature set respectively to obtain the second number of the matched labels in each feature set, judging whether the first number and the second number corresponding to each feature set are consistent or not, if not, acquiring the non-matched labels in each feature set, and circularly matching the non-matched labels to establish a personnel label set.
In this example, the first number represents the number of feature tags in each feature set;
in this example, the first feature set represents a first feature set in the sequence of feature sets;
in this example, the first cyclic feature tag represents a feature tag in a first feature set;
in this example, the first similar label represents a label that corresponds to the same person as the first cyclic feature label but is located in a different feature set;
in this example, the second number represents the number of tags in one feature set that have been included in the personnel tag set.
The working principle and beneficial effects of this technical scheme are as follows: because the people in the target area differ from one another and different people come and go at different times, the personnel label set is established in two rounds: first, part of the personnel label sets are built by traversing the subsequent feature sets with the feature labels of the first feature set; then the unmatched labels undergo the next round of matching to obtain the remaining personnel label sets. This guarantees the completeness of the personnel label sets so that no person is missed.
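The sketch below gives one possible reading of this two-round cyclic matching: labels from the first feature set seed the person label sets, each later feature set contributes its best-matching label to each set, and any labels left over (the case where the first and second counts disagree) open new person label sets. The squared-difference matching degree and the greedy assignment are assumptions made for illustration.

```python
from typing import List, Tuple

Label = Tuple[float, float, float]  # (height, width, color) feature label

def match_degree(a: Label, b: Label) -> float:
    # Assumed matching degree: higher when the feature labels are more alike.
    return -sum((x - y) ** 2 for x, y in zip(a, b))

def cyclic_match(feature_sets: List[List[Label]]) -> List[List[Label]]:
    """First round: labels of the first feature set seed the person label sets.
    Second round: labels left unmatched in later sets start their own sets."""
    person_sets: List[List[Label]] = [[lbl] for lbl in feature_sets[0]]
    for fset in feature_sets[1:]:
        unmatched = list(fset)
        for pset in person_sets:
            if not unmatched:
                break
            best = max(unmatched, key=lambda lbl: match_degree(pset[-1], lbl))
            pset.append(best)
            unmatched.remove(best)
        # Labels that matched nothing (first count != second count) open new person sets.
        person_sets.extend([lbl] for lbl in unmatched)
    return person_sets

sets = [
    [(1.75, 0.45, 0.3)],                         # frame 1: one person
    [(1.74, 0.46, 0.3), (1.60, 0.40, 0.7)],      # frame 2: a second person appears
]
print(len(cyclic_match(sets)))  # -> 2 person label sets
```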
Example 8
On the basis of embodiment 3, the intelligent monitoring and outbound system based on big data, the video fusion unit includes:
the first fusion subunit is used for acquiring the video angle between the first shooting unit and the second shooting unit and establishing a projection domain with the same angle according to the video angle;
and the second fusion subunit is used for inputting the first angle video stream and the second angle video stream into the projection domain for video projection, and carrying out video correction on a projection result to obtain the three-dimensional video stream of the target area.
In this example, the process of performing video correction on the projection result includes: marking preset area points on the first angle video stream and the second angle video stream respectively, and searching the projection result for the first projection area preset points and the second projection area preset points; adjusting the first projection area preset points and the second projection area preset points to the same three-dimensional image position, so as to obtain a first adjustment vector corresponding to the first projection area preset points and a second adjustment vector corresponding to the second projection area preset points; and correcting the first projection corresponding to the first angle video stream based on the first adjustment vector, and correcting the second projection corresponding to the second angle video stream based on the second adjustment vector, thereby completing the video correction.
The working principle and beneficial effects of this technical scheme are as follows: a projection domain is established according to the shooting angle between the two shooting units, video projection is carried out, and the projection result is then corrected, so that the three-dimensional video stream of the target area can be generated, providing the basis for the subsequent personnel behavior analysis.
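The correction step described above can be pictured with the small sketch below: each stream's projected preset point is compared with the agreed three-dimensional position, the difference becomes that stream's adjustment vector, and the vector is applied to the stream's projected points. The coordinate values and the rigid-shift correction are illustrative assumptions; a real system would correct whole video frames rather than a single point.

```python
from typing import List, Tuple

Point = Tuple[float, ...]

def adjustment_vector(projected: Point, target: Point) -> Point:
    """Vector that moves a projected preset point onto the agreed 3D position."""
    return tuple(t - p for p, t in zip(projected, target))

def correct_projection(points: List[Point], shift: Point) -> List[Point]:
    # Apply one adjustment vector to every point of a projected video stream.
    return [tuple(c + s for c, s in zip(pt, shift)) for pt in points]

# Both streams marked the same preset area point; their projections disagree slightly.
target = (10.0, 5.0, 2.0)
first_proj, second_proj = (10.2, 5.1, 2.0), (9.9, 4.8, 2.1)
v1 = adjustment_vector(first_proj, target)
v2 = adjustment_vector(second_proj, target)
print(correct_projection([first_proj], v1))   # approximately [(10.0, 5.0, 2.0)]
print(correct_projection([second_proj], v2))  # approximately [(10.0, 5.0, 2.0)]
```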
Example 9
The embodiment provides an intelligent monitoring and outbound method based on big data, as shown in fig. 3, including:
step 1: collecting a three-dimensional video stream of a target area;
step 2: analyzing the three-dimensional video stream to obtain a personnel track corresponding to each personnel in the three-dimensional video stream;
step 3: acquiring a personnel track corresponding to each personnel, comparing the personnel track with a sample track in a database, and determining personnel behaviors corresponding to each personnel in different time periods;
step 4: and when the personnel behaviors belong to illegal behaviors, controlling a calling device closest to the corresponding personnel to carry out calling reminding.
In this example, the target area may be any area that the user wants to monitor;
in this example, the person track represents a track generated when a corresponding person moves in a target area, and one target area may include a person track of one or more persons;
in this example, the database is derived from the network big data and updated in real time as the network big data is updated;
in this example, the illegal behaviors include both law-violating behaviors and bad behaviors, where a law-violating behavior means that the corresponding person's behavior breaks the law, such as robbery or arson, while a bad behavior means that the corresponding person's behavior violates public morality, such as spitting anywhere or climbing over railings;
in this example, the function of the call reminder is, on the one hand, to remind the offender not to continue the violation and, on the other hand, to alert surrounding personnel so that they can stop the offender's behavior in time.
The working principle and beneficial effects of this technical scheme are as follows: in order to realize real-time monitoring and outbound reminding, the personnel track corresponding to each person in the target area is determined by collecting and analyzing the three-dimensional video stream of the target area, and the personnel behavior of each person is then obtained by comparing the personnel tracks with the sample tracks in the big data.
Example 10
Based on embodiment 9, the intelligent monitoring and outbound method based on big data, the step 3, includes:
step 31: respectively acquiring a corresponding advancing route of each personnel track, and classifying the personnel tracks according to the end points of the advancing routes to obtain a plurality of track classes;
step 32: searching all sample tracks corresponding to each track class in the database, respectively comparing each personnel track in the same track class with the corresponding sample track, and obtaining the track similarity between each personnel track and the sample track according to the comparison result;
step 33: according to the similarity between the personnel track and the different sample tracks, combining the sample behaviors corresponding to each sample track to carry out sample mutual adaptation training on a preset number of sample behaviors, and obtaining the personnel behaviors of corresponding personnel in different time periods.
In this example, the forward route represents the forward direction of the person and the forward path;
in this example, the end point of the advancing route indicates which path the person takes within the target area; for example, person a walks a first path in the target area, while person b and person c walk a second path in the target area;
in this example, the track class represents the person track corresponding to each path;
in this example, a sample track represents a violation-free track for travelling along the corresponding path;
in this example, the trajectory similarity represents the degree of coincidence between the person trajectory and the sample trajectory;
in this example, the preset number may be 100.
The working principle and beneficial effects of this technical scheme are as follows: the advancing end point of each person is determined by analyzing the personnel tracks, and tracks with the same end point are grouped into a number of track classes. Sample comparison is then carried out for each track class, the similarity between each personnel track and each sample track is obtained from the comparison result, and mutual adaptation training is performed to obtain the behaviors of each person in different time periods. The positions of persons are also analyzed, because different facilities are arranged at different positions of the target area, and the same behavior performed at different positions does not necessarily have the same character.
Example 11
Based on embodiment 10, the intelligent monitoring and outbound method based on big data, the step 31, includes:
step 311: respectively acquiring a motion vector corresponding to each frame of three-dimensional image in the three-dimensional video stream, and calculating the path angle between each personnel track and a preset direction according to formula (1);
wherein α_k represents the path angle between the k-th personnel track and the preset direction, n represents the number of three-dimensional images in the three-dimensional video stream, x_b and y_b represent the x and y components of the motion vector in the preset direction, and x_ki and y_ki represent the x and y components of the motion vector of the i-th frame of three-dimensional image for the k-th personnel track;
step 312: establishing the forward direction corresponding to each personnel track according to the path angle between the personnel track and the preset direction, and establishing the advancing route corresponding to the personnel track;
step 313: and respectively acquiring the end points of each advancing route, classifying the personnel tracks with consistent end points, and obtaining a plurality of track classes.
In this instance, the preset direction may be a direction perpendicular to the target area.
The working principle and beneficial effects of this technical scheme are as follows: the path angle between each personnel track and the preset direction is determined by analyzing the motion vector corresponding to each three-dimensional image in the three-dimensional video stream, and the tracks are then classified according to the end point of each advancing route, which facilitates the subsequent person-by-person analysis of personnel behaviors.
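Formula (1) itself is not reproduced in this text, so the sketch below shows one plausible reading of the path-angle computation: average, over the n frames, the angle between each frame's motion vector (x_ki, y_ki) and the preset direction (x_b, y_b). The averaging and the choice of a vertical preset direction are assumptions for illustration only.

```python
import math
from typing import List, Tuple

Vec = Tuple[float, float]

def angle_between(v: Vec, b: Vec) -> float:
    """Angle in radians between a motion vector v and the preset direction b."""
    dot = v[0] * b[0] + v[1] * b[1]
    norm = math.hypot(*v) * math.hypot(*b)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def path_angle(motion_vectors: List[Vec], preset_dir: Vec = (0.0, 1.0)) -> float:
    """Assumed reading of formula (1): mean per-frame angle to the preset direction."""
    n = len(motion_vectors)
    return sum(angle_between(v, preset_dir) for v in motion_vectors) / n

# Motion vectors (x_ki, y_ki) of one personnel track across n frames.
vectors = [(0.1, 1.0), (0.0, 1.2), (-0.1, 0.9)]
print(round(math.degrees(path_angle(vectors)), 1))  # average deviation from the preset direction, in degrees
```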
It will be apparent to those skilled in the art that various modifications and variations can be made to the present invention without departing from the spirit or scope of the invention. Thus, it is intended that the present invention also include such modifications and alterations insofar as they come within the scope of the appended claims or the equivalents thereof.

Claims (10)

1. An intelligent monitoring and outbound system based on big data, comprising:
the acquisition module is used for acquiring a three-dimensional video stream of the target area;
the parsing module is used for parsing the three-dimensional video stream to obtain a personnel track corresponding to each person in the three-dimensional video stream, and transmitting the personnel track corresponding to each person to the analysis module;
the analysis module is used for comparing the personnel track with the sample track in the database and determining the personnel behaviors of each personnel in different time periods;
and the outbound module is used for controlling a calling device closest to the corresponding person to carry out calling reminding when the person behavior belongs to the illegal behavior.
2. The intelligent monitoring and outbound system based on big data of claim 1 wherein:
the number of the calling devices is multiple, each calling device is connected with the outbound module, and the calling devices are respectively arranged at different area positions of the target area.
3. The intelligent monitoring and outbound system based on big data of claim 1, wherein the acquisition module comprises:
the first shooting unit is used for shooting a first angle video stream of the target area;
the second shooting unit is used for shooting a second angle video stream of the target area;
and the video fusion unit is used for fusing the first angle video stream and the second angle video stream to obtain the three-dimensional video stream of the target area.
4. The intelligent monitoring and outbound system based on big data of claim 1, wherein the parsing module comprises:
the video framing unit is used for dividing the three-dimensional video stream into a plurality of frames of three-dimensional images to obtain an image sequence;
the image analysis unit is used for traversing each frame of three-dimensional image by utilizing preset character outlines respectively to obtain a plurality of personnel outlines on each frame of three-dimensional image, acquiring outline features corresponding to each personnel outline, respectively establishing feature labels for each personnel outline to obtain feature sets corresponding to each three-dimensional image, and sequencing the feature sets according to the sequence of the three-dimensional images in the image sequence to obtain a feature set sequence;
the personnel analysis unit is used for inputting the feature set sequence into the cyclic extraction model, obtaining similar feature labels in different feature sets and obtaining a personnel label set;
and the track determining unit is used for searching the corresponding personnel position change in the three-dimensional video stream according to the personnel tag set and establishing a personnel track.
5. The intelligent monitoring and outbound system based on big data of claim 1, wherein the analysis module comprises:
the track classification unit is used for respectively acquiring the advancing routes corresponding to each personnel track, classifying the personnel tracks according to the end points of the advancing routes and obtaining a plurality of track classes;
the sample comparison unit is used for searching all sample tracks corresponding to each track class in the database respectively, comparing each personnel track in the same track class with the corresponding sample track, and obtaining the track similarity between each personnel track and the sample track according to the comparison result;
and the behavior confirmation unit is used for carrying out sample mutual adaptation training on a preset number of sample behaviors by combining the sample behaviors corresponding to each sample track according to the similarity between the personnel track and the different sample tracks, so as to obtain the personnel behaviors of the corresponding personnel in different time periods.
6. The intelligent monitoring and outbound system based on big data of claim 1, wherein the outbound module comprises:
the behavior judging unit is used for analyzing the behaviors of the personnel to obtain behavior key points of corresponding personnel, and judging whether the behaviors of the personnel belong to illegal behaviors or not according to the behavior key points;
the behavior deepening unit is used for acquiring a violation time period corresponding to the violation when the personnel behavior belongs to the violation, acquiring a violation three-dimensional video corresponding to the violation time period, acquiring a violation image of a corresponding personnel in the violation three-dimensional video, and analyzing the violation image to obtain the violation type of the corresponding personnel;
and the call reminding unit is used for matching corresponding reminding voice according to the violation type and controlling a calling device closest to the corresponding person to carry out call reminding.
7. The intelligent monitoring and outbound system based on big data according to claim 4, wherein the personnel analysis unit comprises:
the first analysis subunit is used for inputting the feature set sequence into a cyclic extraction model and respectively counting the first quantity corresponding to the feature labels in each feature set;
the second analysis subunit is used for acquiring a first feature set in the feature set sequence, extracting a first cyclic feature tag from the first feature set, traversing a second feature set by using the first cyclic feature tag, acquiring a first similar tag with the highest matching degree with the first cyclic feature tag from the second feature set, performing cyclic matching to obtain a plurality of similar tags, and establishing a personnel tag set;
and the third analysis subunit is used for marking the matched labels in each feature set respectively to obtain the second number of the matched labels in each feature set, judging whether the first number and the second number corresponding to each feature set are consistent or not, if not, acquiring the non-matched labels in each feature set, and circularly matching the non-matched labels to establish a personnel label set.
8. The intelligent monitoring and outbound system based on big data as claimed in claim 3, wherein the video fusion unit comprises:
the first fusion subunit is used for acquiring the video angle between the first shooting unit and the second shooting unit and establishing a projection domain with the same angle according to the video angle;
and the second fusion subunit is used for inputting the first angle video stream and the second angle video stream into the projection domain for video projection, and carrying out video correction on a projection result to obtain the three-dimensional video stream of the target area.
9. An intelligent monitoring and outbound method based on big data is characterized by comprising the following steps:
step 1: collecting a three-dimensional video stream of a target area;
step 2: analyzing the three-dimensional video stream to obtain a personnel track corresponding to each personnel in the three-dimensional video stream;
step 3: acquiring a personnel track corresponding to each personnel, comparing the personnel track with a sample track in a database, and determining personnel behaviors corresponding to each personnel in different time periods;
step 4: and when the personnel behaviors belong to illegal behaviors, controlling a calling device closest to the corresponding personnel to carry out calling reminding.
10. The intelligent monitoring and outbound method based on big data as claimed in claim 9, wherein said step 3 comprises:
step 31: respectively acquiring a corresponding advancing route of each personnel track, and classifying the personnel tracks according to the end points of the advancing routes to obtain a plurality of track classes;
step 32: searching all sample tracks corresponding to each track class in the database, respectively comparing each personnel track in the same track class with the corresponding sample track, and obtaining the track similarity between each personnel track and the sample track according to the comparison result;
step 33: according to the similarity between the personnel track and the different sample tracks, combining the sample behaviors corresponding to each sample track to carry out sample mutual adaptation training on a preset number of sample behaviors, and obtaining the personnel behaviors of corresponding personnel in different time periods.
CN202310529418.0A 2023-05-11 2023-05-11 Intelligent monitoring and outbound system and method based on big data Pending CN116612594A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310529418.0A CN116612594A (en) 2023-05-11 2023-05-11 Intelligent monitoring and outbound system and method based on big data

Publications (1)

Publication Number Publication Date
CN116612594A true CN116612594A (en) 2023-08-18

Family

ID=87673999

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310529418.0A Pending CN116612594A (en) 2023-05-11 2023-05-11 Intelligent monitoring and outbound system and method based on big data

Country Status (1)

Country Link
CN (1) CN116612594A (en)

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101079171A (en) * 2007-06-01 2007-11-28 北京汇大通业科技有限公司 Intelligent video monitoring system of bank self-aid apparatus
CN101266710A (en) * 2007-03-14 2008-09-17 中国科学院自动化研究所 An all-weather intelligent video analysis monitoring method based on a rule
CN105979203A (en) * 2016-04-29 2016-09-28 中国石油大学(北京) Multi-camera cooperative monitoring method and device
CN107613410A (en) * 2017-09-14 2018-01-19 国家电网公司 A kind of video abstraction generating method being applied in power transformation monitor video
CN107657637A (en) * 2017-09-25 2018-02-02 中国农业大学 A kind of agricultural machinery working area acquisition methods
CN108282606A (en) * 2017-01-05 2018-07-13 浙江舜宇智能光学技术有限公司 Panorama mosaic method and its equipment
CN109190508A (en) * 2018-08-13 2019-01-11 南京财经大学 A kind of multi-cam data fusion method based on space coordinates
CN110968791A (en) * 2019-12-20 2020-04-07 贵阳货车帮科技有限公司 Data processing method, device and equipment for goods source route and storage medium
CN112288984A (en) * 2020-04-01 2021-01-29 刘禹岐 Three-dimensional visual unattended substation intelligent linkage system based on video fusion
CN113658394A (en) * 2021-08-12 2021-11-16 中冶京诚工程技术有限公司 River channel monitoring method and device
CN113989784A (en) * 2021-11-30 2022-01-28 福州大学 Road scene type identification method and system based on vehicle-mounted laser point cloud



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination