CN117423066B - Target person identification method based on multi-source data fusion analysis - Google Patents


Info

Publication number
CN117423066B
CN117423066B
Authority
CN
China
Prior art keywords
personnel
elevator
information
person
confirmed
Prior art date
Legal status (assumed; not a legal conclusion)
Active
Application number
CN202311724702.XA
Other languages
Chinese (zh)
Other versions
CN117423066A (en)
Inventor
张秀才
郝纯
蒋先勇
薛方俊
李志刚
魏长江
李财
胡晓晨
税强
曹尔成
Current Assignee (listing may be inaccurate; not verified by legal analysis)
Sichuan Sanside Technology Co ltd
Original Assignee
Sichuan Sanside Technology Co ltd
Priority date (assumed; not a legal conclusion)
Filing date
Publication date
Application filed by Sichuan Sanside Technology Co ltd
Priority to CN202311724702.XA
Publication of CN117423066A
Application granted
Publication of CN117423066B
Legal status: Active

Classifications

    • G06V20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning, using classification, e.g. of video objects
    • G06V10/80 Fusion, i.e. combining data from various sources at the sensor level, preprocessing level, feature extraction level or classification level
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
    • G06T2207/10016 Video; image sequence
    • G06T2207/30196 Human being; person
    • G06T2207/30232 Surveillance
    • G06T2207/30241 Trajectory


Abstract

The invention discloses a target person identification method based on multi-source data fusion analysis. Person information, the time point of entering an elevator, and the corresponding floor information are extracted from the elevator monitoring video streams of the residential unit buildings. A person to be confirmed is determined from the person information; the monitoring video streams of all locations outside the unit buildings in the community are acquired, and monitoring video clips of the person to be confirmed within a preset period before and after the time point of entering the elevator are intercepted from each of these streams. The times at which the person to be confirmed appears in each monitoring video clip, together with the corresponding position information, are recorded to generate the person's road track information. The person floor information and the road track information of the same person to be confirmed are fused to obtain that person's action route, which is matched against a preset library of normal action routes; if no match is found, the person is judged to be a target person.

Description

Target person identification method based on multi-source data fusion analysis
Technical Field
The invention relates to the technical field of intelligent social management, and in particular to a target person identification method based on multi-source data fusion analysis.
Background
In any residential community, many people enter and leave every day and the flow of people is complicated. Besides the resident households, there are many outside persons such as tenants, guests, express and takeaway couriers, salespeople, and house viewers. Most of them visit the community normally and pose no threat to its safety, but some target persons may pose a potential threat to the safety of the residents, so it is necessary to discover such target persons in advance and manage them in time.
Disclosure of Invention
The invention aims to provide a target person identification method based on multi-source data fusion analysis. The floor information corresponding to each person is estimated from the elevator monitoring video stream, and the person's road tracks at all locations in the community outside the buildings are spliced with the corresponding floors to obtain a complete path containing floor information. Target persons are thereby screened out, discovered, and managed in advance, guaranteeing the safety of the community.
In one aspect, the application provides a target person identification method based on multi-source data fusion analysis, which specifically comprises the following steps:
S1, acquiring discretized person floor information, which comprises person information, the time point of entering an elevator, and the corresponding floor information; the person floor information is extracted from the elevator monitoring video streams in the residential unit buildings;
S2, determining the persons to be confirmed according to the person information, acquiring the monitoring video streams of all locations outside the unit buildings in the community, and intercepting, from each of these streams, monitoring video clips of the person to be confirmed within a preset period before and after the time point of entering the elevator;
S3, recording the times at which the person to be confirmed appears in each monitoring video clip and the corresponding position information, and generating the road track information of the person to be confirmed;
S4, fusing the person floor information and the road track information of the same person to be confirmed to obtain the action route of the person to be confirmed;
S5, matching the action route of the person to be confirmed against a preset library of normal action routes, and judging the person to be a target person if no match is found.
In a specific embodiment, the method for obtaining the discretized person floor information specifically comprises:
S11, acquiring the monitoring information uploaded by the elevator monitoring equipment of each building unit of the community, the monitoring information comprising the elevator monitoring video stream, the elevator monitoring equipment number, and the elevator monitoring position;
S12, acquiring the elevator door opening signals detected by the elevator sensor probe, and intercepting elevator monitoring video clips from the elevator monitoring video stream, taking the upload time points of two adjacent door opening signals as the start and end of each interception;
S13, capturing video frame pictures from the elevator monitoring video clips and recording the capture time point of each video frame picture;
S14, inputting the video frame pictures into a preset multitask detection model, and detecting the person area and the elevator key panel area in each video frame picture to form a detection picture set;
S15, analyzing the detection picture set to obtain the person information, the time point of entering the elevator, and the corresponding floor information.
In a specific embodiment, the analysis process of step S15 specifically comprises:
S151, performing person identification and marking on the person areas of the video frame pictures in the current detection picture set to obtain person information, querying the video frame pictures containing the same person through the person information to form a floor analysis picture set, and taking the capture time point of the video frame picture at which the person enters the elevator as the time point of entering the elevator;
S152, comparing the elevator key panel areas of two consecutive video frame pictures in the floor analysis picture set and judging whether they are the same; if not, intercepting a difference picture and identifying it to obtain first floor information;
S153, storing the first floor information, person information, time point of entering the elevator, elevator monitoring equipment number, and position of the same person as one piece of data in the discretized person floor information.
In a specific embodiment, in step S152, when a change in the elevator key panel area is detected during comparison, the elevator key panel area is expanded; the person area and the expanded key panel area in the video frame picture preceding the change are identified to detect the person performing the key operation and obtain that person's information, and the first floor information is associated with that person information.
In a specific embodiment, the specific process of intercepting the difference picture in step S152 is:
dividing the elevator key panel area into a plurality of grid blocks according to the keys, and recording the RGB value of each grid block;
comparing whether the RGB values of corresponding grid blocks in the elevator key panel areas of two adjacent video frame pictures are the same;
if the RGB values of a grid block differ between the two pictures, intercepting the grid blocks with differing RGB values as the difference picture.
In a specific embodiment, the specific process of generating the road track information of the person to be confirmed is:
S31, summarizing the times at which the person to be confirmed appears in each monitoring video clip and the corresponding position information, and arranging the position information in chronological order to obtain the person's road track list outside the community buildings;
S32, traversing the road track list in sequence, and marking a piece of data with a first label when its position information is the starting point of an action route;
S33, continuing the traversal, and marking a piece of data with a second label when its position information is the end point of the same action route;
S34, reading all data between the first label and the second label and forming road track information in chronological order;
S35, repeating steps S32-S34 until the road track list has been fully traversed.
In a specific embodiment, the specific process of generating the action route of the person in step S4 comprises:
collecting the road track information and person floor information of the person to be confirmed over one day, and splicing the floor information corresponding to the person with the road track information according to the time point of entering the elevator to obtain the action route.
It can be understood that the action track a resident household generates each day is fixed: the building the resident reaches is a specific floor of a fixed building. The action track of a tenant, or of a visitor with an appointment, is likewise a fixed route, and the action track of an express or takeaway courier is also very regular. A target person, by contrast, tends to roam through the community, may enter and exit several unit buildings, may visit every floor of several buildings while scouting locations, and may deliberately plan a route through monitoring blind spots. Their action route is therefore clearly different from that of normal persons, and analyzing action routes makes it possible to find such target persons.
In the prior art, identifying target persons in a community generally means continuously tracking every person entering the community to obtain their moving route within the community (without unit-floor information), and comprehensively judging whether a person's face is blocked or their movement is suspicious. Such methods typically need to process massive data to mark out special persons and then keep analyzing them, i.e., continuously acquire their real-time video; every person entering the community must be analyzed to screen out the special ones, and the resulting path information contains no in-building floor information. Analysis of target persons shows that they may enter a building directly, take the elevator to the top, and sweep downward floor by floor to scout each level, whereas some workers, such as express couriers, property staff, and security guards, generally do not enter the building floors at all.
Therefore, by analyzing the elevator monitoring video, the persons entering the elevator can be determined as the persons to be confirmed and analyzed, which reduces the volume of data to be analyzed. In addition, the floor information of a person entering the elevator can be estimated from the in-elevator monitoring video: the elevator monitoring video stream is discretized, and the floor information of each person's in-building path is estimated from the footage and the person's specific actions. Using the time point at which the person enters the elevator, the monitoring information from all locations in the community outside the buildings is obtained and the person is continuously analyzed to obtain their road track outside the buildings. Splicing this road track with the corresponding floors yields a complete path containing floor information, and matching this complete path determines whether the person is a target person.
The invention has the beneficial effects that:
according to the method, the personnel entering the elevator are determined to be the personnel to be confirmed to be analyzed through analyzing the elevator monitoring video, so that the analyzed data volume can be reduced, in addition, the floor information of the personnel entering the elevator can be predicted through the monitoring video in the elevator, discretization processing is carried out on the elevator monitoring video stream, the floor information corresponding to each personnel is estimated, the information monitored everywhere in the cell outside the building is obtained through the time point when the personnel enter the elevator, continuous analysis is carried out on the personnel, the road track of the personnel outside the building in the cell is obtained, the road track of the personnel is spliced with the corresponding floors, a complete path containing the floor information is obtained, the complete path is matched, and therefore target personnel are screened out, the target personnel are found in advance to manage and control the target personnel, and the safety of the cell is guaranteed.
Drawings
FIG. 1 is a flow chart of a target person identification method based on multi-source data fusion analysis;
FIG. 2 is a schematic diagram illustrating generation of a floor analysis photo set according to an embodiment of the present invention;
FIG. 3 is a schematic diagram illustrating the identification region division of a video frame picture according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a path of a person's actions in an embodiment of the invention.
Detailed Description
The following description of the embodiments of the present invention is made clearly and completely with reference to the accompanying drawings; it is apparent that the described embodiments are only some, not all, of the embodiments of the present invention. The following description of at least one exemplary embodiment is merely illustrative and is in no way intended to limit the invention, its application, or uses. All other embodiments obtained by those skilled in the art based on the embodiments of the invention without inventive effort fall within the scope of the invention.
The relative arrangement of the components and steps, numerical expressions and numerical values set forth in these embodiments do not limit the scope of the present invention unless it is specifically stated otherwise.
Meanwhile, it should be understood that the sizes of the respective parts shown in the drawings are not drawn in actual scale for convenience of description.
In addition, descriptions of well-known structures, functions and configurations may be omitted for clarity and conciseness. Those of ordinary skill in the art will recognize that various changes and modifications of the examples described herein can be made without departing from the spirit and scope of the present disclosure.
Techniques, methods, and apparatus known to one of ordinary skill in the relevant art may not be discussed in detail, but should be considered part of the specification where appropriate.
In all examples shown and discussed herein, any specific values should be construed as merely illustrative, and not a limitation. Thus, other examples of the exemplary embodiments may have different values.
Example 1
As shown in fig. 1, this embodiment provides a target person identification method based on multi-source data fusion analysis, which specifically comprises the following steps:
s1, acquiring discretized personnel floor information, wherein the personnel floor information comprises personnel information, time points entering an elevator and corresponding floor information; the personnel floor information is extracted from elevator monitoring video streams in the residential unit buildings;
the method for obtaining the discretized personnel floor information specifically comprises the following steps:
s11, acquiring monitoring information uploaded by elevator monitoring equipment of each building unit of a community, wherein the monitoring information comprises an elevator monitoring video stream, an elevator monitoring equipment number and an elevator monitoring position;
Monitoring equipment is generally deployed at each key location in a community, such as the community gates, the building units, the elevators, and the community roads. Each monitoring device has a unique identifier, and each device number and installation position can represent the spatial position being monitored;
S12, acquiring the elevator door opening signals detected by the elevator sensor probe, and intercepting elevator monitoring video clips from the elevator monitoring video stream, taking the upload time points of two adjacent door opening signals as the start and end of each interception;
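The clip interception in S12 can be sketched as follows. This is an illustrative Python sketch, not the patent's implementation; it assumes door-open signal times and frame timestamps share one clock, and the function and variable names are hypothetical:

```python
# Illustrative sketch: one clip per elevator stop, bounded by two
# consecutive door-open signals (assumed to share the frames' clock).

def clip_between_signals(frame_times, door_open_times):
    """Return, for each pair of consecutive door-open signals, the
    frame timestamps falling between them."""
    clips = []
    for start, end in zip(door_open_times, door_open_times[1:]):
        clips.append([t for t in frame_times if start <= t < end])
    return clips

frames = [0.0, 0.5, 1.0, 1.5, 2.0, 2.5, 3.0]
signals = [0.5, 2.0, 3.0]  # three door-open events -> two clips
print(clip_between_signals(frames, signals))
# [[0.5, 1.0, 1.5], [2.0, 2.5]]
```

In a real deployment the timestamps would come from the video stream and the sensor upload log; only the bounding logic is shown here.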
s13, capturing video frame pictures from the elevator monitoring video clips, and recording the capturing time point of each video frame picture;
S14, inputting the video frame pictures into a preset multitask detection model, detecting the person area, the floor display area, and the elevator key panel area in each video frame picture as shown in FIG. 3, and splicing the video frame pictures in order of their capture time points to form a detection picture set; specifically, the elevator key panel area refers only to the floor button area, and the door opening and closing keys do not belong to the elevator key panel area described in this application;
s15, analyzing the detection picture set to obtain personnel information, time points entering the elevator and corresponding floor information.
The analysis process of step S15 specifically comprises:
S151, performing person identification and marking on the person areas of the video frame pictures in the current detection picture set to obtain person information, querying the video frame pictures containing the same person through the person information to form a floor analysis picture set, and taking the capture time point of the video frame picture at which the person enters the elevator as the time point of entering the elevator;
the method comprises the steps of analyzing personnel, identifying the personnel in a video stream, marking each personnel so as to track the personnel, and recording the initial time point of the personnel in a monitoring video stream, wherein the personnel information comprises: when at least two pieces of personnel information of people appearing in different video frame pictures are matched, the personnel in the two video frame pictures are judged to be the same person, so that continuous tracking of the personnel is realized, and the appearance time of the same person in different monitoring video streams and the corresponding monitoring position are summarized to describe the action track of the same person. As shown in fig. 2, when the person is marked again on the floor analysis picture set, only the marking information of the same person is reserved, specifically, the video frame picture obtained when the person enters the elevator is the video frame picture corresponding to the first time when the face can be detected. It should be noted that, the recognition and comparison of the face, the body shape and the dressing in the picture are common techniques for those skilled in the art, and are not repeated herein. In addition, it should be noted that, in the present application, face data collection of people appearing in the video stream is agreed by the parties and the collected faces are used for cell security analysis, which accords with the laws and regulations stipulated by the country about personal privacy to obtain and apply faces.
S152, comparing the elevator key panel areas of two consecutive video frame pictures in the floor analysis picture set and judging whether they are the same; if not, intercepting a difference picture and identifying it to obtain first floor information;
In step S152, when a change in the elevator key panel area is detected during comparison, the elevator key panel area is expanded; the person area and the expanded key panel area in the video frame picture preceding the change are identified to detect the person performing the key operation and obtain that person's information, and the first floor information is associated with that person information.
The specific process of intercepting the difference picture in step S152 is as follows:
dividing the elevator key panel area into a plurality of grid blocks according to the keys, and recording the RGB value of each grid block;
comparing whether the RGB values of corresponding grid blocks in the elevator key panel areas of two adjacent video frame pictures are the same;
if the RGB values of a grid block differ between the two pictures, intercepting the grid blocks with differing RGB values as the difference picture.
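A minimal sketch of this difference-picture step: the key panel region is divided into per-button grid blocks, each summarised by an RGB value, and the blocks whose RGB changed between two consecutive frames are returned as the "difference picture" (the newly lit button). The dictionary representation and names are illustrative assumptions, not the patent's data model:

```python
# Each panel is a dict mapping button label -> representative (R, G, B)
# for that button's grid block; a pressed button lights up and changes
# its block's colour between consecutive frames.

def changed_blocks(panel_prev, panel_curr):
    """Return the button labels whose grid-block RGB value changed."""
    return [key for key in panel_prev
            if panel_prev[key] != panel_curr.get(key)]

prev_panel = {"1": (40, 40, 40), "2": (40, 40, 40), "3": (40, 40, 40)}
curr_panel = {"1": (40, 40, 40), "2": (200, 120, 0), "3": (40, 40, 40)}
print(changed_blocks(prev_panel, curr_panel))  # ['2'] -> button 2 pressed
```

In practice each block's RGB would be a mean over its pixels with a tolerance threshold rather than exact equality; only the grid comparison itself is shown.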
More specifically, when a person takes the elevator to go downstairs they also press a key, i.e., floor information for that person is generated at that moment. Therefore, when the first floor information is generated, it is judged whether the floor identified in the difference picture is the first floor; if it is, the floor display area in the video frame picture corresponding to the difference picture is further identified to obtain the floor the elevator is currently on, i.e., the floor the person is coming from, and that floor is taken as the first floor information.
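The going-down case can be sketched as a small decision rule. This is an assumed reading of the paragraph above (the patent does not give code), and the names are hypothetical:

```python
# If the recognised difference picture is the "1" button, the rider is
# going down, so their floor is read from the floor display area of the
# same frame; otherwise the pressed key itself is the floor information.

def first_floor_info(pressed_button, floor_display):
    """pressed_button: label recognised from the difference picture;
    floor_display: value shown in the elevator's floor indicator."""
    if pressed_button == "1":   # going down: origin = current display
        return floor_display
    return pressed_button       # going up: floor = pressed key

print(first_floor_info("5", "1"))  # boards at ground floor, goes to 5 -> '5'
print(first_floor_info("1", "7"))  # boards at floor 7, goes down -> '7'
```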
S153, storing the first floor information, person information, time point of entering the elevator, elevator monitoring equipment number, and position of the same person as one piece of data in the discretized person floor information set.
S2, determining a person to be confirmed according to personnel information, acquiring monitoring video streams of all positions outside a unit building in a cell, and respectively intercepting monitoring video fragments of the person to be confirmed in a preset time period before and after a time point of entering an elevator from the monitoring video streams of all positions outside the building;
In order to reduce the amount of computation, persons who obviously do not belong to the target persons can be excluded when determining the persons to be confirmed. For example, a person's facial features are matched against a face information base pre-registered by the community owners: if they match, the person is not analyzed further; if they do not match, the person is determined to be a person to be confirmed, their action route is analyzed, and whether they are a target person is further judged.
S3, recording the time of the appearance of the person to be confirmed in each monitoring video segment and the position information of the person to be confirmed, and generating road track information of the person to be confirmed;
The position information where the person to be confirmed appears at a community gate or a unit building is taken as an endpoint of a route, and the road track information of the person to be confirmed is generated accordingly.
Specifically, a piece of road track information may take a community gate as the starting point and a unit building as the end point; a unit building as the starting point and a community gate as the end point; the same unit building as both starting point and end point; or the same or different community gates as starting point and end point. It should be noted that a community gate is understood here to mean any entrance of the community; for example, each pedestrian gate and the garage entrance may all be considered community gates, and a unit building means any location that can be photographed before entering the elevator.
The specific process of generating the road track information of the person to be confirmed is as follows:
S31, summarizing the times at which the person to be confirmed appears in each monitoring video clip and the corresponding position information, and arranging the position information in chronological order to obtain the person's road track list outside the community buildings;
S32, traversing the road track list in sequence, and marking a piece of data with a first label when its position information is the starting point of an action route;
S33, continuing the traversal, and marking a piece of data with a second label when its position information is the end point of the same action route;
S34, reading all data between the first label and the second label and forming road track information in chronological order;
S35, repeating steps S32-S34 until the road track list has been fully traversed.
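Steps S31-S35 can be sketched as a single pass over the time-ordered list: a segment is opened at an endpoint position (community gate or unit building, the "first label") and closed at the next endpoint (the "second label"). This is an illustrative sketch; the position names and the `ENDPOINTS` set are assumptions, not the patent's data:

```python
# One pass over the time-ordered track list, splitting it into road
# tracks bounded by endpoint positions (gate or unit building).

ENDPOINTS = {"gate", "building_1", "building_2", "building_3"}

def split_tracks(track_list):
    """track_list: time-ordered [(time, position), ...]."""
    tracks, current = [], None
    for time, pos in track_list:
        if current is None:
            if pos in ENDPOINTS:          # first label: segment start
                current = [(time, pos)]
        else:
            current.append((time, pos))
            if pos in ENDPOINTS:          # second label: segment end
                tracks.append(current)
                current = None
    return tracks

records = [(1, "gate"), (2, "road"), (3, "building_1"),
           (4, "building_1"), (5, "road"), (6, "gate")]
for track in split_tracks(records):
    print([p for _, p in track])
# ['gate', 'road', 'building_1']
# ['building_1', 'road', 'gate']
```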
S4, merging the personnel floor information and the road track information of the same personnel to be confirmed to obtain the action route of the personnel to be confirmed;
The specific process of generating the action route of the person in step S4 comprises:
collecting the road track information and person floor information of the person to be confirmed over one day, and splicing the floor information corresponding to the person with the road track information according to the time point of entering the elevator to obtain the action route;
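The splicing in step S4 can be sketched as a merge by timestamp: each (entry time, floor) record from the discretized person floor information is inserted into the person's time-ordered road track, so the fused action route carries the in-building floor stops. Place names and the `floor_` labelling are illustrative assumptions:

```python
# Merge the outdoor road track with elevator floor records by time,
# yielding one fused action route for the day.

def fuse_route(road_track, floor_records):
    """road_track: [(time, place)]; floor_records: [(entry_time, floor)]."""
    merged = road_track + [(t, f"floor_{fl}") for t, fl in floor_records]
    merged.sort(key=lambda item: item[0])
    return [place for _, place in merged]

track = [(900, "gate"), (905, "express_point"), (910, "building_2")]
floors = [(912, 2)]  # entered the elevator at 912, pressed floor 2
print(fuse_route(track, floors))
# ['gate', 'express_point', 'building_2', 'floor_2']
```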
S5, matching the action route of the person to be confirmed against a preset library of normal action routes, and judging the person to be a target person if the action route is not in the library or if the number of the person's action routes exceeds a threshold.
It can be understood that the preset library of normal action routes enumerates all possible routes of normal residents, covering residents of different floors and buildings. The moving route of an ordinary resident is quite purposeful and fixed within the community; outside visitors are also generally purposeful and go directly to the corresponding building unit, whose building and floor information is fixed; and the start and end points of routes within the community are generally a community gate or a unit building, with only one corresponding building or floor. A target person, however, may scout locations and therefore appear in multiple unit buildings without a clear purpose: after entering the community they may enter several unit buildings in succession and sweep through them, so a route may only record entering a certain floor before the person appears in another elevator, producing a sequence of different unit-floor records. If a person has numerous action routes, exceeding a certain threshold, for example routes covering several unit buildings, the person may be a target person.
Specifically, a typical resident's action route is cell gate-road junction-unit building or cell gate-express delivery point-unit building, and so on, while an express courier's route is cell gate-express delivery point-cell gate. It can be seen that some external personnel, such as express couriers, property staff and security guards, generally do not enter the floors of a unit building, whereas a target person scouting a unit building, especially the high floors, will generally ride to the top floor and then sweep downward floor by floor to avoid the surveillance. Therefore, by deriving the action routes of the persons under analysis from the elevator surveillance video, the personnel who never go up into the unit buildings can be excluded, reducing the amount of data to be analysed.
To better illustrate the implementation method, as shown in fig. 4, the action route distribution of three types of people is taken as an example: the route of person 1 is cell gate-express point-unit building 2-floor 2; the route of person 2 is cell gate-express point-cell gate; and the route of person 3 is cell gate-unit building 1-floor 5-unit building 2-floor 5-unit building 3-floor 5-cell gate. It can be seen that the routes of persons 1 and 2 are short and purposeful; person 2 never enters a unit building and has no floor information, and so can be regarded as a person merely running an errand whom the system need not analyse. Only persons 1 and 3 need to be analysed, which reduces the amount of computation. Person 3 has numerous route segments and enters and leaves several unit buildings repeatedly; this route cannot be matched against the preset normal action route library, so person 3 is locked as a target person and placed in the target person information library, and that person's actions and route are given close attention the next time he or she enters the cell.
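Under the same illustrative assumptions (string location labels, a set-based route library; none of these names come from the patent), the screening of the three persons in fig. 4 could look like this: person 2, who has no floor information, is dropped before analysis, and person 3's multi-building route fails to match the library.

```python
# action routes of the three persons from fig. 4, as simple location strings
routes = {
    "person 1": ["cell gate", "express point", "unit building 2", "floor 2"],
    "person 2": ["cell gate", "express point", "cell gate"],
    "person 3": ["cell gate", "unit building 1", "floor 5", "unit building 2",
                 "floor 5", "unit building 3", "floor 5", "cell gate"],
}
# library of enumerated normal routes (here only person 1's route is normal)
normal_library = {
    ("cell gate", "express point", "unit building 2", "floor 2"),
}

def has_floor_info(route):
    """A person with no floor records never entered a unit building."""
    return any(location.startswith("floor") for location in route)

# exclude persons without floor information to cut the analysis workload
to_analyse = {p: r for p, r in routes.items() if has_floor_info(r)}
# lock as targets those whose route is absent from the normal route library
targets = [p for p, r in to_analyse.items() if tuple(r) not in normal_library]
```

Running this sketch leaves only persons 1 and 3 in `to_analyse` and flags person 3 as the sole target, matching the outcome described for fig. 4.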
It will be appreciated by those skilled in the art that embodiments of the present invention may be provided as a method, system, or computer program product. Accordingly, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment or an embodiment combining software and hardware aspects. Furthermore, the present invention may take the form of a computer program product embodied on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, optical storage, and the like) having computer-usable program code embodied therein.
The present invention is described with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the invention. It will be understood that each flow and/or block of the flowchart illustrations and/or block diagrams, and combinations of flows and/or blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general purpose computer, special purpose computer, embedded processor, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be stored in a computer-readable memory that can direct a computer or other programmable data processing apparatus to function in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means which implement the function specified in the flowchart flow or flows and/or block diagram block or blocks.
These computer program instructions may also be loaded onto a computer or other programmable data processing apparatus to cause a series of operational steps to be performed on the computer or other programmable apparatus to produce a computer implemented process such that the instructions which execute on the computer or other programmable apparatus provide steps for implementing the functions specified in the flowchart flow or flows and/or block diagram block or blocks.
The foregoing description of the preferred embodiment of the invention is not intended to limit the invention in any way, but rather to cover all modifications, equivalents, improvements and alternatives falling within the spirit and principles of the invention.

Claims (6)

1. A target person identification method based on multi-source data fusion analysis, characterized by comprising the following steps:
s1, acquiring discretized personnel floor information, wherein the personnel floor information comprises personnel information, time points entering an elevator and corresponding floor information; the personnel floor information is extracted from elevator monitoring video streams in the residential unit buildings;
the method for obtaining the discretized personnel floor information specifically comprises the following steps:
s11, acquiring monitoring information uploaded by elevator monitoring equipment of each building unit of a community, wherein the monitoring information comprises an elevator monitoring video stream, an elevator monitoring equipment number and an elevator monitoring position;
s12, acquiring an elevator door opening signal detected by an elevator sensor probe; taking the time points of uploading the two adjacent elevator door opening signals as a starting point and a terminal point of interception, and intercepting elevator monitoring video clips from an elevator monitoring video stream;
s13, capturing video frame pictures from the elevator monitoring video clips, and recording the capturing time point of each video frame picture;
s14, inputting video frame pictures into a preset multitask detection model, and detecting a personnel area and an elevator key panel area in each video frame picture to form a detection picture set;
s15, analyzing the detection picture set to obtain personnel information, a time point of entering an elevator and corresponding floor information;
s2, determining a person to be confirmed according to personnel information, acquiring monitoring video streams of all positions outside a unit building in a cell, and respectively intercepting monitoring video fragments of the person to be confirmed in a preset time period before and after a time point of entering an elevator from the monitoring video streams of all positions outside the building;
s3, recording the time of the appearance of the person to be confirmed in each monitoring video segment and the position information of the person to be confirmed, and generating road track information of the person to be confirmed;
s4, merging the personnel floor information and the road track information of the same personnel to be confirmed to obtain the action route of the personnel to be confirmed;
s5, matching the action route of the person to be confirmed against a preset normal action route library, and judging the person to be a target person if the action route is not in the route library.
2. The method for identifying target personnel based on multi-source data fusion analysis according to claim 1, wherein the analysis process of step S15 specifically comprises:
s151, respectively carrying out personnel identification and marking on personnel areas of video frame pictures in the current detection picture set to obtain personnel information, inquiring video frame pictures containing the same personnel through the personnel information to form a floor analysis picture set, and taking the interception time point of the video frame pictures when the personnel enter the elevator as the time point of entering the elevator;
s152, comparing elevator key panel areas of two continuous video frame pictures in the floor analysis picture set, judging whether the elevator key panel areas are the same, if so, intercepting a difference picture, and identifying the difference picture to obtain first floor information;
and S153, storing the first floor information, personnel information, time point of entering the elevator, elevator monitoring equipment number and position of the same personnel as one piece of data into the discretized personnel floor information.
3. The method for identifying target personnel based on multi-source data fusion analysis according to claim 2, wherein in step S152, when a change in the elevator key panel area is detected by the comparison, the elevator key panel area is expanded; the personnel area and the expanded elevator key panel area in the video frame picture preceding the change are identified, the person performing the key operation is detected, and the information of that person is obtained; the first floor information is then associated with that person's information.
4. The method for identifying target personnel based on multi-source data fusion analysis according to claim 2, wherein the specific process of capturing the difference picture in step S152 is as follows:
dividing an elevator key panel area into a plurality of grid blocks according to keys, and recording RGB values of the grid blocks;
comparing whether RGB values of grid blocks of elevator key panel areas of two adjacent video frame pictures are the same or not;
if the RGB values of the grid blocks of the two pictures are detected to be different, the grid blocks with different RGB values are intercepted to be used as difference pictures.
5. The method for identifying target personnel based on multi-source data fusion analysis according to claim 1, wherein the specific process of generating the road track information of the personnel to be confirmed is as follows:
s31, summarizing the time of the appearance of the person to be confirmed in each monitoring video segment and the position information of the person to be confirmed, and arranging the position information according to the time sequence to obtain a road track list outside the cell building of the person to be confirmed;
s32, traversing the road track list in sequence, and marking a first label on the piece of data when traversing to the position information which is the starting point of the action route;
s33, continuing to traverse downwards, and marking the piece of data with a second label when traversing to position information that is the end point of the same action route;
s34, reading all data contents between the first label and the second label data, and forming road track information according to time sequence;
s35, repeating the steps S32-S34 until the road track list is traversed.
6. The method for identifying target personnel based on multi-source data fusion analysis according to claim 1, wherein the specific process of generating the action route of the personnel in step S4 comprises:
and counting the road track information and the personnel floor information of the personnel to be confirmed in one day, and splicing the floor information corresponding to the personnel to be confirmed with the road track information according to the time point of entering the elevator to obtain an action route.
CN202311724702.XA 2023-12-15 2023-12-15 Target person identification method based on multi-source data fusion analysis Active CN117423066B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311724702.XA CN117423066B (en) 2023-12-15 2023-12-15 Target person identification method based on multi-source data fusion analysis

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202311724702.XA CN117423066B (en) 2023-12-15 2023-12-15 Target person identification method based on multi-source data fusion analysis

Publications (2)

Publication Number Publication Date
CN117423066A CN117423066A (en) 2024-01-19
CN117423066B true CN117423066B (en) 2024-02-27

Family

ID=89526927

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202311724702.XA Active CN117423066B (en) 2023-12-15 2023-12-15 Target person identification method based on multi-source data fusion analysis

Country Status (1)

Country Link
CN (1) CN117423066B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117830911B (en) * 2024-03-06 2024-05-28 一脉通(深圳)智能科技有限公司 Intelligent identification method and device for intelligent camera, electronic equipment and medium

Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008152328A (en) * 2006-12-14 2008-07-03 Hitachi Information & Control Solutions Ltd Suspicious person monitoring system
JP2010182287A (en) * 2008-07-17 2010-08-19 Steven C Kays Intelligent adaptive design
JP2014229068A (en) * 2013-05-22 2014-12-08 株式会社 日立産業制御ソリューションズ People counting device and person flow line analysis apparatus
CN109257569A (en) * 2018-10-24 2019-01-22 广东佳鸿达科技股份有限公司 Security protection video monitoring analysis method
CN109492548A (en) * 2018-10-24 2019-03-19 广东佳鸿达科技股份有限公司 The preparation method of region mask picture based on video analysis
CN111932585A (en) * 2020-07-28 2020-11-13 浙江新再灵科技股份有限公司 Intelligent elevator room trailing behavior identification method based on big data
CN112660953A (en) * 2019-10-16 2021-04-16 杭州海康威视系统技术有限公司 Detection method, equipment and system for abnormal behavior of elevator taking
CN113963399A (en) * 2021-09-09 2022-01-21 武汉众智数字技术有限公司 Personnel trajectory retrieval method and device based on multi-algorithm fusion application
CN114596684A (en) * 2022-01-10 2022-06-07 嘉兴琥珀科技有限公司 High-definition video monitoring method, monitoring device and system based on public safety
CN115439796A (en) * 2022-11-09 2022-12-06 江西省天轴通讯有限公司 Specific area personnel tracking and identifying method, system, electronic equipment and storage medium
WO2023175839A1 (en) * 2022-03-17 2023-09-21 三菱電機株式会社 Monitoring system, server, and monitoring method
CN117197726A (en) * 2023-11-07 2023-12-08 四川三思德科技有限公司 Important personnel accurate management and control system and method

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020210504A1 (en) * 2019-04-09 2020-10-15 Avigilon Corporation Anomaly detection method, system and computer readable medium
CN112926514A (en) * 2021-03-26 2021-06-08 哈尔滨工业大学(威海) Multi-target detection and tracking method, system, storage medium and application

Patent Citations (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2008152328A (en) * 2006-12-14 2008-07-03 Hitachi Information & Control Solutions Ltd Suspicious person monitoring system
JP2010182287A (en) * 2008-07-17 2010-08-19 Steven C Kays Intelligent adaptive design
JP2014229068A (en) * 2013-05-22 2014-12-08 株式会社 日立産業制御ソリューションズ People counting device and person flow line analysis apparatus
CN109257569A (en) * 2018-10-24 2019-01-22 广东佳鸿达科技股份有限公司 Security protection video monitoring analysis method
CN109492548A (en) * 2018-10-24 2019-03-19 广东佳鸿达科技股份有限公司 The preparation method of region mask picture based on video analysis
CN112660953A (en) * 2019-10-16 2021-04-16 杭州海康威视系统技术有限公司 Detection method, equipment and system for abnormal behavior of elevator taking
CN111932585A (en) * 2020-07-28 2020-11-13 浙江新再灵科技股份有限公司 Intelligent elevator room trailing behavior identification method based on big data
CN113963399A (en) * 2021-09-09 2022-01-21 武汉众智数字技术有限公司 Personnel trajectory retrieval method and device based on multi-algorithm fusion application
CN114596684A (en) * 2022-01-10 2022-06-07 嘉兴琥珀科技有限公司 High-definition video monitoring method, monitoring device and system based on public safety
WO2023175839A1 (en) * 2022-03-17 2023-09-21 三菱電機株式会社 Monitoring system, server, and monitoring method
CN115439796A (en) * 2022-11-09 2022-12-06 江西省天轴通讯有限公司 Specific area personnel tracking and identifying method, system, electronic equipment and storage medium
CN117197726A (en) * 2023-11-07 2023-12-08 四川三思德科技有限公司 Important personnel accurate management and control system and method

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
"Intelligent Video Surveillance of Tourist Attractions Based on Virtual Reality Technology"; J. Huang et al.; IEEE Access; 2020-08-31; vol. 8; pp. 159220-159233 *
"Research and Design of a Campus Intelligent Video Surveillance System"; Chen Jinyong; China Masters' Theses Full-text Database, Information Science and Technology; 2014-09-15 (no. 9); I136-419 *
"STAM-CCF: Suspicious Tracking Across Multiple Camera Based on Correlation Filters"; Sheu R-K et al.; Sensors; 2019-07-09; vol. 19, no. 13; pp. 1-22 *
"Research on Video-Based Detection of Abnormal Passenger Behaviour in Elevator Cars"; Ma Zhiwei; China Masters' Theses Full-text Database, Engineering Science and Technology II; 2019-05-15 (no. 5); C038-993 *

Also Published As

Publication number Publication date
CN117423066A (en) 2024-01-19

Similar Documents

Publication Publication Date Title
CN117423066B (en) Target person identification method based on multi-source data fusion analysis
US8149278B2 (en) System and method for modeling movement of objects using probabilistic graphs obtained from surveillance data
JP4937016B2 (en) Monitoring device, monitoring method and program
CN112116503A (en) Smart community cloud platform management system
Alshammari et al. Intelligent multi-camera video surveillance system for smart city applications
US20080130949A1 (en) Surveillance System and Method for Tracking and Identifying Objects in Environments
TW200903386A (en) Target detection and tracking from video streams
CN112116502A (en) Smart community security management system
KR100968433B1 (en) Store system for the license plate images of vehicle and, search system for images of vehicle using that store system
CN110111565A (en) A kind of people's vehicle flowrate System and method for flowed down based on real-time video
CN111814510B (en) Method and device for detecting legacy host
EP2618288A1 (en) Monitoring system and method for video episode viewing and mining
EP1927947A1 (en) Computer implemented method and system for tracking objects using surveillance database
CN109446881B (en) Heterogeneous data-based highway section traffic state detection method
US20210042940A1 (en) Digital twin monitoring systems and methods
Martani et al. Pedestrian monitoring techniques for crowd-flow prediction
CN110544312A (en) Video display method and device in virtual scene, electronic equipment and storage device
JP2020129215A (en) Risk determination program and system
Zhang et al. An occupancy distribution estimation method using the surveillance cameras in buildings
KR102482545B1 (en) AI-based path prediction method
KR102464196B1 (en) Big data-based video surveillance system
CN115862296A (en) Fire risk early warning method, system, equipment and medium for railway construction site
JP6739115B1 (en) Risk judgment program and system
CN113762126A (en) Personnel entry and exit detection method, device, equipment and medium
CN112699843A (en) Identity recognition method and system

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
GR01 Patent grant
GR01 Patent grant