CN112733647A - Method, analysis server and system based on MAC address and face information binding - Google Patents


Info

Publication number
CN112733647A
Authority
CN
China
Prior art keywords
face information
mac address
target mac
information
acquisition unit
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011609448.5A
Other languages
Chinese (zh)
Inventor
侯剑飞 (Hou Jianfei)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhejiang Dahua Technology Co Ltd
Original Assignee
Zhejiang Dahua Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhejiang Dahua Technology Co Ltd
Priority to CN202011609448.5A
Publication of CN112733647A
Legal status: Pending

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 - Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 - Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 - Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 - Detection; Localisation; Normalisation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 - Arrangements for image or video recognition or understanding
    • G06V10/20 - Image preprocessing
    • G06V10/25 - Determination of region of interest [ROI] or a volume of interest [VOI]
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04L - TRANSMISSION OF DIGITAL INFORMATION, e.g. TELEGRAPHIC COMMUNICATION
    • H04L2101/00 - Indexing scheme associated with group H04L61/00
    • H04L2101/60 - Types of network addresses
    • H04L2101/618 - Details of network addresses
    • H04L2101/622 - Layer-2 addresses, e.g. medium access control [MAC] addresses

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Health & Medical Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Human Computer Interaction (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

The application discloses a method, an analysis server and a system based on binding a MAC address and face information, wherein the method includes: responding to all camera devices in the acquisition unit of the current monitoring area acquiring a target MAC address at the same time point; obtaining the position information of the mobile device according to the signal strength value of the mobile device corresponding to the target MAC address collected by each camera device and the configuration information of the acquisition unit; determining an analysis area in the video frame acquired by each camera device at the same time point according to the position information of the mobile device, and extracting face information from each analysis area; and determining the face information uniquely corresponding to the target MAC address from a plurality of pieces of face information corresponding to a plurality of monitoring areas. By this method, the pieces of face information corresponding to the same target MAC address in multiple monitoring areas can be obtained and screened, which improves the accuracy and success rate of binding the target MAC address to a face.

Description

Method, analysis server and system based on MAC address and face information binding
Technical Field
The application relates to the technical field of security protection, and in particular to a method, an analysis server and a system based on binding a MAC address and face information.
Background
Mobile devices such as mobile phones and tablets have become everyday tools carried at all times, and each mobile device has a unique physical address (Media Access Control address, i.e., MAC address). In the security field, face information can be bound to a MAC address to obtain the movement track of the user's MAC, which makes it convenient to build a track map of a suspicious MAC. However, if the binding between the MAC address and the face information is not accurate enough, subsequent analysis will be negatively affected, so it is necessary to improve the accuracy and success rate of binding the MAC address to the face information.
Disclosure of Invention
The present application mainly solves the technical problem of providing a method, an analysis server and a system based on binding a MAC address and face information, which can obtain and screen a plurality of pieces of face information corresponding to the same target MAC address across a plurality of monitoring areas, thereby improving the accuracy and success rate of binding the target MAC address to a face.
In order to solve the above technical problem, a first aspect of the present application provides a method based on binding a MAC address and face information, where the method includes: responding to all camera devices in the acquisition unit of the current monitoring area acquiring a target MAC address at the same time point; obtaining the position information of the mobile device according to the signal strength value of the mobile device corresponding to the target MAC address collected by each camera device and the configuration information of the acquisition unit; determining an analysis area in the video frame acquired by each camera device at the same time point according to the position information of the mobile device, and extracting face information from each analysis area; and determining the face information uniquely corresponding to the target MAC address from a plurality of pieces of face information corresponding to a plurality of monitoring areas.
Before obtaining the position information of the mobile device according to the signal strength value of the mobile device corresponding to the target MAC address and the configuration information of the acquisition unit, the method further includes: configuring a number for the acquisition unit; and acquiring the configuration information uploaded by the acquisition unit and the number corresponding to the acquisition unit, and binding the configuration information to the number; wherein the configuration information at least includes the position information of all camera devices in the acquisition unit, the signal strength values between the camera devices, and the range value of the monitoring area corresponding to the acquisition unit.
Wherein, the obtaining the position information of the mobile device according to the signal strength value of the mobile device corresponding to the target MAC address collected by each camera device and the configuration information of the acquisition unit includes: acquiring the number of the acquisition unit that collected the target MAC address, and looking up and extracting the configuration information of the acquisition unit according to the number; obtaining the distance of each camera device relative to the mobile device by using the signal strength value of the mobile device corresponding to the target MAC address collected by each camera device, the position information of all camera devices, and the signal strength values between the camera devices; and substituting the distance of each camera device relative to the mobile device into a preset positioning algorithm to obtain the position information of the mobile device.
Wherein, the determining an analysis area from the video frames acquired by each camera device at the same time point according to the position information of the mobile device and extracting face information from each analysis area includes: obtaining the analysis area corresponding to the target MAC address in the video frame acquired by each camera device at the same time point by using the position information of the mobile device, the position information of the camera devices and the range value of the monitoring area corresponding to the acquisition unit; and analyzing the image in each analysis area by using a face detection algorithm to acquire all recognizable face information.
Wherein, the determining of the face information uniquely corresponding to the target MAC address from the plurality of face information corresponding to the plurality of monitoring areas includes: acquiring the face information extracted for the same target MAC address by a plurality of acquisition units in a plurality of monitoring areas; and comparing the identical face information across the plurality of acquisition units, and when the count of identical face information is greater than a first threshold, binding and storing the unique face information exceeding the first threshold with the target MAC address.
Wherein, the comparing the identical face information across the plurality of acquisition units, and when the count of identical face information is greater than a first threshold, binding and storing the unique face information exceeding the first threshold with the target MAC address includes: checking, by using a face comparison algorithm, whether identical face information exists among the face information extracted for the same target MAC address by the plurality of acquisition units; and, following the order of the time points at which the same target MAC address appears in the plurality of acquisition units, binding and storing the unique face information that first reaches the first threshold with the target MAC address.
Wherein, the comparing the identical face information across the plurality of acquisition units, and when the count of identical face information is greater than a first threshold, binding and storing the unique face information exceeding the first threshold with the target MAC address includes: checking, by using a face comparison algorithm, whether identical face information exists among the face information extracted for the same target MAC address by the plurality of acquisition units; and when identical face information exists whose count is greater than the first threshold, judging whether the number of pieces of face information occurring most frequently among them is 1; if so, binding and storing the most frequently occurring face information with the MAC address; otherwise, returning a failure to acquire the face information.
After the face information uniquely corresponding to the target MAC address is determined from the face information corresponding to the monitoring areas, the method further includes: when the face information bound to the same MAC address is recognized to have changed, storing the face information originally bound to the MAC address at a preset location, and then updating the bound face information for the MAC address.
In order to solve the above technical problem, a second aspect of the present application provides an analysis server, which includes a memory and a processor coupled to each other, where the memory stores program data, and the processor calls the program data to execute the method based on binding a MAC address and face information according to the first aspect.
In order to solve the above technical problem, a third aspect of the present application provides an analysis system, including the analysis server of the second aspect and an acquisition unit, where the acquisition unit includes a video storage device and at least 3 camera devices; the camera devices are arranged at different positions of the monitoring area and are used for detecting the MAC address of the mobile device and capturing video frames, and the video storage device is used for storing the video frames uploaded by the camera devices, receiving the number configured by the analysis server, and uploading the configuration information and the number to the analysis server.
The beneficial effect of this application is: after all camera devices in the acquisition unit of a monitoring area acquire the target MAC address at the same time point, the analysis area in the video frame acquired by each camera device at that time point is obtained and face information is extracted from the analysis area, so that images of the analysis area from multiple angles are all analyzed, making the face information of a single monitoring area more comprehensive and accurate; the face information uniquely corresponding to the target MAC address is finally screened out from the face information corresponding to the plurality of monitoring areas, which further improves the accuracy and success rate of binding the target MAC address to the face information.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed in the description of the embodiments are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present application, and other drawings can be obtained from them by those skilled in the art without creative effort. Wherein:
fig. 1 is a schematic flowchart of an embodiment of a method based on binding a MAC address and face information provided by the present application;
fig. 2 is a schematic flowchart of another embodiment of a method based on binding a MAC address and face information provided by the present application;
FIG. 3 is a flowchart illustrating an embodiment corresponding to step S204 in FIG. 2;
FIG. 4 is a schematic diagram of an application scenario of an embodiment corresponding to step S303 in FIG. 3;
FIG. 5 is a flowchart illustrating an embodiment corresponding to step S207 in FIG. 2;
FIG. 6 is a schematic flow chart illustrating another embodiment corresponding to step S207 in FIG. 2;
FIG. 7 is a schematic structural diagram of an embodiment of an analysis server provided in the present application;
fig. 8 is a schematic structural diagram of an embodiment of an analysis system provided in the present application.
Detailed Description
The technical solutions in the embodiments of the present application will be clearly and completely described below with reference to the drawings in the embodiments of the present application, and it is obvious that the described embodiments are only a part of the embodiments of the present application, and not all the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present application.
The terms "system" and "network" are often used interchangeably herein. The term "and/or" herein merely describes an association between associated objects and means that three relationships may exist; for example, A and/or B may mean: A exists alone, A and B exist simultaneously, or B exists alone. In addition, the character "/" herein generally indicates that the objects before and after it are in an "or" relationship. Further, the term "plurality" herein means two or more.
Referring to fig. 1, fig. 1 is a schematic flowchart illustrating an embodiment of a method for binding a MAC address and face information according to the present application, where the method includes:
step S101: and all the camera devices in the acquisition units responding to the current monitoring area acquire the target MAC address at the same time point.
Specifically, when a person carries the mobile device to enter a monitoring area, the camera device can read the MAC address of the mobile device through the built-in probe module, after a user enters the monitoring area, the camera device in the acquisition unit can acquire the same MAC address at the same time point, the MAC address acquired at the same time point is marked as a target MAC address, and then the target MAC address and the time point of acquiring the target MAC address are uploaded to the server. And when the server acquires that all the camera devices in the acquisition unit of the current monitoring area acquire the target MAC address at the same time point, the step S102 is executed.
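As a minimal sketch of the matching in step S101, the server-side grouping of MAC reports might look as follows; the report format, function name and camera identifiers are illustrative assumptions, not taken from the patent:

```python
from collections import defaultdict

def find_target_macs(reports, num_cameras):
    """Keep only the MAC addresses that every camera device in the
    acquisition unit reported at the same time point (step S101)."""
    seen = defaultdict(set)  # (mac, time point) -> camera devices that saw it
    for device_id, mac, timestamp in reports:
        seen[(mac, timestamp)].add(device_id)
    return [(mac, ts) for (mac, ts), devices in seen.items()
            if len(devices) == num_cameras]

reports = [
    ("cam1", "AA:BB:CC:DD:EE:FF", 1000),
    ("cam2", "AA:BB:CC:DD:EE:FF", 1000),
    ("cam3", "AA:BB:CC:DD:EE:FF", 1000),
    ("cam1", "11:22:33:44:55:66", 1000),  # seen by one device only: ignored
]
print(find_target_macs(reports, 3))  # [('AA:BB:CC:DD:EE:FF', 1000)]
```

A MAC seen by only some of the devices is not promoted to a target MAC, matching the "all the camera devices" condition above.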
Step S102: and acquiring the position information of the mobile equipment according to the signal intensity value of the mobile equipment corresponding to the target MAC address acquired by each camera device and the configuration information of the acquisition unit.
Specifically, in the stage of setting up the acquisition unit, the acquisition unit includes at least 1 video storage device and 3 camera devices. After the configuration of the acquisition unit is completed, the video storage device generates configuration information and sends it to the server, where the configuration information includes the position information of each camera device in the monitoring area and the mutual received signal strength indication (RSSI) values between the camera devices in the acquisition unit.
Further, after the target MAC address is obtained, the position information of the mobile device is determined from the signal strength value of the mobile device corresponding to the target MAC address collected by each camera device, based on the position information of each camera device in the monitoring area and in combination with the signal strength values between the camera devices.
In an application mode, the position information of a camera device is specifically its coordinates within the deployment range. After deployment of the acquisition unit is completed, the deployment range value, the monitoring range value and the coordinates of the camera devices within the deployment range of the acquisition unit are generated, and the coordinates of all camera devices in the acquisition unit and the signal strength values between the camera devices are then sent to the server.
Further, when the mobile device enters the monitoring area and its MAC address is collected by the camera devices in the acquisition unit at the same time point, the MAC address is taken as the target MAC address, and each camera device collects the signal strength value of the mobile device corresponding to the target MAC address and uploads it to the server. After the server receives the target MAC address and the signal strength value of the mobile device collected by each camera device, it calculates the coordinates of the target MAC address within the deployment range by combining the coordinates of all camera devices uploaded by the acquisition unit and the signal strength values between the camera devices.
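The patent does not specify how a signal strength value is converted into a distance. A common choice is the log-distance path-loss model, which can be calibrated with the known camera-to-camera distances and mutual RSSI values carried in the configuration information; the following sketch is offered under that assumption, with invented parameter values:

```python
import math

def calibrate_exponent(rssi_at_1m, samples):
    """Estimate the path-loss exponent n from (rssi, known distance)
    pairs, e.g. measured between the camera devices themselves, whose
    mutual distances follow from the coordinates in the configuration."""
    estimates = [(rssi_at_1m - rssi) / (10 * math.log10(d))
                 for rssi, d in samples if d > 1.0]
    return sum(estimates) / len(estimates)

def rssi_to_distance(rssi, rssi_at_1m=-40.0, n=2.0):
    # Log-distance path-loss model: rssi = rssi_at_1m - 10 * n * log10(d)
    return 10 ** ((rssi_at_1m - rssi) / (10 * n))

# Calibrate from two hypothetical inter-camera measurements, then range a device.
n = calibrate_exponent(-40.0, [(-60.0, 10.0), (-52.0, 4.0)])
print(rssi_to_distance(-60.0, rssi_at_1m=-40.0, n=n))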
Step S103: and determining analysis areas from video frames acquired by each camera at the same time point according to the position information of the mobile equipment, and extracting face information from each analysis area.
Specifically, the server side obtains video frames shot at a time point corresponding to the acquired target MAC address, positions an analysis area from images corresponding to the video frames according to the position information of the camera device and the position information of the mobile device, and obtains face information from the analysis area of each camera device by using a face detection algorithm to form a face information collection set of the target MAC address at the time point corresponding to the current acquisition unit.
Step S104: and determining face information uniquely corresponding to the target MAC address from a plurality of face information corresponding to the plurality of monitoring areas.
Specifically, a monitoring system generally includes a plurality of monitoring areas, such as stations, hotels, and other places, where the areas such as entrances, exits, halls, and aisles of the monitoring system are respectively configured with acquisition units to form corresponding monitoring areas. When a person carries a mobile device and passes through a plurality of monitoring areas in the monitoring system, a plurality of acquisition units can respectively extract a plurality of face information corresponding to the same target MAC address, face information corresponding to different acquisition units is sequentially compared by using a face comparison algorithm, the face information which has the largest occurrence frequency and is unique in the face information corresponding to different acquisition units is used as target face information, and then the target face information and the target MAC address are bound and stored.
Further, when the number of pieces of face information occurring most frequently at the current time point is more than 1, the system waits up to 30 minutes for the mobile device with the target MAC address to enter any monitoring area in the monitoring system, and then obtains and compares the face information corresponding to the target MAC address again. If within 30 minutes the number of most frequently occurring pieces of face information falls to 1, that face information is bound to the target MAC address; if 30 minutes are exceeded, a prompt is given that binding face information to the target MAC address has failed.
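The screening rule of step S104 (bind only when the most frequent face is unique) can be sketched as below; representing face-comparison results as plain face IDs is a simplification, not the patent's mechanism:

```python
from collections import Counter

def select_unique_face(face_lists):
    """face_lists: one list of face IDs per acquisition unit that saw the
    same target MAC address. Returns the most frequent face ID if it is
    unique, or None when the top count is tied (binding is deferred)."""
    counts = Counter(face for faces in face_lists for face in faces)
    if not counts:
        return None
    ranked = counts.most_common()
    if len(ranked) > 1 and ranked[1][1] == ranked[0][1]:
        return None  # more than one face occurs most frequently
    return ranked[0][0]

print(select_unique_face([["A", "B"], ["A", "C"], ["A"]]))  # A
print(select_unique_face([["A"], ["B"]]))                   # None (tie)
```

Returning None corresponds to the waiting branch above: the server defers binding until more monitoring areas report, or eventually reports failure.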
In the method based on binding a MAC address and face information provided in this embodiment, after all camera devices in the acquisition unit of a monitoring area collect the target MAC address at the same time point, the analysis area in the video frame acquired by each camera device at that time point is obtained and face information is extracted from it, so that images of the analysis area from multiple angles are all analyzed and the face information of a single monitoring area is more comprehensive and accurate. Finally, the face information uniquely corresponding to the target MAC address is screened out from the plurality of pieces of face information corresponding to the plurality of monitoring areas, further improving the accuracy and success rate of binding the target MAC address to the face information.
Referring to fig. 2, fig. 2 is a schematic flowchart illustrating another embodiment of a method for binding a MAC address and face information according to the present application, where the method includes:
step S201: and configuring a number for the acquisition unit.
Specifically, the acquisition unit comprises a video storage device and at least 3 camera devices. When a brand-new acquisition unit is deployed or an existing monitoring area is upgraded and modified, the server configures a unique number for the acquisition unit to distinguish the acquisition units of different monitoring areas.
Step S202: and acquiring the configuration information uploaded by the acquisition unit and the number corresponding to the acquisition unit, and binding the configuration information and the number.
Specifically, the acquisition unit registers with the server after deployment is completed; after registration is approved, the acquisition unit uploads the configuration information and the number corresponding to the current acquisition unit to the server, and the server binds the configuration information to the number. The configuration information at least includes the position information of all camera devices in the acquisition unit, the signal strength values between the camera devices, and the range value of the monitoring area corresponding to the acquisition unit.
In an application mode, the acquisition unit includes 1 video storage device and 3 camera devices, the 3 camera devices being arranged at 3 different positions. The server configures a number for the acquisition unit, and the video storage device receives the device numbers, the position information and the mutual signal strength values uploaded by the camera devices to generate the configuration information of the whole acquisition unit.
Specifically, please refer to Table 1, which is the configuration information list of the acquisition unit. When the 3 camera devices are set at their respective positions, each camera device uploads its device number, coordinate values and the signal strength values collected for the other camera devices to the video storage device. The video storage device writes these data into the configuration information list together with the deployment area range value and the monitoring area range value, and then uploads the configuration information and the number corresponding to the acquisition unit to the server; after acquiring them, the server binds and stores the configuration information with the corresponding number. Because the configuration information and the number are stored correspondingly in the deployment stage of the acquisition unit, the specific content of the configuration information can subsequently be found quickly and accurately by the number whenever it is needed, so repeated uploading is unnecessary and lookup is accurate and fast.
Table 1: configuration information list of acquisition unit
Serial number    Configuration information
1                Deployment area range value
2                Monitoring area range value
3                Coordinates of camera device #1 in the deployment area
4                Coordinates of camera device #2 in the deployment area
5                Coordinates of camera device #3 in the deployment area
6                Signal strength values of camera devices #2 and #3 collected by camera device #1
7                Signal strength values of camera devices #1 and #3 collected by camera device #2
8                Signal strength values of camera devices #1 and #2 collected by camera device #3
9                Device number of camera device #1
10               Device number of camera device #2
11               Device number of camera device #3
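Table 1 could be held server-side as a record bound to the unit number; all field names and values below are hypothetical illustrations of rows 1 to 11, not structures defined by the patent:

```python
# Hypothetical representation of Table 1; coordinates and RSSI values invented.
unit_config = {
    "deployment_range": (0.0, 0.0, 30.0, 30.0),   # row 1
    "monitoring_range": (2.0, 2.0, 28.0, 28.0),   # row 2
    "camera_coords": {                            # rows 3-5
        "cam1": (0.0, 0.0), "cam2": (30.0, 0.0), "cam3": (15.0, 25.0),
    },
    "inter_camera_rssi": {                        # rows 6-8
        ("cam1", "cam2"): -55, ("cam1", "cam3"): -52,
        ("cam2", "cam1"): -55, ("cam2", "cam3"): -53,
        ("cam3", "cam1"): -52, ("cam3", "cam2"): -54,
    },
    "camera_numbers": ["cam1", "cam2", "cam3"],   # rows 9-11
}

# Step S202: the server binds the configuration to the unit's number,
# so step S301 can later look it up by number alone.
config_by_number = {17: unit_config}
print(config_by_number[17]["camera_coords"]["cam1"])  # (0.0, 0.0)
```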
Step S203: and all the camera devices in the acquisition units responding to the current monitoring area acquire the target MAC address at the same time point.
Specifically, when a person carries a mobile device to enter any monitoring area in the monitoring system, the probe modules built in all the camera devices in the current acquisition unit can almost acquire the MAC address of the mobile device at the same time point.
Furthermore, the camera device collects the signal intensity value of the mobile device relative to the camera device except for the MAC address, the collection unit uploads the number of the current collection unit, the MAC address, the time point of collecting the MAC address, and the signal intensity value of the mobile device corresponding to the target MAC address collected by each camera device to the server, and the server receives the MAC address and then takes the MAC address as the target MAC address.
Step S204: and acquiring the position information of the mobile equipment according to the signal intensity value of the mobile equipment corresponding to the target MAC address acquired by each camera device and the configuration information of the acquisition unit.
Specifically, referring to fig. 3, fig. 3 is a schematic flowchart illustrating an embodiment corresponding to step S204 in fig. 2, where step S204 specifically includes:
step S301: and acquiring the number of the acquisition unit acquiring the target MAC address, and searching and extracting the configuration information of the acquisition unit according to the number.
Specifically, the server obtains the number corresponding to the acquisition unit acquiring the target MAC address, and then the server can accurately find the configuration information of the only corresponding acquisition unit according to the number, so that the server can subsequently and directly use all data in the configuration information.
Step S302: and obtaining the distance of each camera relative to the mobile equipment by using the signal intensity value of the mobile equipment corresponding to the target MAC address, the position information of all cameras and the signal intensity value among the cameras, which are acquired by each camera.
Specifically, the signal strength value can reflect the signal strength between the signal point and the receiving point, the distance between the signal point and the receiving point can be further determined according to the signal strength value, the server side obtains the position information of the camera device and the signal strength value between the camera devices from the configuration information, and the distance between each camera device and the mobile equipment is obtained by combining the signal strength value of the mobile equipment corresponding to the target MAC address, which is collected by each camera device, on the premise that the position of the camera device is fixed and the signal strength value between the camera devices is fixed.
Step S303: and substituting the distance of each camera relative to the mobile equipment into a preset positioning calculation method for calculation to obtain the position information of the mobile equipment.
Specifically, a circle is drawn by taking the center of each camera as the center of a circle and the distance between each camera and the mobile device as the radius, and the intersecting position of the circles is taken as the position of the mobile device.
In an application manner, please refer to fig. 4, where fig. 4 is an application scene schematic diagram of an implementation manner corresponding to step S303 in fig. 3, the acquisition unit includes 3 camera devices, each camera device includes a fixed coordinate, and the coordinate of the mobile device is calculated by using a triangulation algorithm.
Specifically, circles are drawn with the centers of the 3 camera devices as circle centers and the distance of each camera device to the mobile device as the radius, and the center of the area where all the circles intersect is taken as the position of the mobile device, giving the coordinates of the mobile device. Because the distance of each camera device relative to the mobile device may carry a certain error, the circles may not intersect at a single point; taking the center of the intersection area of all circles as the position of the mobile device therefore reduces the positioning error caused by the distance errors.
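The "center of the intersection area" can be approximated in closed form by linearizing the circle equations and solving the result in the least-squares sense. This is one standard realization of the triangulation step and is offered as an assumption, since the patent does not give the concrete algorithm:

```python
def trilaterate(anchors, distances):
    """Estimate (x, y) from 3+ anchor coordinates and noisy ranges.
    Subtracting the first circle equation from each of the others gives
    linear equations 2(xi-x0)x + 2(yi-y0)y = b_i, which are solved via
    the 2x2 normal equations (least squares)."""
    (x0, y0), d0 = anchors[0], distances[0]
    a11 = a12 = a22 = b1 = b2 = 0.0
    for (xi, yi), di in zip(anchors[1:], distances[1:]):
        ax, ay = 2 * (xi - x0), 2 * (yi - y0)
        b = (xi ** 2 - x0 ** 2) + (yi ** 2 - y0 ** 2) + (d0 ** 2 - di ** 2)
        a11 += ax * ax; a12 += ax * ay; a22 += ay * ay
        b1 += ax * b; b2 += ay * b
    det = a11 * a22 - a12 * a12
    return ((a22 * b1 - a12 * b2) / det, (a11 * b2 - a12 * b1) / det)

# Cameras at three corners; true position (3, 4) with exact ranges.
est = trilaterate([(0, 0), (10, 0), (0, 10)], [5.0, 65 ** 0.5, 45 ** 0.5])
print(est)  # approximately (3.0, 4.0)
```

With noisy ranges the least-squares solution plays the role of the intersection-area center described above; with exact ranges it recovers the true intersection point.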
Step S205: and determining analysis areas from video frames acquired by each camera at the same time point according to the position information of the mobile equipment, and extracting face information from each analysis area.
Specifically, the analysis area corresponding to the target MAC address in the video frame acquired by each camera at the same time point is obtained by using the position information of the mobile device, the position information of the camera, and the range value of the monitoring area corresponding to the acquisition unit.
Further, after the position information of the mobile device is determined, the server side extracts the video frames shot by all camera devices at the time point at which the target MAC address was acquired; capturing from multiple azimuths reduces the probability that face information is captured incompletely because targets overlap. The video frame may be a key frame (I frame) preceding the time point at which the target MAC address was acquired, or a video frame obtained by decoding at that time point. The server side then determines the position of the mobile device on the video frame of each camera device according to the position information of each camera device, the range value of the monitoring area corresponding to the acquisition unit, and the position information of the mobile device, and delimits an analysis area on the video frame.
Specifically, when the analysis area on the video frame is determined, it can be changed flexibly according to the people flow in the current video frame: when the people flow is large, the analysis area is delimited with the position of the mobile device as the center and a first numerical value as the radius; when the people flow is small, it is delimited with the position of the mobile device as the center and a second numerical value as the radius, wherein the first numerical value is larger than the second numerical value.
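The rule above only constrains the two radii relative to each other. A sketch of the selection logic, where the concrete radii and the busy/quiet cutoff are illustrative assumptions:

```python
def analysis_radius(people_count, busy_threshold=20,
                    first_radius=3.0, second_radius=1.5):
    """Pick the analysis-area radius around the mobile device's position.

    The patent only requires first_radius > second_radius (large crowd ->
    larger area); `busy_threshold` and both radii are illustrative values.
    """
    return first_radius if people_count >= busy_threshold else second_radius
```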
Further, the image in each analysis area is analyzed with a face detection algorithm to obtain all recognizable face information.
Specifically, the face detection algorithm is trained in advance to recognize face information from images. The image in the analysis area is fed into the face detection algorithm as a region of interest, so that complete, recognizable face information is extracted from the analysis area; in this way the face information corresponding to the target MAC address is fully extracted from the video frames shot by all camera devices in the current monitoring area, forming a face information collection for the current monitoring area.
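Cutting the region of interest out of a decoded frame can be sketched as follows; the axis-aligned square approximation of the circular analysis area, and the pixel-space center/radius, are assumptions for simplicity:

```python
import numpy as np

def crop_analysis_area(frame, center_xy, radius_px):
    """Cut the analysis area (region of interest) out of a decoded video
    frame as a square around the mobile device's projected pixel
    position, clamped to the frame borders. The square approximation of
    the circular analysis area is an illustrative simplification."""
    h, w = frame.shape[:2]
    cx, cy = center_xy
    x0, x1 = max(0, cx - radius_px), min(w, cx + radius_px)
    y0, y1 = max(0, cy - radius_px), min(h, cy + radius_px)
    return frame[y0:y1, x0:x1]
```

The returned sub-array would then be handed to the face detection algorithm in place of the full frame.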
Step S206: the method comprises the steps of obtaining face information which can be extracted by the same target MAC address in a plurality of acquisition units of a plurality of monitoring areas.
Specifically, when the same mobile device passes through a plurality of monitoring areas in the monitoring system, the face information which can be extracted by the same target MAC address in different acquisition units can be respectively extracted by using the method.
Step S207: comparing the same face information across the plurality of acquisition units, and when the occurrence count of the same face information is greater than a first threshold value, binding and storing the unique face information whose count is greater than the first threshold value with the target MAC address.
Specifically, comparing the face information of the plurality of acquisition units reduces the inaccuracy of face recognition caused by dense faces in a single monitoring area; screening the face information of the monitoring areas corresponding to the plurality of acquisition units improves recognition accuracy. The unique face information whose occurrence count in the analysis areas of the different acquisition units reaches the first threshold is bound with the target MAC address and stored. After binding is completed, the target MAC address can be further tracked to form a track map of the MAC address. To ensure uniqueness and accuracy, the face information that is greater than the first threshold and occurs most often is taken as the target face information to be bound with the target MAC address and stored.
In an application manner, please refer to fig. 5, fig. 5 is a flowchart illustrating an implementation manner corresponding to step S207 in fig. 2, where step S207 includes:
step S501: and checking whether the same face information exists in the face information which can be extracted by the same target MAC address in a plurality of acquisition units by using a face comparison algorithm.
Specifically, the face comparison algorithm can distinguish the same face information from the face information in different acquisition units through pre-training and learning, and further determine the frequency of the same face information appearing in different acquisition units.
Step S502: according to the order of the time points at which the same target MAC address appears in the plurality of acquisition units, binding and storing the unique face information that first reaches the first threshold value with the target MAC address.
Specifically, a first threshold is preset at the server side. Starting from the earliest time point at which the target MAC address was acquired, as the mobile device passes through the plurality of monitoring areas in the monitoring system, the occurrences of the same face information are counted in time order; when some face information reaches the first threshold and it is the only one to do so, that unique face information is bound with the target MAC address and stored. For the whole monitoring system, setting the first threshold guarantees a lower bound on the occurrence count of the same face information, increases the number of analyzed samples, and improves the accuracy of the analysis result; judging in time-point order and binding as soon as unique face information reaches the first threshold improves binding efficiency.
Further, if more than one piece of face information reaches the first threshold, the server side waits for the recognizable face information corresponding to the target MAC address in other monitoring areas, so as to determine the face information that occurs most often and is unique, and binds and stores that face information with the target MAC address. If the waiting time exceeds a first time threshold, or more than one piece of face information shares the largest occurrence count, a failure to acquire the face information is returned.
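The first-to-threshold logic of steps S501-S502 can be sketched as follows. Representing the results per monitoring area as time-ordered batches of face identifiers is an assumption; `face_id` stands in for the face comparison algorithm's judgment of which faces are "the same":

```python
from collections import defaultdict

def bind_first_to_threshold(batches, first_threshold):
    """batches: one list of face ids per monitoring area, ordered by the
    time point at which the target MAC address appeared there.

    Returns the face that first reaches the threshold if it is unique at
    that moment; returns None when no face reaches the threshold or when
    several reach it in the same batch (ambiguous -> keep waiting or
    report failure, per the embodiment).
    """
    counts = defaultdict(int)
    for batch in batches:
        for face_id in batch:
            counts[face_id] += 1
        winners = [f for f, c in counts.items() if c >= first_threshold]
        if len(winners) == 1:
            return winners[0]
        if len(winners) > 1:
            return None  # more than one candidate: cannot bind uniquely
    return None
```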
In another application, please refer to fig. 6, fig. 6 is a schematic flowchart illustrating another embodiment corresponding to step S207 in fig. 2, where step S207 includes:
step S601: and checking whether the same face information exists in the face information which can be extracted by the same target MAC address in a plurality of acquisition units by using a face comparison algorithm.
Specifically, the face comparison algorithm is trained and learned in advance and can distinguish the same face information among the face information in different acquisition units; it is used to check for the same face information among the face information that can be extracted for the same target MAC address in all acquisition units connected with the server side.
Step S602: when the same face information exists and its occurrence count is greater than a first threshold value, judging whether the number of pieces of face information with the largest occurrence count among the same face information greater than the first threshold value is 1; if so, binding and storing the face information with the largest occurrence count with the MAC address; otherwise, returning a failure to acquire the face information.
Specifically, for the whole monitoring system, setting the first threshold guarantees a lower bound on the occurrence count of the same face information, increases the number of analyzed samples, and improves the accuracy of the analysis result. Because the target MAC address is unique, the face information bound with it must also be unique. The face information that can be extracted by the different acquisition units is therefore analyzed: when the most frequent face information is unique, it is bound with the target MAC address, giving the theoretically most accurate binding result; when the most frequent face information is not unique, the face information corresponding to the target MAC address cannot be determined, and a failure to acquire the face information is returned.
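The max-occurrence variant of steps S601-S602 can be sketched as below; the flat list of face identifiers aggregated across acquisition units is an assumed representation:

```python
from collections import Counter

def bind_most_frequent(face_ids, first_threshold):
    """Bind the face that occurs most often across all acquisition
    units, but only when that maximum count exceeds the threshold and is
    achieved by exactly one face; otherwise report failure (None),
    matching the "return failure to acquire" branch of the embodiment."""
    counts = Counter(face_ids)
    if not counts:
        return None
    top = max(counts.values())
    if top <= first_threshold:  # embodiment requires "greater than" the threshold
        return None
    winners = [f for f, c in counts.items() if c == top]
    return winners[0] if len(winners) == 1 else None
```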
Further, when it is recognized that the face information bound with the same MAC address has changed, the face information originally bound with the MAC address is stored to a preset position, and the bound face information of the MAC address is then updated.
Specifically, when the face information obtained by analysis for the same MAC address changes, the face information originally bound with the MAC address is stored in a previous-user data cache region, and after caching is completed the face information bound with the MAC address is updated to the latest face information, which facilitates the investigation of mobile device theft cases.
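The rebinding-with-cache step can be sketched with plain dictionaries standing in for the binding store and the previous-user cache region (the storage layout is an assumption, not from the disclosure):

```python
def update_binding(bindings, cache, mac, new_face):
    """When the face bound with a MAC address changes, move the old face
    into the previous-user cache region before rebinding, so the history
    remains available for later investigation."""
    old_face = bindings.get(mac)
    if old_face is not None and old_face != new_face:
        cache.setdefault(mac, []).append(old_face)
    bindings[mac] = new_face
```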
The method for binding a MAC address with face information provided in this embodiment extracts face information from different acquisition units, screens it to obtain the face information with the highest matching degree with the target MAC address, and then binds that face information with the target MAC address; through the track of the target MAC address, the track of the bound person can be accurately obtained, assisting the investigation work of the relevant functional departments.
Referring to fig. 7, fig. 7 is a schematic structural diagram of an embodiment of an analysis server provided in the present application, where the analysis server 70 includes a memory 701 and a processor 702 coupled to each other, where the memory 701 stores program data (not shown), and the processor 702 calls the program data to implement the method for binding based on MAC address and face information in any of the above embodiments, and the description of the related contents refers to the detailed description of the above method embodiments, which is not repeated herein.
Referring to fig. 8, fig. 8 is a schematic structural diagram of an embodiment of an analysis system provided in the present application, where the analysis system 80 includes an analysis server 70 and a collection unit 72 in the above embodiment, where the collection unit 72 includes a video storage device 720 and at least 3 camera devices 722. The camera devices 722 are arranged at different positions of the monitoring area and used for detecting the MAC address of the mobile equipment and shooting video frames, and the video storage device 720 is used for storing the video frames uploaded by the camera devices 722, receiving the serial numbers configured by the analysis server 70 and uploading the configuration information and the serial numbers to the analysis server 70.
Further, the analysis server 70 may be integrated with a data extraction module (not shown) for obtaining face information that can be extracted by the same target MAC address in the multiple acquisition units 72 in the multiple monitoring areas, and a data filtering module (not shown) for comparing the same face information that can be extracted by the multiple acquisition units 72. In other embodiments, the data screening module may also be coupled to the analysis server 70 independently of the analysis system 80.
The analysis system 80 provided by this embodiment can be used in the method for binding the MAC address and the face information in any of the above embodiments, and the arrangement of the analysis system 80 is simple, and it is very convenient and efficient to deploy a brand-new acquisition unit 72 or modify an existing monitoring area.
The above description is only an embodiment of the present application, and not intended to limit the scope of the present application, and all modifications of equivalent structures and equivalent processes, which are made by using the contents of the specification and the drawings, or which are directly or indirectly applied to other related technical fields, are also included in the scope of the present application.

Claims (10)

1. A method for binding a MAC address with face information, characterized in that the method comprises the following steps:
responding to all camera devices in an acquisition unit of a current monitoring area acquiring a target MAC address at the same time point;
acquiring the position information of the mobile equipment according to the signal intensity value of the mobile equipment corresponding to the target MAC address acquired by each camera device and the configuration information of the acquisition unit;
determining analysis areas from video frames acquired by each camera device at the same time point according to the position information of the mobile equipment, and extracting face information from each analysis area;
and determining face information uniquely corresponding to the target MAC address from a plurality of face information corresponding to a plurality of monitoring areas.
2. The method according to claim 1, wherein before obtaining the location information of the mobile device according to the signal strength value of the mobile device corresponding to the target MAC address acquired by each camera and the configuration information of the acquisition unit, the method further comprises:
configuring a number for the acquisition unit;
acquiring configuration information uploaded by the acquisition unit and a number corresponding to the acquisition unit, and binding the configuration information and the number;
the configuration information at least comprises position information of all the camera devices in the acquisition unit, signal intensity values among the camera devices and range values of monitoring areas corresponding to the acquisition unit.
3. The method according to claim 2, wherein the obtaining the location information of the mobile device according to the signal strength value of the mobile device corresponding to the target MAC address acquired by each camera and the configuration information of the acquisition unit comprises:
acquiring the number of the acquisition unit acquiring the target MAC address, and searching and extracting the configuration information of the acquisition unit according to the number;
acquiring the distance between each camera device and the mobile equipment by utilizing the signal intensity value of the mobile equipment corresponding to the target MAC address, the position information of all the camera devices and the signal intensity value among the camera devices, which are acquired by each camera device;
and substituting the distance of each camera device relative to the mobile equipment into a preset positioning algorithm for operation to obtain the position information of the mobile equipment.
4. The method according to claim 2, wherein the determining analysis regions from the video frames captured by each camera at the same time point according to the position information of the mobile device and extracting the face information from each analysis region comprises:
obtaining an analysis area corresponding to the target MAC address in a video frame acquired by each camera device at the same time point by using the position information of the mobile equipment, the position information of the camera device and the range value of the monitoring area corresponding to the acquisition unit;
and analyzing the image in the analysis area by using a face detection algorithm to acquire all face information which can be identified.
5. The method of claim 1, wherein determining the face information uniquely corresponding to the target MAC address from the plurality of face information corresponding to the plurality of monitored areas comprises:
acquiring face information which can be extracted by the same target MAC address in a plurality of acquisition units in a plurality of monitoring areas;
and comparing the same face information in the plurality of acquisition units, and when the occurrence count of the same face information is greater than a first threshold value, binding and storing the unique face information greater than the first threshold value with the target MAC address.
6. The method of claim 5, wherein comparing the same face information among the plurality of acquisition units, and when the same face information is greater than a first threshold, binding and saving the unique face information greater than the first threshold with the target MAC address comprises:
checking whether the same face information exists in the face information which can be extracted by the same target MAC address in a plurality of acquisition units by using a face comparison algorithm;
and according to the time point sequence of the same target MAC address appearing in the plurality of acquisition units, binding and storing the unique face information which reaches the first threshold value firstly in the same face information and the target MAC address.
7. The method of claim 5, wherein comparing the same face information among the plurality of acquisition units, and when the same face information is greater than a first threshold, binding and saving the unique face information greater than the first threshold with the target MAC address comprises:
checking whether the same face information exists in the face information which can be extracted by the same target MAC address in a plurality of acquisition units by using a face comparison algorithm;
and when the same face information exists and its occurrence count is greater than a first threshold value, judging whether the number of pieces of face information with the largest occurrence count among the same face information greater than the first threshold value is 1; if so, binding and storing the face information with the largest occurrence count with the MAC address; otherwise, returning a failure to acquire the face information.
8. The method of claim 1, wherein after determining the face information uniquely corresponding to the target MAC address from the plurality of face information corresponding to the plurality of monitored areas, further comprising:
and when the face information bound with the same MAC address is recognized to be changed, storing the original face information bound with the MAC address to a preset position, and further updating the bound face information for the MAC address.
9. An analysis server, comprising: a memory and a processor coupled to each other, wherein the memory stores program data that the processor calls to perform the method of any of claims 1-8.
10. An analysis system, comprising:
the analytics server of claim 9;
the acquisition unit comprises a video storage device and at least 3 camera devices;
the camera device is arranged at different positions of a monitoring area and used for detecting the MAC address of the mobile equipment and shooting a video frame, and the video storage device is used for storing the video frame uploaded by the camera device, receiving the serial number configured by the analysis server and uploading the configuration information and the serial number to the analysis server.
CN202011609448.5A 2020-12-30 2020-12-30 Method, analysis server and system based on MAC address and face information binding Pending CN112733647A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011609448.5A CN112733647A (en) 2020-12-30 2020-12-30 Method, analysis server and system based on MAC address and face information binding


Publications (1)

Publication Number Publication Date
CN112733647A (en) 2021-04-30

Family

ID=75610861

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011609448.5A Pending CN112733647A (en) 2020-12-30 2020-12-30 Method, analysis server and system based on MAC address and face information binding

Country Status (1)

Country Link
CN (1) CN112733647A (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110363076A (en) * 2019-06-04 2019-10-22 深圳奇迹智慧网络有限公司 Personal information correlating method, device and terminal device
CN111277788A (en) * 2018-12-04 2020-06-12 北京声迅电子股份有限公司 Monitoring method and monitoring system based on MAC address
WO2020119315A1 (en) * 2018-12-12 2020-06-18 深圳云天励飞技术有限公司 Face acquisition method and related product


Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115119195A (en) * 2022-06-07 2022-09-27 三星电子(中国)研发中心 Method and device for acquiring MAC address of equipment
CN115119195B (en) * 2022-06-07 2024-03-22 三星电子(中国)研发中心 Method and device for acquiring MAC address of equipment

Similar Documents

Publication Publication Date Title
CN109117714B (en) Method, device and system for identifying fellow persons and computer storage medium
CN109886078B (en) Retrieval positioning method and device for target object
CN107292240B (en) Person finding method and system based on face and body recognition
CN106296724B (en) Method and system for determining track information of target person and processing server
CN108269333A (en) Face identification method, application server and computer readable storage medium
CN110443110B (en) Face recognition method, device, terminal and storage medium based on multipath camera shooting
CN110659397B (en) Behavior detection method and device, electronic equipment and storage medium
CN110363076B (en) Personnel information association method and device and terminal equipment
CN109325429B (en) Method, device, storage medium and terminal for associating feature data
CN106027931A (en) Video recording method and server
CN108540752B (en) Method, device and system for identifying target object in video monitoring
CN110969045B (en) Behavior detection method and device, electronic equipment and storage medium
CN112052815B (en) Behavior detection method and device and electronic equipment
CN111277788B (en) Monitoring method and monitoring system based on MAC address
CN109547748B (en) Object foot point determining method and device and storage medium
CN112307868A (en) Image recognition method, electronic device, and computer-readable medium
CN109559336B (en) Object tracking method, device and storage medium
WO2014193220A2 (en) System and method for multiple license plates identification
CN112347856A (en) Non-perception attendance system and method based on classroom scene
CN111753587B (en) Ground falling detection method and device
CN112528099A (en) Personnel peer-to-peer analysis method, system, equipment and medium based on big data
CN112733647A (en) Method, analysis server and system based on MAC address and face information binding
CN111259789A (en) Face recognition intelligent security monitoring management method and system
CN112070035A (en) Target tracking method and device based on video stream and storage medium
CN111615062A (en) Target person positioning method and system based on collision algorithm

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination