CN111263313A - Checking method and device - Google Patents


Info

Publication number
CN111263313A
CN111263313A (application CN201911197202.9A)
Authority
CN
China
Prior art keywords
detection area
positioning
video stream
target
detection
Prior art date
Legal status
Pending
Application number
CN201911197202.9A
Other languages
Chinese (zh)
Inventor
赵瑞祥
裘有斌
夏衍
Current Assignee
Qing Yanxun Technology Beijing Co Ltd
Original Assignee
Qing Yanxun Technology Beijing Co Ltd
Priority date
Filing date
Publication date
Application filed by Qing Yanxun Technology Beijing Co Ltd filed Critical Qing Yanxun Technology Beijing Co Ltd
Priority to CN201911197202.9A priority Critical patent/CN111263313A/en
Publication of CN111263313A publication Critical patent/CN111263313A/en
Pending legal-status Critical Current

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/029 Location-based management or tracking services
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V 40/168 Feature extraction; Face representation
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00 Television systems
    • H04N 7/18 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast
    • H04N 7/181 Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast, for receiving images from a plurality of remote sources
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/02 Services making use of location information
    • H04W 4/021 Services related to particular areas, e.g. point of interest [POI] services, venue services or geofences
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/30 Services specially adapted for particular environments, situations or purposes
    • H04W 4/33 Services specially adapted for particular environments, situations or purposes for indoor environments, e.g. buildings
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 4/00 Services specially adapted for wireless communication networks; Facilities therefor
    • H04W 4/80 Services using short range communication, e.g. near-field communication [NFC], radio-frequency identification [RFID] or low energy communication
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04W WIRELESS COMMUNICATION NETWORKS
    • H04W 64/00 Locating users or terminals or network equipment for network management purposes, e.g. mobility management
    • H04W 64/006 Locating users or terminals or network equipment for network management purposes, e.g. mobility management, with additional information processing, e.g. for direction or speed determination

Abstract

The invention provides an inspection method and an inspection device for solving the problem of automatic inspection. The method comprises the following steps: if the positioning corresponding to a first ID is located in a first detection area, and a first biological feature of a first face is detected in a first video stream, judging that the target corresponding to the first ID is located in the first detection area. The method improves the efficiency and accuracy of automatic inspection.

Description

Checking method and device
Technical Field
The present invention relates to the field of data processing, and in particular to an inspection method and apparatus.
Background
In certain specific places and management environments, frequent regular or irregular inspections of managed objects are often required, for example the daily management of inmates by prisons, of workers by factories, or of animals by farms.
In the prior art, a roster of managed objects is usually used: managers check the managed objects one by one on site and count their number. However, this conventional inspection method is inefficient and labor-intensive.
Disclosure of Invention
In view of this, embodiments of the present invention provide an inspection method and apparatus, which implement automatic inspection and can determine, through short-range wireless positioning, the positions of inspected objects that are not on site.
One aspect of the present invention provides a method of inspection, the method including:
if the positioning corresponding to a first ID is located in a first detection area,
and a first biological feature is detected in a first video stream,
judging that the target corresponding to the first ID is located in the first detection area;
wherein:
the first ID has a corresponding relation with the first biological feature;
the first video stream is a video stream generated by shooting the first detection area;
and the target carries a positioning device corresponding to the first ID.
Optionally, before the judging that the target corresponding to the first ID is located in the first detection area, the method includes:
if the positioning corresponding to the first ID is located in the first detection area,
detecting whether the first biological feature appears in the first video stream.
Optionally, before the judging that the target corresponding to the first ID is located in the first detection area, the method includes:
obtaining a first ID set located within the first detection area,
acquiring a corresponding face set according to the first ID set;
judging whether a face recognized in the first video stream matches any face in the face set;
and if they match, the first biological feature is deemed detected in the first video stream.
Optionally, the method further includes:
if the target corresponding to the first ID is judged to be located in the first detection area,
deleting the first ID from the first ID set,
and continuing to judge whether the targets corresponding to the remaining first IDs in the first ID set appear in the first detection area.
Optionally, before the judging that the target corresponding to the first ID is located in the first detection area, the method includes:
judging whether the first biological feature is detected in a second video stream, the area shot by the second video stream at least comprising the first detection area;
and if the positioning corresponding to the first ID is located in the first detection area,
and the first biological feature is detected in the second video stream, judging that the target corresponding to the first ID is located in the first detection area.
Optionally, the shooting included angle between the first video stream and the second video stream is greater than 5 degrees.
Optionally, the first detection area is a unidirectional access area.
Optionally, before the judging that the target corresponding to the first ID is located in the first detection area, the method includes:
receiving a first instruction,
wherein:
the first instruction indicates the first detection area;
the first instruction indicates a second ID set;
and the first ID is an ID in the second ID set.
Optionally, before the judging that the target corresponding to the first ID is located in the first detection area, the method includes:
judging whether the positioning corresponding to the first ID is located in the first detection area through a one-dimensional UWB positioning coordinate;
and/or the method further comprises:
before the judging that the target corresponding to the first ID is located in the first detection area,
obtaining the state of the positioning device corresponding to the first ID;
and if the state of the positioning device corresponding to the first ID is the wearing state,
and the positioning corresponding to the first ID is located in the first detection area and the first biological feature is detected in the first video stream, judging that the target corresponding to the first ID is located in the first detection area.
Optionally, the detecting of the state of the positioning device corresponding to the first ID includes:
detecting the heart rate of the target through the positioning device,
and/or detecting the tamper circuit state of the positioning device;
if the heart rate of the target is detected and/or the tamper circuit indicates that the device has not been removed, the state of the positioning device corresponding to the first ID is the wearing state.
Optionally, the positioning device is a UWB positioning device;
and/or the first biological feature is a human face;
and/or the method comprises: if it is judged that the target corresponding to the first ID is not in the first detection area, sending alarm information;
and/or the method comprises: if it is judged that the target corresponding to the first ID is not in the first detection area, obtaining the video corresponding to the positioning according to the positioning corresponding to the first ID;
and/or the method comprises: if it is judged that the target corresponding to the first ID is in the first detection area, starting to acquire the first video stream.
In a second aspect, the present invention also provides an inspection apparatus comprising:
the system comprises a positioning node, a positioning base station, a first image acquisition device and a server;
the first image acquisition equipment is used for providing a first video stream to a server;
the server comprises one or more memories, one or more processors, and one or more modules stored in the memories and configured to be executed by the one or more processors, the one or more modules comprising instructions for, or respectively for:
resolving a positioning signal sent by a positioning node to a positioning base station;
if the positioning corresponding to the first ID is located in the first detection area,
and a first biological feature is detected in the first video stream,
judging that the target corresponding to the first ID is located in the first detection area;
wherein:
the first ID has a corresponding relation with the first biological feature;
the first video stream is a video stream generated by shooting the first detection area;
and the target carries a positioning device corresponding to the first ID.
And/or:
before the step of judging that the target corresponding to the first ID is located in the first detection area, the method includes:
if the positioning corresponding to the first ID is located in the first detection area,
detecting whether the first biological feature appears in the first video stream.
And/or:
before the step of judging that the target corresponding to the first ID is located in the first detection area, the method includes:
obtaining a first set of IDs located within the first detection region,
acquiring a corresponding face set according to the first ID set;
judging whether the face identified in the first video stream is matched with any face in the face set;
and if they match, the first biological feature is deemed detected in the first video stream.
And/or:
the method further includes:
if the target corresponding to the first ID is judged to be located in the first detection area,
deleting the first ID from the first ID set,
and continuing to judge whether the targets corresponding to the remaining first IDs in the first ID set appear in the first detection area.
And/or, before the judging that the target corresponding to the first ID is located in the first detection area, the method includes:
receiving a first instruction,
wherein:
the first instruction indicates the first detection area;
the first instruction indicates a second ID set;
and the first ID is an ID in the second ID set.
Optionally,
the apparatus further comprises a second image acquisition device for providing a second video stream to the server; the one or more modules include instructions for, or respectively for, performing the following steps before the judging that the target corresponding to the first ID is located in the first detection area:
judging whether the first biological feature is detected in a second video stream, wherein the area shot by the second video stream at least comprises the first detection area;
if the location corresponding to the first ID is within the first detection area,
and if the first biological characteristic is detected in the second video stream, judging that the target corresponding to the first ID is located in a first detection area.
Optionally, the included angle between the shooting directions of the first image acquisition device and the second image acquisition device is greater than 5 degrees.
Optionally, the one or more modules include instructions for or respectively for performing the steps of: and judging whether the location corresponding to the first ID is located in the first detection area or not according to the one-dimensional UWB location coordinate.
Optionally, the positioning node includes a heart rate detection circuit and a tamper circuit, and the server receives heart rate detection data and a tamper circuit status signal;
the one or more modules include instructions for performing the steps of: an instruction corresponding to the method of claim 10.
Optionally, the positioning device is a UWB positioning device;
and/or the first biological feature is a human face;
and/or the server is further configured to: if it is judged that the target corresponding to the first ID is not in the first detection area, send alarm information;
and/or: if it is judged that the target corresponding to the first ID is not in the first detection area, obtain the video corresponding to the positioning according to the positioning corresponding to the first ID;
and/or: if it is judged that the target corresponding to the first ID is in the first detection area, start to acquire the first video stream.
Optionally, the server comprises at least 2 servers, and each of the 2 servers contains some of the plurality of modules.
The above scheme solves the possible false detection caused by a target wearing another person's positioning device, reduces the number of cameras to be deployed, shortens the time required for target verification, and addresses the inaccuracy of face recognition in the M:N dynamic surveillance scene.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings used in the description of the embodiments or the prior art are briefly introduced below. It is obvious that the drawings in the following description are only some embodiments of the present invention; for those skilled in the art, other drawings can be obtained from the provided drawings without creative effort.
FIG. 1 is a flow chart illustrating an inspection method according to the present invention;
FIG. 2 is a flow chart illustrating another inspection method according to the present invention;
FIG. 3 is a schematic view of a scene of the activity area of an object to be inspected according to an embodiment of the present invention;
FIG. 4 is a schematic diagram of a scenario of performing inspection through short-range wireless positioning according to an embodiment of the present invention;
FIG. 5 is a schematic diagram of a consistency detection scenario provided by an embodiment of the present invention;
FIG. 6 is a schematic view of a scenario corresponding to an embodiment of the present invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be described clearly and completely below with reference to the drawings in the embodiments of the present invention. It is obvious that the described embodiments are only a part of the embodiments of the present invention, not all of them. All other embodiments obtained by a person skilled in the art from the given embodiments without creative effort shall fall within the protection scope of the present invention. It is to be understood that, absent obvious incompatibility, the embodiments in this disclosure can be combined arbitrarily; such combinations are not individually restated in the specific embodiments.
In one embodiment of the present invention, as shown in FIG. 1, the method comprises:
if the positioning corresponding to the first ID is located in the first detection area,
and a first biological feature is detected in the first video stream,
judging that the target corresponding to the first ID is located in the first detection area;
wherein:
the first ID has a corresponding relation with the first biological feature;
the first video stream is a video stream generated by shooting the first detection area;
and the target carries a positioning device corresponding to the first ID.
As shown in FIG. 6, in an indoor scenario of an embodiment of the present invention, the hardware required for implementing the method includes nodes 151 and 153 that send positioning signals; a reference station 131 that transmits a reference signal; a plurality of base stations that receive the positioning signals and the reference signal; a server 121 that calculates node positions; an imaging device 141; and a network connecting the base stations and the server. Some embodiments may further include a management terminal 161, which is used to configure the system, issue commands, receive alerts, and the like.
The node sending the positioning signal in the present invention may be a positioning device in the form of a watch, a work badge or the like; the positioning signal sent is a UWB signal, and each node has an ID that uniquely identifies it, also called the first ID. In some embodiments, the target is required to wear the node sending the positioning signal, i.e., the position of the target is known from the position of that node.
In one embodiment of the invention, the target is a person and the first biological feature is a human face. The target is required to wear a positioning watch as the positioning device, and the server records the face image, or the feature value of the face image, of the wearer; that is, the wearer's face image or its feature value can be determined through the ID of the watch. In practice, however, a target often wears another person's positioning device: in a roll-call scene, for example, a person who wants to evade roll call may ask a colleague to wear his work badge or positioning watch, which causes errors when roll call relies on the positioning device alone.
The first biological feature is a biological feature that uniquely identifies the target, such as a tiger's stripe pattern for a tiger, or a human face for a person. The target 101 may be a human or an animal. It is understood that if the ID of the positioning device uniquely corresponds to the target, then the ID of the positioning device undoubtedly has a unique correspondence with the first biological feature of that target. The present invention is described here taking the target as a person and the first biological feature as a human face, but is not limited thereto.
The first detection area is a defined fixed area, or is related to the area shot by the camera. For example, the area shot by the camera 141 and the area covered by the base station 131 both include a common area; if the positioning of the first ID is located in the first detection area, the target corresponding to the first ID is located in the first area. Multiple people may be present in the area at the same time, even densely. This is an M:N dynamic surveillance scene, in which a computer performs face recognition on all people in the scene, i.e., M people in the picture against N people in the database. Compared with 1:1 and 1:N scenes, M:N dynamic surveillance can markedly reduce the user's time and the number of deployed cameras. However, because the computation is heavy and limited by the camera's pixels, the faces shot are usually distant and small, off the image center, and seriously distorted by the lens; consequently, in the M:N dynamic surveillance scene, face recognition suffers from low accuracy and long latency, and has so far seen little application.
For a first detection area, the first detection area is photographed using an image capture device, such as a fixed camera or a polling camera, to generate a first video stream. It will be appreciated that the area captured by the first video stream includes the first detection area, and that it is also possible to capture an area outside the first detection area, for example where the camera is located in a corner of a room, the first detection area being half of the room near the camera, but where the camera can capture the other half of the area.
Faces in the first video stream are detected through image recognition technology; if the faces appearing in the first video stream include the first face corresponding to the first ID, it is judged that the first face appears in the first video stream, i.e., the target is judged, via video, to appear in the first detection area. It can be understood that in some embodiments of the present invention, all faces appearing in the first video stream are obtained first, and it is then determined whether the positionings of the positioning devices corresponding to those faces appear in the first detection area; if so, the targets corresponding to the positioning devices appear in the first detection area. In another embodiment of the present invention, the IDs corresponding to all positioning devices appearing in the first detection area are detected first, and the corresponding face set is obtained by querying the database with those IDs; if a face in the face set appears in the detection result of the first video stream, the target corresponding to that face is located in the first detection area. The latter approach is sketched below.
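A minimal sketch of that second approach, under the assumption of hypothetical helpers (a `known_faces` enrollment map, a feature-vector `similarity`, and an external face detector supplying `frame_faces`); this is illustrative, not the patent's implementation:

```python
# Illustrative sketch only -- the patent publishes no code. All names
# (known_faces, frame_faces, similarity) are hypothetical stand-ins.

def check_targets(ids_in_area, known_faces, frame_faces, threshold=0.6):
    """Return the IDs whose enrolled face also appears in the video frame.

    ids_in_area : IDs whose positioning falls inside the first detection area
    known_faces : dict mapping ID -> enrolled face feature vector
    frame_faces : face feature vectors detected in the first video stream
    """
    present = set()
    for uid in ids_in_area:
        enrolled = known_faces.get(uid)
        if enrolled is None:
            continue
        # An ID is confirmed only when positioning AND face recognition agree.
        if any(similarity(enrolled, f) >= threshold for f in frame_faces):
            present.add(uid)
    return present

def similarity(a, b):
    # Placeholder cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = sum(x * x for x in a) ** 0.5
    nb = sum(x * x for x in b) ** 0.5
    return dot / (na * nb) if na and nb else 0.0
```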
It is understood that, in this case, the time corresponding to the first video stream corresponds to the time at which the positioning of the first ID appears in the first detection area. The times need not match to the millisecond, but must be constrained by the maximum movement speed of a person, so as to ensure the accuracy of the detection.
According to the invention, the target carries the positioning device; from the positioning signal sent by the positioning device it is judged that the positioning corresponding to the first ID is located in the first detection area, while the face recognition result of the M:N surveillance scene is obtained at the same time. The target is thus judged to be in the first detection area by the positioning device and, simultaneously, judged to appear in the first detection area by image recognition, which ultimately improves the accuracy of judging that the target is located in the first detection area.
In one embodiment of the invention, the method performed comprises:
if the positioning corresponding to the first ID is located in the first detection area,
identifying whether the first biological feature appears in the first video stream.
Namely, in this embodiment, the method performed is:
if the positioning corresponding to the first ID is located in the first detection area,
identifying whether the first face appears in the first video stream;
if the positioning corresponding to the first ID is located in the first detection area,
and the first face is detected in the first video stream,
judging that the target corresponding to the first ID is located in the first detection area.
That is, in this embodiment, the IDs corresponding to all positioning devices appearing in the first detection area are detected first, the database is queried according to these IDs to obtain the corresponding face set, and if a face in the face set appears in the detection result of the first video stream, the target corresponding to that face is located in the first detection area.
In this method, when the positioning corresponding to the first ID is not located in the first detection area, the first video stream is not analyzed. In some embodiments, the first ID may be the ID of any positioning device; that is, the appearance of any positioning device in the first detection area triggers detection of whether a face appears in the first video stream. Since analyzing the first video stream requires considerable computing resources, this reduces the length of video stream the computer must process and lowers the system's concurrency and load. In some embodiments, faces in the first video stream are recognized by purchasing a third-party service, and such services are typically billed by data size or duration, so the above scheme also reduces the cost of operating the scheme. A sketch of this gating follows.
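The following is a hedged sketch of the gating just described: face recognition runs only while at least one positioning ID is present in the detection area, so the expensive video analysis is neither computed nor billed on empty footage. `positioning_events` and `analyze_frame` are hypothetical interfaces, not the patent's API.

```python
# Sketch under stated assumptions: events and frames arrive pairwise aligned.

def gated_recognition(positioning_events, video_frames, analyze_frame):
    ids_in_area = set()
    results = []
    for event, frame in zip(positioning_events, video_frames):
        kind, uid = event              # e.g. ("enter", 42) or ("leave", 42)
        if kind == "enter":
            ids_in_area.add(uid)
        elif kind == "leave":
            ids_in_area.discard(uid)
        if ids_in_area:                # run recognition only when triggered
            results.append(analyze_frame(frame, ids_in_area))
    return results
```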
In some embodiments, the detection of the first video stream may be started only when an ID belonging to the ID set of targets required to check in within the first detection area appears there, rather than any ID; that is, recognition is triggered only when a specific ID appears in the first detection area. This reduces the length of video stream to be recognized and the number of faces to be recognized, thereby improving the accuracy of face recognition. In an embodiment of the present disclosure, the above scheme includes the steps of: receiving the first instruction, wherein:
the first instruction indicates the first detection area;
the first instruction indicates the second ID set;
and the first ID is an ID in the second ID set;
if the positioning corresponding to the first ID is located in the first detection area,
identifying whether the first face appears in the first video stream;
if the positioning corresponding to the first ID is located in the first detection area,
and the first face is detected in the first video stream,
judging that the target corresponding to the first ID is located in the first detection area.
According to the above method, the ID set is pre-selected, which avoids falsely triggering recognition of the first video stream when a person who does not need to check in within the first detection area enters it, further reducing the length of video stream to be recognized. M:N dynamic surveillance performs face recognition on all people in a scene by computer, i.e., M people in the picture against N faces in the database; by simultaneously receiving a first instruction that indicates the first ID set requiring roll call, the number N is further limited, so the number of faces to be recognized is further reduced and the accuracy of face recognition is improved.
In some aspects, the method performed comprises: obtaining a first set of IDs located within the first detection region,
acquiring a corresponding face set according to the first ID set;
judging whether the face identified in the first video stream is matched with any face in the face set;
the dynamic control of M: N is to carry on the facial recognition to all people in the scene through the computer, namely M person in the picture, N human faces in the database, the above-mentioned scheme, according to the said first ID set obtains the corresponding human face set; and judging whether the face identified in the first video stream is matched with any face in the face set, which is equivalent to reducing the number of M, namely reducing the number of faces to be identified. The number of the faces to be recognized is limited, so that the accuracy of face recognition is improved. In some scenes, the number of faces to be recognized can be further reduced and the accuracy of face recognition can be improved by accepting a first instruction, wherein the first instruction indicates a first ID set needing roll calling, namely, the number of N is further limited.
In some embodiments, before the judging that the target corresponding to the first ID is located in the first detection area, the method includes:
if the target corresponding to the first ID is located in the first detection area, deleting the first ID from the first ID set;
continuing to judge whether the targets corresponding to the remaining first IDs in the first ID set appear in the first detection area.
M:N dynamic surveillance performs face recognition on all people in a scene by computer, i.e., M people in the picture against N faces in the database; deleting already-confirmed IDs limits the number of faces still to be recognized, thereby improving the accuracy of face recognition.
It will be appreciated that people in the first detection area face in different directions. For example, in a factory environment a worker needs to work at the position his job requires and cannot take the camera into account or face it at all times, and in a prison-yard scene people move about randomly. Therefore, in such scenes, the method of this case is executed cyclically for an extended time until a preset time is reached, and the absence list is determined from the list of persons not detected in the first detection area. After the absence list is determined, it can be recorded by an alarm notification or in a log. A roll-call loop of this kind is sketched below.
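A minimal sketch of such a loop, assuming a hypothetical `confirm_in_area(uid)` predicate that returns True once positioning and face recognition agree for an ID; names and timings are illustrative.

```python
import time

def roll_call(first_id_set, confirm_in_area, deadline_s=60.0, poll_s=1.0):
    """Cycle until every ID is confirmed or the preset time is reached."""
    pending = set(first_id_set)
    end = time.monotonic() + deadline_s
    while pending and time.monotonic() < end:
        for uid in list(pending):
            if confirm_in_area(uid):
                pending.discard(uid)   # delete the confirmed ID from the set
        time.sleep(poll_s)
    return sorted(pending)             # absence list, for alarm or log
```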
In an embodiment of the disclosure, the first detection area is a unidirectional access area. By arranging the camera at a one-way entrance area, more faces are captured more quickly, so the time required for detection and the length of video to be recognized are both reduced.
In one embodiment of the present disclosure, a plurality of cameras are disposed in the first detection area.
In some scenes, multiple cameras are arranged in a first detection area of larger size; for example, one camera provides the first video stream and another camera provides the second video stream, it is judged whether the first face is detected in the second video stream or the first video stream, and the areas shot by the first and second video streams cover different parts of the first detection area. If the positioning corresponding to the first ID is located in the first detection area, and the first face is detected in the second video stream and/or the first video stream, it is judged that the target corresponding to the first ID is located in the first detection area. This scheme makes it possible to cover a large scene through the cooperation of multiple cameras.
In another scene, a first instruction is received and a first ID set is determined, the first ID being an ID in the first ID set; one camera provides the first video stream and another camera provides the second video stream, and the areas shot by the first and second video streams at least include the same first detection area. If the positioning corresponding to the first ID is located in the first detection area and the first face is detected in the second video stream and/or the first video stream, it is judged that the target corresponding to the first ID is located in the first detection area, the first ID is deleted from the first ID set, and the method continues to judge whether the targets corresponding to the remaining first IDs in the first ID set appear in the first detection area. In this scheme, detection on the first and second video streams is executed concurrently and shares the information of the first ID set to be recognized; that is, once an ID is detected as located in the first detection area, the other video stream no longer needs to be analyzed for that ID. In M:N dynamic surveillance (face recognition of all people in a scene by computer, i.e., M people in the picture against N faces in the database), the number of faces to be recognized in each video stream is thus pruned according to the recognition results of the other streams, which improves recognition speed and accuracy. The scheme therefore suits scenes with many people that require fast roll call: for example, whereas in the prior art a single ticket gate checks tickets one at a time, a gate opening here can pass several people simultaneously and at speed, i.e., several cameras are arranged at the gate to collect several video streams respectively. A sketch of this shared-set cooperation follows.
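A hedged sketch of the shared-ID-set cooperation, assuming a hypothetical per-frame recognizer `recognize_ids(frame, targets)` that returns the IDs it confirmed; thread structure and names are illustrative.

```python
import threading

def run_gate(streams, first_id_set, recognize_ids):
    remaining = set(first_id_set)
    lock = threading.Lock()

    def worker(stream):
        for frame in stream:
            with lock:
                targets = set(remaining)
            if not targets:
                return
            for uid in recognize_ids(frame, targets):
                with lock:
                    remaining.discard(uid)   # skip this ID on all other streams

    threads = [threading.Thread(target=worker, args=(s,)) for s in streams]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    return remaining                         # IDs never confirmed at the gate
```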
In the above method, the shooting included angle between the first video stream and the second video stream is greater than 5 degrees, and the areas shot by the first and second video streams are unidirectional access areas.
In some embodiments of the present disclosure, the hardware required to implement the method includes a node that sends a positioning signal; a plurality of base stations that receive the positioning signal; a server that calculates the node position; and a network connecting the base stations and the server. The base stations are one-dimensional positioning base stations; for example, 1 base station is installed in a room, and it can only measure the distance between a node sending a test signal and itself. It is easy to understand that knowing the exact spatial position of the node relative to the base station requires at least two-dimensional positioning, but two-dimensional positioning needs more base stations than one-dimensional positioning and therefore costs more. In the above embodiment, the first detection area is the area within a certain radius of the corresponding base station: if the distance between the positioning device and the base station is smaller than a preset threshold, the positioning device is judged to be located in the first detection area (see the sketch below). In this way, the number of deployed base stations is reduced while detection accuracy is guaranteed.
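A minimal sketch of the one-dimensional judgment: with a single ranging base station per room, "inside the first detection area" reduces to a distance threshold. The radius value is illustrative.

```python
def in_detection_area(measured_range_m: float, radius_m: float = 5.0) -> bool:
    # One-dimensional UWB positioning: only the node-to-base-station
    # range is known, so area membership is a threshold comparison.
    return measured_range_m <= radius_m
```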
In an embodiment of the disclosure, before determining that the target corresponding to the first ID is located in the first detection area, the method includes:
detecting the state of the positioning device corresponding to the first ID,
if the state of the positioning device corresponding to the first ID is a wearing state,
and if the positioning corresponding to the first ID is positioned in the first detection area and the first face appears in the first video stream, judging that the target corresponding to the first ID is positioned in the first detection area.
If the state of the positioning device corresponding to the first ID is the non-wearing state, the positioning device corresponding to that ID is judged abnormal, and either the method of judging whether the target corresponding to the first ID is in the first detection area is not executed, or it is directly judged that the target corresponding to the first ID is not in the first detection area. Detecting the state of the positioning device corresponding to the first ID includes:
detecting the heart rate of the target through the positioning device,
and/or detecting a tamper circuit state of the positioning device;
if the heart rate of the target is detected and/or the tamper circuit indicates that the device has not been removed, the state of the positioning device corresponding to the first ID is the wearing state.
It can be understood that the tamper circuit works against the wearer; that is, to remove the positioning device illegally, the wearer has to break the tamper circuit, and such removal can be detected as a circuit disconnection, for example by reading a low level on a GPIO port of the circuit. A sketch of this check follows.
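An illustrative sketch of the wearing-state check, assuming hypothetical hardware accessors (`read_heart_rate`, `read_tamper_gpio`); the exact sensor interface is not specified by the patent.

```python
def wearing_state(read_heart_rate, read_tamper_gpio) -> bool:
    hr = read_heart_rate()               # assumed to return None if no pulse
    tamper_intact = read_tamper_gpio()   # assumed False (low level) if cut
    return hr is not None and tamper_intact

def should_judge_presence(uid, wearing, positioning_in_area, face_in_stream):
    # Only a worn device participates in the presence judgment; a non-worn
    # device is treated as abnormal (skip, or directly judge "not present").
    if not wearing:
        return False
    return positioning_in_area(uid) and face_in_stream(uid)
```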
In one embodiment of the disclosure, the method performed comprises:
if the positioning corresponding to the first ID is located in the first detection area, starting to acquire the first video stream.
Namely, in this embodiment, the method performed is: if the positioning corresponding to the first ID is located in the first detection area, starting to acquire the first video stream; detecting the first video stream; identifying whether the first face appears in the first video stream; and if the positioning corresponding to the first ID is located in the first detection area and the first face is detected in the first video stream, judging that the target corresponding to the first ID is present within the first detection area.
According to this method, when the positioning corresponding to the first ID is not located in the first detection area, the first video stream is not collected. In some embodiments, the first ID may be the ID of any positioning device; that is, if any positioning device appears in the first detection area, acquisition of the first video stream is triggered and it is detected whether a face appears in it. In other embodiments, the first ID may instead be an ID in the ID set of targets required to check in within the first detection area, i.e., acquisition starts only when a specific ID appears in the first detection area. This scheme reduces operating cost, since a working camera consumes a certain amount of electricity; it can be understood that starting acquisition may also mean waking the camera from a standby or dormant state to acquire the first video stream.
In one embodiment of the present disclosure, the first instruction comprises inspection task information. The inspection request is used to trigger judging whether the first ID set, or the target corresponding to the first ID, is located in the first detection area; the executing end may be a server or a device terminal. The inspection task information indicates an inspection area and an inspection object: the inspection area indicates the first detection area, and the inspection object is the person or object to be inspected, i.e., the target, from which the first ID set or the first ID can be determined. In addition, the inspection task information may further include an inspection manner, an inspection type or an inspection time, where the inspection manner includes: a short-range wireless positioning manner, an image recognition manner, or a combination of the two. The inspection type may include: automatic inspection and manual inspection. The inspection time may cover the following two cases:
1) the current time period in which the inspection request is received, for example within 10 s after receiving the request;
2) a specified inspection period, for example 12:00 to 12:01 pm.
When inspection is performed by the positioning manner and the image recognition manner simultaneously, the two manners may be triggered at the same time, or the inspection operations may be performed in parallel within the same time period, i.e., the positioning inspection result and the image inspection result are obtained over the same period. A hedged sketch of such task information follows.
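A sketch of the inspection task information enumerated above as a data structure; field names and defaults are illustrative assumptions, not the patent's wire format.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class InspectionTask:
    detection_area: str                                  # the first detection area
    object_ids: List[int] = field(default_factory=list)  # the second ID set
    manner: str = "combined"           # "positioning" | "image" | "combined"
    task_type: str = "automatic"       # "automatic" | "manual"
    window_s: Optional[float] = 10.0   # e.g. within 10 s of the request
    scheduled: Optional[str] = None    # or a fixed period, e.g. "12:00-12:01"
```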
In an embodiment of the present invention, the positioning manner adopted is short-range wireless positioning. Each time the positioning device is to be located, multiple positioning detections are performed within a certain time or at a preset frequency, and the multiple positioning results are compared to determine the final result of the current positioning detection.
By way of example: when the inspection object is positioned, the number of fixes is set based on the refresh rate of the positioning device, say N fixes; if n fixes fall in area a and m fixes fall in other areas, and n and m satisfy a preset condition, the positioning result of the inspection object is taken to be area a. For example, when n = N and m = 0, the positioning result of the inspection object is considered to be area a, as sketched below.
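A minimal sketch of that voting rule under the stated assumptions; the strict branch is the n = N, m = 0 case from the example, and the majority branch is one possible relaxation.

```python
from collections import Counter

def vote_position(fixes, strict=True):
    """fixes: per-detection area labels, e.g. ["a", "a", "b", "a", ...]."""
    if not fixes:
        return None
    counts = Counter(fixes)
    area, n = counts.most_common(1)[0]
    m = len(fixes) - n
    if strict:
        return area if m == 0 else None   # unanimous fixes required (n = N, m = 0)
    return area if n > m else None        # simple-majority variant (assumption)
```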
In one embodiment of the invention, for an inspection object that is not successfully inspected, its position can be determined through image recognition combined with short-range wireless positioning.
The short-range wireless positioning manner may be UWB (Ultra-Wideband). UWB provides both data transmission and positioning, with accurate positioning and a high transmission rate. In addition, the short-range wireless positioning manner may also be Bluetooth, Wi-Fi, or the like.
In one embodiment of the present disclosure, when positioning is performed by short-range wireless means, if an inspected object stays at a certain position for a long time, or is located at the boundary between two regions, the positioning result often jumps back and forth across that boundary. To solve this problem, the present embodiment discloses the following positioning method.
Referring to FIG. 2, a schematic flow chart of performing inspection through short-range wireless positioning according to an embodiment of the present invention, the method includes:
S201: determining at least one region corresponding to each target, and determining the transfer coefficient between any two regions;
in this embodiment, the preset at least one area may be understood as an area where the target may be active, as shown in fig. 3, in the farm, the area where the cows are active includes a milking area, an active area, a feeding area, wherein the feeding area includes: a feeding area 1, a feeding area 2, a feeding area 3 and a feeding area 4.
In this embodiment, transfer coefficients are preset: the transfer coefficient between two connected regions is x and between two unconnected regions is y, with x > y, for example x = 1 and y = 0.2. By way of example, as shown in FIG. 3, the connected region pairs include: the activity area and the milking area, the activity area and feeding area 4, feeding area 4 and feeding area 3, feeding area 3 and feeding area 2, and feeding area 2 and feeding area 1.
S202: for each target, performing multiple positioning detections, and determining the confidence of each region in the preset at least one region according to the positioning results of the multiple detections and the preset transfer coefficients between any two regions;
When inspecting a target, multiple positioning detections are performed on it, each positioning detection determining one region. At each positioning detection, the confidence of the currently located region may be calculated based on the positioning result and the transfer coefficients between any two regions. Specifically, S202 includes: for each positioning detection, respectively adjusting the confidence of the region located by the current positioning detection and the confidences of the other regions, based on the transfer coefficient between the region located by the current positioning detection and the region located by the previous positioning detection.
Here, the confidence that the current positioning detection locates in position area a may be calculated by the following formula 1):
1) V_a(k+1) = V_a(k) + P * T_(b->a)
where V_a(k+1) is the confidence of the located position area a at the (k+1)-th positioning detection, V_a(k) is the confidence of position area a at the k-th positioning detection, P is the area weight, and T_(b->a) is the transfer coefficient between position area b (the previously located area) and position area a.
The area weight may be determined in the following ways.
Mode one
During each positioning detection, multiple positioning communications are performed at a set frequency, and the area weight is determined based on the positioning result values of these communications. For example, it is set that 10 positioning communications are performed per positioning detection (i.e., 10 positioning result values are obtained per detection). If the number of positioning communications per detection is N and all N positioning result values fall in position area a, then p = p_max; when only part of the positioning result values fall in position area a, p < p_max, e.g. p = p_max/2. The positioning result value of a positioning communication may be a coordinate value.
Mode two
In each positioning detection, it is set that 1 positioning communication is required (i.e., 1 positioning result value is obtained per detection), and the area weight is determined according to M consecutive positioning result values. For example, with M = 3: if the positioning result value of the K-th positioning communication falls in area a, and the positioning result values of the (K-2)-th and (K-1)-th positioning communications also fall in position area a, then p = p_max; when only part of the (K-2)-th, (K-1)-th and K-th positioning results fall in position area a, p < p_max, e.g. p = p_max/2.
Mode three
In each positioning detection, it is set that 1 positioning communication is required (i.e., 1 positioning result value is obtained per detection), and the area weight is determined according to the positioning result value of the positioning communication and the error value corresponding to that result value.
Taking the weighted least-squares positioning algorithm as an example, the observation equation is z = Hx + v, where z is the measurement vector, H is the observation matrix, and v is the measurement error with covariance R = E(vv^T). The standard weighted least-squares estimate is
x_hat = (H^T R^(-1) H)^(-1) H^T R^(-1) z
with error covariance
P_x = (H^T R^(-1) H)^(-1).
The positioning error vector err may be set to 3 standard deviations,
err = 3 * sqrt(diag(P_x)),
so that 99.7% of positioning results fall within the positioning error. In general, the radius r of the error circle can be set to the maximum of err over the x and y dimensions; the positioning result value then lies within a circle of radius r centered on the estimate x_hat. If the circle lies entirely within position area a, the area weight p = p_max; when only part of it falls in position area a, p < p_max, e.g. p = p_max/2 (see the sketch below).
In this embodiment, in some special cases, for example when the previously located position area is a and the currently located position area is also a, the transfer coefficient is T_(a->a), which may take the maximum value; in this case the confidence can be calculated by the following formula 2):
2) V_a(k+1) = V_a(k) + P * T_(a->a)
The confidence of the other areas can be calculated by the following formula 3); that is, when the previously located position area is a and the currently located position area is also a, the confidence of another area b is:
3) V_b(k+1) = V_b(k) - P * T_(b->a)
where V_b(k+1) is the confidence of area b at the (k+1)-th positioning detection, V_b(k) is the confidence of area b at the k-th detection, P is the area weight, and T_(b->a) is the transfer coefficient between area b and area a.
Further, confidence thresholds are set, comprising a highest confidence threshold and a lowest confidence threshold (for example 50 and 0 respectively), in order to prevent the confidence of a region from increasing without bound when the positioning frequency is too high, which would reduce operation accuracy and speed, and likewise to prevent the confidence of a region from decreasing without bound.
S203: and selecting the position area with the highest confidence coefficient as a first target position area corresponding to the target.
In this embodiment, the confidences of all the activity regions are obtained through the calculation of S201 to S202 above, and the region with the highest confidence is selected as the positioning result, i.e., the first target position area. The whole of S201 to S203 is sketched below.
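A minimal sketch of S201 to S203 under formulas 1) to 3): per positioning fix, the confidence of the currently located region rises by P * T_(prev->cur) and the others fall, clamped to the thresholds; the highest-confidence region is reported. The transfer-coefficient values are the illustrative x = 1, y = 0.2 from above.

```python
V_MAX, V_MIN = 50.0, 0.0   # highest and lowest confidence thresholds

def update_confidence(V, T, prev_area, cur_area, p):
    for area in V:
        if area == cur_area:
            V[area] = min(V_MAX, V[area] + p * T[prev_area][cur_area])  # eq. 1)/2)
        else:
            V[area] = max(V_MIN, V[area] - p * T[area][cur_area])       # eq. 3)
    return max(V, key=V.get)    # S203: region with the highest confidence

# Usage sketch: connected pairs get x = 1, unconnected pairs y = 0.2.
areas = ["outdoor", "corridor", "factory"]
T = {a: {b: 0.2 for b in areas} for a in areas}
for a, b in [("outdoor", "corridor"), ("corridor", "factory")]:
    T[a][b] = T[b][a] = 1.0
for a in areas:
    T[a][a] = 1.0               # self-transfer takes the maximum value
V = {a: 0.0 for a in areas}
prev = "corridor"
for fix in ["corridor", "corridor", "factory", "factory"]:
    result = update_confidence(V, T, prev, fix, p=1.0)
    prev = fix
```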
Further, as shown in FIG. 4, the calculation of region confidence is illustrated with an actual scene:
1) Assume the regions of target activity include: outdoor, factory and corridor, where the corridor and the factory are connected through doorway 1, and the corridor and the outdoor area through doorway 2. The movement of the inspected person is shown as track 1 and track 2 in the figure: track 1 shows the person walking from the corridor into the factory, and track 2 shows the person staying in the factory but near the boundary between the factory and the outdoor area. Positions 4, 5, 6 and 7 in track 2 are positioning-drift points, as are positions 8 and 9 in track 1.
2) The transfer coefficient between any two regions is defined based on whether the regions are connected, for example T_(outdoor->corridor) = 1, T_(corridor->factory) = 1, T_(outdoor->factory) = 0.2.
3) For track 1, position point 1 and the points before it are all within the corridor, so a large corridor confidence V_corridor is accumulated, while V_outdoor = 0 and V_factory = 0. When position point 2 enters the factory (assuming all possible areas of position point 2, as determined by at least one positioning communication, are within the factory), the corridor confidence becomes V_corridor = V_corridor - T_(corridor->factory) * P; if at this moment V_corridor is still greater than V_factory, the positioning result remains constrained to the corridor. As new position points 3 and 4 are generated, V_factory increases rapidly and V_corridor decreases rapidly, and since the trajectory never goes outdoors, V_outdoor = 0, so that V_factory > V_corridor and the positioning result is constrained to the factory. The same reasoning applies to track 2.
In the above embodiment, the target is positioned multiple times, the confidence of each region is calculated from the multiple positioning results and the transfer coefficients between any two regions, and the region with the highest confidence is selected as the final positioning result. This greatly reduces the probability of erroneous positioning results caused by positioning drift.
In an embodiment of the present disclosure, before judging whether the target corresponding to the first ID appears in the first detection area, a method of automatically associating the first ID with the first face is further included. One or more objects may enter the field of view of the imaging device. When only one object is within the field of view, the positioning device closest to the imaging device is considered to be the positioning device carried by that object. When several targets carrying positioning devices enter the field of view, the correspondence between a positioning device and the object carrying it may be determined according to the movement trajectory of the positioning device entering the field of view; further, when the trajectories within the field of view are consistent, the correspondence may be determined according to the distance between the positioning device and the imaging device, or according to the time at which the positioning device entered the field of view. This association is sketched below.
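A hedged sketch of that association, assuming a hypothetical trajectory-similarity helper `trajectory_distance` and per-object dictionaries; the tie-break order (trajectory, then distance, then entry time) follows the paragraph above.

```python
def associate(devices, faces, trajectory_distance):
    """devices/faces: lists of dicts with 'id', 'track', 'dist_to_cam', 'enter_t'."""
    if len(faces) == 1 and devices:
        # Single object in view: nearest device is taken as the one it carries.
        nearest = min(devices, key=lambda d: d["dist_to_cam"])
        return {nearest["id"]: faces[0]["id"]}
    pairs = {}
    for dev in devices:
        best = min(
            faces,
            key=lambda f: (trajectory_distance(dev["track"], f["track"]),
                           abs(dev["dist_to_cam"] - f["dist_to_cam"]),
                           abs(dev["enter_t"] - f["enter_t"])),
        )
        pairs[dev["id"]] = best["id"]
    return pairs
```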
Based on the method of the foregoing embodiment, the first inspection area may be an arbitrary inspection area; that is, no inspection area is specified, and an object is considered present for roll call as long as it appears in front of any camera linked with the positioning system. In one embodiment, for an object that is not successfully inspected in the inspection result, i.e., an object marked as "not arrived", its position may be determined in a video-linkage manner; specifically, the method further includes:
acquiring a third target position area of the positioning device corresponding to the target that was not successfully inspected in the inspection result;
determining a third target image device according to the position of that target's positioning device;
detecting whether the unsuccessfully inspected target is within the detection range of the third target image device;
if the unsuccessfully inspected target is within the detection range of the third target image device, taking the third target position area of the positioning device as the position area of the unsuccessfully inspected target; in some embodiments, a video corresponding to the position area may further be provided to the server.
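The sketch below illustrates how this video-linkage fallback might look; the circular detection range and the face_seen_by callback are simplifying assumptions.

```python
# Sketch of the video-linkage fallback for a target marked "not arrived".
from dataclasses import dataclass

@dataclass
class Camera:
    cam_id: str
    position: tuple    # (x, y) of the image device
    fov_radius: float  # simplified circular detection range

def dist(a, b):
    return ((a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2) ** 0.5

def locate_unarrived(device_position, cameras, face_seen_by):
    """device_position: third target position area of the positioning device.
    face_seen_by(cam_id) stands in for face detection on that camera's stream."""
    camera = min(cameras, key=lambda c: dist(c.position, device_position))
    if dist(camera.position, device_position) <= camera.fov_radius \
            and face_seen_by(camera.cam_id):
        return device_position  # use the device's area as the target's area
    return None                 # not confirmed; the target stays "not arrived"
```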
The present invention further includes an inspection apparatus for implementing the method provided by the present invention. As shown in fig. 6, the apparatus includes a positioning node 151, a positioning base station 131, a first image acquisition device 141, and a server 121;
the first image acquisition device is used for providing a first video stream to the server;
the server includes one or more memories, one or more processors, and one or more modules stored in the memories and configured to be executed by the one or more processors. Here, "server" is to be understood as a general term for a computer with computing capability rather than as one physically independent piece of server hardware; that is, the server may be a single piece of server hardware, or may comprise at least two separate pieces of server hardware, with different hardware carrying different program modules that respectively execute part of the steps of the present invention. The one or more modules include, or respectively include, means for performing the following steps or methods:
resolving a positioning signal sent by a positioning node to a positioning base station;
performing inspection method 1, the method comprising:
if the positioning corresponding to the first ID is within the first detection area,
and a first biological feature is detected in the first video stream,
judging that the target corresponding to the first ID is located in the first detection area;
wherein:
the first ID has a corresponding relation with the first biological feature;
the first video stream is a video stream generated by shooting the first detection area;
and the target carries a positioning device corresponding to the first ID.
Or performing inspection method 2, the method comprising:
if the positioning corresponding to the first ID is within the first detection area,
detecting whether a first biological feature of a first face is present in the first video stream.
Or performing inspection method 3, the method comprising:
obtaining a first set of IDs located within the first detection region,
acquiring a corresponding face set according to the first ID set;
judging whether the face identified in the first video stream is matched with any face in the face set;
and if a match is found, detecting a first biological feature of the first face in the first video stream.
Or performing inspection method 4, the method comprising:
if the target corresponding to the first ID is located in the first detection area,
deleting the first ID from the first ID set,
and continuing to execute the next step to judge whether the targets corresponding to the remaining first IDs in the first ID set appear in the first detection area.
The server performs one of the above-described inspection methods 1 to 4, or a combination of several of them, as sketched below.
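As an illustration of how methods 1 and 4 might combine in code (the Rect type, the dictionaries, and all names are assumed for the sketch, not taken from the disclosure):

```python
# Sketch combining inspection methods 1 and 4: a target is checked off only
# when its positioning falls in the first detection area AND its registered
# face is seen in the first video stream; checked IDs are removed and the
# remaining IDs are processed in turn.
from dataclasses import dataclass

@dataclass
class Rect:
    x0: float
    y0: float
    x1: float
    y1: float
    def contains(self, p):
        return self.x0 <= p[0] <= self.x1 and self.y0 <= p[1] <= self.y1

def run_inspection(id_set, area, positions, faces_in_stream, id_to_face):
    """id_set: first ID set expected in the area (a Python set);
    positions: ID -> (x, y); faces_in_stream: faces detected in the stream."""
    present = set()
    for first_id in list(id_set):
        if area.contains(positions[first_id]) \
                and id_to_face[first_id] in faces_in_stream:
            present.add(first_id)
            id_set.discard(first_id)  # method 4: delete, continue with the rest
    return present, id_set            # checked-off IDs, still-missing IDs
```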
In some embodiments the apparatus further comprises a second image acquisition device for providing a second video stream to the server;
the one or more modules of the server include, or respectively include, means for performing inspection method 5, the method comprising:
judging whether a first biological feature of the first face is detected in a second video stream, wherein the area shot by the second video stream at least comprises the first detection area;
if the positioning corresponding to the first ID is within the first detection area,
and the first biological feature of the first face is detected in the second video stream, judging that the target corresponding to the first ID is located in the first detection area.
In some embodiments, the shooting angle between the first image acquisition device and the second image acquisition device is greater than 5 degrees.
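A sketch of the cross-validation this enables follows; representing the optical axes as 2-D direction vectors is an assumed simplification.

```python
import math

# Sketch of the dual-stream cross-check: the camera pair is usable only when
# the optical axes are separated by more than 5 degrees, and the target is
# confirmed only when both streams show the first face.
def angular_separation_deg(dir1, dir2):
    dot = dir1[0] * dir2[0] + dir1[1] * dir2[1]
    norm = math.hypot(*dir1) * math.hypot(*dir2)
    return math.degrees(math.acos(max(-1.0, min(1.0, dot / norm))))

def confirmed(in_area, face_in_stream1, face_in_stream2, cam1_dir, cam2_dir):
    return (angular_separation_deg(cam1_dir, cam2_dir) > 5.0
            and in_area and face_in_stream1 and face_in_stream2)
```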
In some embodiments, the positioning node, the positioning base station, and the server form a positioning system, and the one or more modules include instructions for: judging whether the positioning corresponding to the first ID is located in the first detection area according to a one-dimensional UWB positioning coordinate.
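The one-dimensional case admits a particularly simple containment test. The sketch below assumes a corridor-style deployment in which the UWB solution is a single coordinate along the corridor axis; the interval bounds are made-up example values.

```python
# Illustrative one-dimensional check: with base stations along a corridor,
# the UWB solution reduces to a single coordinate s along the corridor axis,
# and the first detection area becomes an interval on that axis.
def in_first_detection_area(s, s_min=12.0, s_max=15.5):
    """True if the 1-D UWB positioning coordinate s (metres along the
    corridor, assumed calibration) lies inside the detection interval."""
    return s_min <= s <= s_max
```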
In some embodiments, the positioning node comprises a heart rate detection circuit and a tamper circuit, and the server receives heart rate detection data and a tamper circuit status signal;
the one or more modules of the server include, or respectively include, means for performing the following method:
an inspection method 10, comprising:
obtaining the state of the positioning device corresponding to the first ID,
if the state of the positioning device corresponding to the first ID is a wearing state,
and if the positioning corresponding to the first ID is located in the first detection area and the first biological feature of the first face appears in the first video stream, judging that the target corresponding to the first ID is located in the first detection area.
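A sketch of this wearing-state gate (field and function names are assumptions):

```python
# Sketch of inspection method 10: the roll-call check counts only when the
# positioning device reports a wearing state, i.e. a heart rate is detected
# and the tamper circuit reports that the device has not been removed.
def is_worn(heart_rate_bpm, tamper_intact):
    return heart_rate_bpm is not None and heart_rate_bpm > 0 and tamper_intact

def present(worn, in_area, face_in_stream):
    """Target judged to be in the first detection area only if the device is
    worn AND the positioning is in the area AND the first face is seen."""
    return worn and in_area and face_in_stream
```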
In some embodiments, the positioning device is a UWB positioning device;
the one or more modules of the server include, or respectively include, means for performing inspection method 12, the method comprising:
the first biological feature is a human face;
if the target corresponding to the first ID is judged not to be in the first detection area, sending alarm information;
if the target corresponding to the first ID is judged not to be in the first detection area, obtaining a video corresponding to the positioning according to the positioning corresponding to the first ID;
and if the target corresponding to the first ID is judged to be in the first detection area, starting to acquire the first video stream.
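A sketch of this outcome handling (all handler names are assumptions):

```python
# Sketch of the outcome handling in inspection method 12: a miss triggers an
# alarm plus retrieval of video near the last positioning; a hit starts
# acquisition of the first video stream.
def handle_result(first_id, in_area, position, alarm, video_at, start_stream):
    if not in_area:
        alarm(first_id)            # send alarm information
        return video_at(position)  # video corresponding to the positioning
    return start_stream()          # begin acquiring the first video stream
```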
It should be noted that, in the present specification, the embodiments are all described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments may be referred to each other.
The previous description of the disclosed embodiments is provided to enable any person skilled in the art to make or use the present invention. Various modifications to these embodiments will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other embodiments without departing from the spirit or scope of the invention. Thus, the present invention is not intended to be limited to the embodiments shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.

Claims (10)

1. A method of verification, comprising:
if the positioning corresponding to the first ID is within the first detection area,
and a first biological feature is detected in the first video stream,
judging that the target corresponding to the first ID is located in the first detection area;
wherein:
the first ID has a corresponding relation with the first biological feature;
the first video stream is a video stream generated by shooting the first detection area;
and the target carries a positioning device corresponding to the first ID.
2. The method according to claim 1, wherein before determining that the target corresponding to the first ID is located in the first detection area, the method comprises:
if the positioning corresponding to the first ID is within the first detection area,
detecting whether the first biological feature is present in the first video stream.
3. The method according to claim 1, wherein before determining that the target corresponding to the first ID is located in the first detection area, the method comprises:
obtaining a first set of IDs located within the first detection region,
acquiring a corresponding face set according to the first ID set;
judging whether the face identified in the first video stream is matched with any face in the face set;
and if a match is found, detecting the first biological feature in the first video stream.
4. The method according to claim 3, wherein before determining that the target corresponding to the first ID is located in the first detection area, the method comprises:
if the target corresponding to the first ID is located in the first detection area,
deleting the first ID from the first ID set,
and continuing to execute the next step to judge whether the targets corresponding to the remaining first IDs in the first ID set appear in the first detection area.
5. The method according to claim 4, wherein before determining that the target corresponding to the first ID is located in the first detection area, the method comprises:
judging whether the first biological feature is detected in a second video stream, wherein the area shot by the second video stream at least comprises the first detection area;
if the positioning corresponding to the first ID is within the first detection area,
and the first biological feature is detected in the second video stream, judging that the target corresponding to the first ID is located in the first detection area.
6. The method of claim 5,
and the shooting angle between the first video stream and the second video stream is greater than 5 degrees.
7. The method of claim 1,
the first detection area is a one-way entrance area.
8. The method according to claim 1, wherein before determining that the target corresponding to the first ID is located in the first detection area, the method comprises:
the first instruction is received and the first instruction is received,
wherein the content of the first and second substances,
the first instruction indicates a first detection zone location;
the first instruction indicates a set of the second IDs;
the first ID is an ID in the second ID set.
9. The method according to claim 1, wherein before determining that the target corresponding to the first ID is located in the first detection area, the method comprises:
judging whether the positioning corresponding to the first ID is located in the first detection area through a one-dimensional UWB positioning coordinate;
and/or further comprising:
before the step of judging that the target corresponding to the first ID is located in the first detection area, the method includes:
obtaining the state of the positioning device corresponding to the first ID,
if the state of the positioning device corresponding to the first ID is a wearing state,
and if the positioning corresponding to the first ID is located in the first detection area and the first biological feature is detected in the first video stream, judging that the target corresponding to the first ID is located in the first detection area.
And/or further comprising:
the detecting the state of the positioning device corresponding to the first ID includes:
detecting the heart rate of the target through the positioning device,
and/or detecting a tamper circuit state of the positioning device;
if the heart rate of the target is detected and/or the tamper circuit indicates that the device has not been removed, the state of the positioning device corresponding to the first ID is the wearing state.
And/or further comprising:
the positioning device is a UWB positioning device;
the method comprises: the first biological feature is a human face;
and/or the method comprises: if the target corresponding to the first ID is judged not to be in the first detection area, sending alarm information;
and/or the method comprises: if the target corresponding to the first ID is judged not to be in the first detection area, obtaining a video corresponding to the positioning according to the positioning corresponding to the first ID;
and/or the method comprises: and if the target corresponding to the first ID is judged to be in the first detection area, starting to acquire the first video stream.
10. An inspection apparatus, comprising:
the system comprises a positioning node, a positioning base station, a first image acquisition device and a server;
the first image acquisition device is used for providing a first video stream to the server;
the server comprises one or more memories, one or more processors, and one or more modules stored in the memories and configured to be executed by the one or more processors, the one or more modules comprising instructions for, or respectively for:
resolving a positioning signal sent by a positioning node to a positioning base station;
and instructions corresponding to the method of any of claims 1 to 4 and 8.
CN201911197202.9A 2019-11-29 2019-11-29 Checking method and device Pending CN111263313A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911197202.9A CN111263313A (en) 2019-11-29 2019-11-29 Checking method and device

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201911197202.9A CN111263313A (en) 2019-11-29 2019-11-29 Checking method and device

Publications (1)

Publication Number Publication Date
CN111263313A true CN111263313A (en) 2020-06-09

Family

ID=70950910

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911197202.9A Pending CN111263313A (en) 2019-11-29 2019-11-29 Checking method and device

Country Status (1)

Country Link
CN (1) CN111263313A (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106169071A (en) * 2016-07-05 2016-11-30 厦门理工学院 A kind of Work attendance method based on dynamic human face and chest card recognition and system
CN106303438A (en) * 2016-08-26 2017-01-04 无锡卓信信息科技股份有限公司 A kind of indoor video monitoring system for Prison staff
CN106339650A (en) * 2016-08-26 2017-01-18 无锡卓信信息科技股份有限公司 Prison personnel indoor positioning control system based on RFID technology
CN107516353A (en) * 2017-08-31 2017-12-26 绵阳鑫阳知识产权运营有限公司 The human face identification work-attendance checking system of region detection
CN108846848A (en) * 2018-06-25 2018-11-20 广东电网有限责任公司电力科学研究院 A kind of the operation field method for early warning and device of fusion UWB positioning and video identification
CN110276261A (en) * 2019-05-23 2019-09-24 平安科技(深圳)有限公司 Personnel automatically track monitoring method, device, computer equipment and storage medium
CN110379030A (en) * 2019-07-22 2019-10-25 苏州真趣信息科技有限公司 The method, apparatus that is authenticated using positioning cards, medium

Similar Documents

Publication Publication Date Title
CN107590439A (en) Target person identification method for tracing and device based on monitor video
CN106878666A (en) The methods, devices and systems of destination object are searched based on CCTV camera
US10261164B2 (en) Active person positioning device and activity data acquisition device
JP6277766B2 (en) Pest occurrence prediction system, terminal device, server device, and pest occurrence prediction method
US20160098603A1 (en) Depth camera based detection of human subjects
CN109960969B (en) Method, device and system for generating moving route
CN106228218A (en) The intelligent control method of a kind of destination object based on movement and system
CN115223105B (en) Big data based risk information monitoring and analyzing method and system
CN110830772A (en) Kitchen video analysis resource scheduling method, device and system
CN110826496A (en) Crowd density estimation method, device, equipment and storage medium
CN113688794A (en) Identity recognition method and device, electronic equipment and computer readable storage medium
KR102244878B1 (en) Cctv security system and method based on artificial intelligence
CN109800656B (en) Positioning method and related product
CN111652128B (en) High-altitude power operation safety monitoring method, system and storage device
CN111091047B (en) Living body detection method and device, server and face recognition equipment
CN105632003A (en) Evaluation method and apparatus thereof for customs clearance port queuing time
CN112070185A (en) Re-ID-based non-contact fever person tracking system and tracking method thereof
CN111263313A (en) Checking method and device
CN109327681B (en) Specific personnel identification alarm system and method thereof
CN113837138B (en) Dressing monitoring method, dressing monitoring system, dressing monitoring medium and electronic terminal
CN109120896B (en) Security video monitoring guard system
CN112235589B (en) Live network identification method, edge server, computer equipment and storage medium
CN115953815A (en) Monitoring method and device for infrastructure site
CN114726841A (en) Scenic spot management method based on Internet of things platform
CN114463873A (en) Patrol system for community

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination