WO2022160616A1 - Passage detection method and apparatus, electronic device, and computer-readable storage medium - Google Patents
- Publication number: WO2022160616A1 (PCT application PCT/CN2021/106907)
- Authority: WIPO (PCT)
Classifications
- G07C9/30: Individual registration on entry or exit not involving the use of a pass
- G07C9/10: Individual registration on entry or exit; movable barriers with registering means
- G06T7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T7/277: Analysis of motion involving stochastic approaches, e.g. using Kalman filters
- G06T7/73: Determining position or orientation of objects or cameras using feature-based methods
- G06V10/25: Determination of region of interest [ROI] or a volume of interest [VOI]
- G06V20/40: Scenes; scene-specific elements in video content
- G06V40/10: Human or animal bodies, e.g. vehicle occupants or pedestrians; body parts, e.g. hands
- G06T2207/10016: Image acquisition modality: video; image sequence
- G06T2207/30196: Subject of image: human being; person
Definitions
- The present disclosure relates to the technical field of computer vision, and in particular to a passage detection method and apparatus, an electronic device, and a computer-readable storage medium.
- Access control systems usually include gates and similar equipment that release a passer-by after identity verification, and are widely used in subways, airports, government buildings, and other scenarios that require identity verification.
- A passer-by usually stands at the entrance of the gate for verification; after the gate opens, a single person passes through quickly. Sensors in the passageway judge the state and legitimacy of the passage and decide whether to close the gate and whether to raise an alarm.
- The present disclosure provides a passage detection method and apparatus, an electronic device, and a computer-readable storage medium.
- A passage detection method includes: performing detection processing on a video frame of a detection video to obtain first position information of objects to be passed in the video frame and second position information of a target item, where the detection video is a video shot at the passageway; matching the objects to be passed against the target item according to the first position information and the second position information, and determining the target object carrying the target item; and generating a detection result at least according to target position information of the target object, and sending the detection result to the access control system of the passageway.
- A detection result can thus be generated when an object to be passed is detected to be carrying a target item.
- The detection result can indicate that the object currently passing through the access control system is carrying the target item, so the access control system can wait until both the target object and the item it carries have passed through the passageway before closing. This reduces failures to pass, accidental injury to the object carrying the item, and similar problems.
- The detection video at the passageway is acquired through an image acquisition component and processed to generate the detection result, without relying heavily on the sensors of the access control system. This can reduce the number of sensors at the access control system and the space they occupy, and can improve the accuracy of judging the passing state of the objects to be passed.
- The method further includes: receiving verification information of the access control system when the target position information indicates that the target object is located at a preset position. Generating a detection result at least according to the target position information of the target object then includes: generating the detection result when the verification information indicates that verification has passed.
- The first position information includes position information of a first selection box framing the head and shoulders of an object. Performing detection processing on the video frame of the detection video to obtain the first position information of the objects to be passed includes: performing detection processing on the video frame to obtain the position information of the first selection box framing the head-and-shoulder part of each object.
- The preset position includes a preset area outside the access control system of the passageway. Receiving the verification information of the access control system when the target position information indicates that the target object is located at the preset position includes: determining the position information of the first selection boxes of multiple objects in the video frame, and receiving the verification information of the access control system when the first selection box of the target object is located in the preset area, or when the intersection-over-union of the first selection box and the preset area is greater than or equal to a first threshold.
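The preset-area check can be sketched in Python. This is an illustrative implementation, not the patent's own code; the (x1, y1, x2, y2) box format and the 0.5 threshold are assumptions.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

def contained(inner, outer):
    """True if box `inner` lies entirely inside box `outer`."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def should_request_verification(head_shoulder_box, preset_area, first_threshold=0.5):
    """Request identity verification when the first selection box is inside
    the preset area, or overlaps it with IoU >= the first threshold."""
    return (contained(head_shoulder_box, preset_area)
            or iou(head_shoulder_box, preset_area) >= first_threshold)
```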
- The method further includes: performing action recognition processing on an object when the first position information indicates that the object is at a preset position, and determining whether the object performs a predetermined behavior; and, when the object performs the predetermined behavior, saving the video segment of the detection video that records the predetermined behavior.
- In this way, violations can be detected while objects pass through the access control system, and video records of the violations can be saved, which facilitates investigation of violations and provides a basis for access control management.
- The first position information includes position information of a second selection box framing the body part of an object, and the second position information includes position information of a third selection box framing the target item. Matching the objects to be passed against the target item and determining the target object carrying the target item includes: determining a first object as the target object matching the target item when the intersection-over-union of the second selection box of the first object and the third selection box is greater than or equal to a second threshold, where the first object is any object.
- The method further includes: generating a passage record when an object passes through the passageway.
- Generating a passage record includes: generating the passage record when the target object and the target item matching the target object pass through the passageway.
- A passage detection apparatus includes an image acquisition component and a processing component. The image acquisition component is configured to acquire a detection video of the passageway. The processing component is configured to: perform detection processing on a video frame of the detection video to obtain first position information of objects to be passed in the video frame and second position information of a target item; match the objects to be passed against the target item according to the first position information and the second position information, and determine the target object carrying the target item; and generate a detection result at least according to target position information of the target object, and send the detection result to the access control system of the passageway.
- The processing component is further configured to receive verification information of the access control system when the target position information indicates that the target object is located at a preset position; generating a detection result at least according to the target position information of the target object includes generating the detection result when the verification information indicates that verification has passed.
- The first position information includes position information of the first selection box framing the head and shoulders of an object, and the processing component is specifically configured to perform detection processing on the video frame of the detection video to obtain the position information of the first selection box framing the head-and-shoulder part of each object.
- The preset position includes a preset area outside the access control system of the passageway. The processing component is further configured to determine the position information of the first selection boxes of the multiple objects in the video frame, and to receive the verification information of the access control system when the first selection box of the target object is located in the preset area, or when the intersection-over-union of the first selection box and the preset area is greater than or equal to the first threshold.
- The apparatus further includes a storage component. The processing component is further configured to perform action recognition processing on an object when the first position information indicates that the object is located at a preset position, and to determine whether the object performs a predetermined behavior; when the object performs the predetermined behavior, the video segment of the detection video that records the predetermined behavior is saved to the storage component.
- The first position information includes position information of a second selection box framing the body part of an object, and the second position information includes position information of a third selection box framing the target item. The processing component is further configured to determine a first object as the target object matching the target item when the intersection-over-union of the second selection box of the first object and the third selection box is greater than or equal to the second threshold, where the first object is any object.
- The processing component is further configured to generate a passage record when an object passes through the passageway.
- The processing component is further configured to generate the passage record when the target object and the target item matching the target object pass through the passageway.
- An electronic device includes: a processor; and a memory for storing processor-executable instructions, where the processor is configured to execute the above passage detection method.
- A computer-readable storage medium has computer program instructions stored thereon, and the computer program instructions, when executed by a processor, implement the above passage detection method.
- A computer program includes computer-readable code; when the code runs in an electronic device, a processor in the electronic device executes the passage detection method provided by the present disclosure.
- FIG. 1A shows a first schematic diagram of an application scenario of an electronic device according to an embodiment of the present disclosure;
- FIG. 1B shows a second schematic diagram of an application scenario of an electronic device according to an embodiment of the present disclosure;
- FIG. 2 shows a first flowchart of a passage detection method according to an embodiment of the present disclosure;
- FIG. 3 shows a schematic diagram of a target detection network according to an embodiment of the present disclosure;
- FIG. 4 shows a schematic diagram of an action recognition network according to an embodiment of the present disclosure;
- FIG. 5 shows a schematic diagram of an application of the passage detection method according to an embodiment of the present disclosure;
- FIG. 6 shows a second flowchart of the passage detection method according to an embodiment of the present disclosure;
- FIG. 7 shows a block diagram of a passage detection apparatus according to an embodiment of the present disclosure;
- FIG. 8 shows a first block diagram of an electronic device according to an embodiment of the present disclosure;
- FIG. 9 shows a second block diagram of an electronic device according to an embodiment of the present disclosure.
- The execution body of the passage detection method provided by the embodiments of the present disclosure may be an electronic device, or may be understood as a processing component in the electronic device.
- The electronic device 10 may include a processing component 11 and an image acquisition component 12.
- The electronic device 10 may be disposed at the passageway; it may collect the detection video at the passageway in real time through the image acquisition component 12, and the processing component 11 may perform detection processing on the detection video to generate the detection result.
- The processing component 11 can transmit the detection result to the access control system 20 of the passageway.
- The electronic device 10 may be connected to the access control system 20 in a wired or wireless manner.
- For example, the electronic device 10 can be connected to the access control system 20 through a wireless WiFi network, or through a transmission line.
- The embodiments of the present disclosure do not limit the connection method between the electronic device 10 and the access control system 20.
- The electronic device 10 may include only the processing component 11.
- In this case, the electronic device 10 can be connected to an image capture device 30 that includes the image acquisition component.
- For example, the electronic device 10 can be connected to the image capture device 30 through a USB interface or a WiFi network.
- The embodiments of the present disclosure do not limit the connection method.
- The image capture device 30 may be disposed at the passageway and used to capture the detection video there.
- The processing component 11 of the electronic device 10 can receive, through the network, the detection video transmitted by the independent image capture device 30, analyze and process it to generate a detection result, and send the detection result to the access control system 20 of the passageway.
- FIG. 2 shows a flowchart of a passage detection method according to an embodiment of the present disclosure. As shown in FIG. 2, the method includes:
- Step S11: perform detection processing on a video frame of the detection video to obtain first position information of the objects to be passed in the video frame and second position information of the target item, where the detection video is a video shot at the passageway;
- Step S12: determine, among the objects to be passed and according to the first position information and the second position information, a target object that matches the target item;
- Step S13: generate a detection result at least according to the target position information of the target object, and send the detection result to the access control system of the passageway.
- In this way, a detection result can be generated when an object to be passed is detected to be carrying a target item.
- The detection result can indicate that the object currently passing through the access control system is carrying the target item, so the access control system can wait until both the target object and the item it carries have passed through the passageway before closing. This reduces failures to pass, accidental injury, and similar problems.
- The electronic device can obtain the detection video at the passageway and process it to generate the detection result without relying heavily on the sensors of the access control system, which reduces the number of sensors at the access control system and the space they occupy, and improves the accuracy of judging the passing state of the objects to be passed.
- Some or all of the objects to be passed may carry or ride certain items; for example, some objects may carry strollers or luggage, and some may ride wheelchairs. Because these items are large, they can easily cause the access control system to misjudge the passage.
- For example, the access control system (such as a gate) may close quickly, causing the object to fail to pass.
- Alternatively, the gate may close when the object has passed but the item has not, so that the item cannot pass, or when the item has passed but the object has not.
- To address this, the access control system may be prompted when a person carrying a stroller, luggage, or the like passes through: for example, it may be informed that two targets (the person and the item) are passing, so that it closes only after both targets have passed.
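The "two targets" bookkeeping can be illustrated with a minimal sketch; the target identifiers and the set-based representation are hypothetical, not part of the disclosure:

```python
def gate_may_close(expected_targets, passed_targets):
    """The gate is allowed to close only once every expected target
    (the person and each item matched to that person) has passed."""
    return expected_targets <= passed_targets  # subset test

# Hypothetical IDs: person "p1" carries stroller "i1".
expected = {"p1", "i1"}
still_open = not gate_may_close(expected, {"p1"})  # stroller not through yet
```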
- The detection video may be captured by an image acquisition component such as a camera, and is a video captured at the passageway.
- The camera can be set above the access control system so that the detection video is shot from a top-down angle.
- One or more objects to be passed can be captured in the detection video. If some objects carry target items such as luggage or strollers, those target items can also be captured in the detection video.
- The processing component can receive the detection video, perform detection processing on its video frames, determine through matching the target object carrying the target item, and, when the target object passes through the access control system, generate the detection result and send it to the access control system, so that the access control system closes only after both the target object and the target item have passed.
- Specifically, the processing component can determine the first position information of the objects to be passed in the video frame and the second position information of the target item, and determine the target object that matches the target item, that is, determine which of the multiple objects carries the target item. It can then generate and send the detection result when the target object passes through the passageway with the target item, so that the access control system closes after both have passed, reducing the probability of failures to pass and accidental injury.
- The processing component may perform frame-by-frame detection on the detection video, or sample frames for detection.
- Frame sampling can be selected; for example, a frame can be sampled and detected every 2 seconds. The present disclosure does not limit the sampling interval.
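For illustration, the sampling schedule might be computed as follows; the helper name and the 25 fps figure are assumptions for the example:

```python
def sampled_frame_indices(total_frames, fps, interval_s=2.0):
    """Indices of the frames to run detection on when sampling the
    detection video every `interval_s` seconds rather than every frame."""
    step = max(1, round(fps * interval_s))
    return list(range(0, total_frames, step))

# e.g. a 10-second clip at 25 fps, sampled every 2 seconds:
# detect frames 0, 50, 100, 150, 200
```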
- A video frame may be detected by a neural network. For example, a single-stage object detection network (e.g., RetinaNet) may be used to detect the objects and target items in a video frame: it can identify the objects in the frame and target items of specific categories, and determine the first position information of the objects and the second position information of the target items.
- FIG. 3 shows a schematic diagram of a target detection network according to an embodiment of the present disclosure.
- The target detection network may include a feature extraction network; a multi-level feature extraction network may be used to extract features from video frames.
- For example, the feature extraction network can be MobileNetV2, a lightweight multi-level feature extraction network. Other feature extraction networks can also be used; the present disclosure does not limit the type of feature extraction network.
- The feature pyramid may include 4 levels, and each level may include 3 anchors. The present disclosure does not limit the number of levels or anchors.
- Each level of the feature pyramid may feed two sub-networks: a category sub-network, used to identify the category of each object in the video frame (that is, to identify the objects and the target items of specific categories), and a location sub-network, which determines the positions of the objects and target items and generates the selection boxes framing them.
- The position information of a selection box framing an object is the first position information, and the position information of a selection box framing a target item is the second position information.
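The detector's per-frame output can be thought of as (label, box) pairs produced jointly by the two sub-networks; a sketch of grouping them into the first, second, and third selection boxes described here, with labels and box format assumed for illustration:

```python
from collections import defaultdict

# Hypothetical labels for the three kinds of selection boxes.
HEAD_SHOULDER, BODY, ITEM = "head_shoulder", "body", "item"

def split_detections(detections):
    """Group (label, box) outputs of the detection network into first
    (head-shoulder), second (body), and third (item) selection boxes."""
    groups = defaultdict(list)
    for label, box in detections:
        groups[label].append(box)
    return groups[HEAD_SHOULDER], groups[BODY], groups[ITEM]
```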
- The first position information may include the position information of a first selection box framing the head-and-shoulder part of an object and the position information of a second selection box framing the body part of the object. The position information of the first selection box can be used to track the motion trajectory of each object.
- Step S11 may include: performing detection processing on the video frame of the detection video to obtain the position information of the first selection box framing the head-and-shoulder part of each object.
- The position of an object may be represented by the position information of its first selection box, and the position of each object can be tracked using this information.
- The position information of the second selection box may be matched against the second position information of the target item to determine the target object matching the target item, that is, to select the object carrying the target item from among multiple objects.
- One or more objects may be detected in a given video frame, and some or all of them may carry target items.
- The target object carrying the target item may be determined so that, when the target object is about to pass through the access control system of the passageway, a detection result indicating that the object is carrying a target item (e.g., luggage or a stroller) can be generated.
- The access control system can receive the detection result and close only after both the target object and the target item have passed through, reducing the probability of failure to pass.
- the position information of the second selection box of the body part of the object may be matched with the second position information of the target item.
- The second position information may include the position information of the third selection box framing the target item, and step S12 may include: determining a first object as the target object matching the target item when the intersection-over-union of the second selection box of the first object and the third selection box is greater than or equal to the second threshold, where the first object is any object.
- That is, the target object matching the target item may be determined by computing the intersection-over-union of the third selection box with the second selection box of each object in the video frame.
- An object may carry a large item such as luggage or a stroller, which is usually placed on the ground close to the carrier's body, i.e., beside them. Therefore, the second selection box framing the carrier's body part and the third selection box framing the target item usually overlap in the video frame.
- By contrast, the third selection box framing the target item and the second selection boxes framing other people's body parts usually do not overlap, or overlap only slightly. The target object matching the target item can therefore be determined from the overlap between the carrier's second selection box and the third selection box.
- The size of the overlap may be represented by the intersection-over-union (IoU) of the second selection box and the third selection box: a large IoU means a large overlap, a small IoU means a small overlap, and an IoU of 0 means the two boxes do not overlap.
- A second threshold for the IoU can be set. Among the second selection boxes of the objects in the video frame, a second selection box whose IoU with the third selection box is greater than or equal to the second threshold is determined to be the second selection box of the target object; that is, the body part it frames belongs to the target object, and the target object is the object matching the target item.
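The carrier-matching rule can be sketched as follows; the IoU helper, the (x1, y1, x2, y2) box format, and the 0.3 threshold are illustrative assumptions rather than values from the disclosure:

```python
def iou(a, b):
    """Intersection-over-union of two boxes given as (x1, y1, x2, y2)."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1])
             + (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union else 0.0

def match_carriers(body_boxes, item_boxes, second_threshold=0.3):
    """For each third selection box (item), pick the object whose second
    selection box (body) overlaps it with the highest IoU at or above the
    second threshold; items with no such object stay unmatched."""
    matches = {}
    for i, item in enumerate(item_boxes):
        best, best_iou = None, second_threshold
        for j, body in enumerate(body_boxes):
            v = iou(body, item)
            if v >= best_iou:
                best, best_iou = j, v
        if best is not None:
            matches[i] = best  # item i is carried by object `best`
    return matches
```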
- the target objects matched with each target item detected in the video frame can be determined in the above manner.
- In this way, the target object matching each target item can be determined accurately through the IoU between the second selection box of a body part and the third selection box of the target item. Detection results can then be generated when the carrier passes through the access control system, reducing the probability of failure to pass.
- Trajectory tracking can be performed for each object, that is, the position information of the same object and/or target item can be determined across multiple video frames. An object without a target item passes after verification; for a target object with a target item, a detection result is generated after verification to prompt the access control system to close only after both the target object and the target item have passed.
- the detection video may include multiple objects and multiple target items, and the movement trajectories of the multiple objects and the target items may be tracked respectively.
- a multi-object tracking method combining bipartite graph matching and Kalman filtering can be employed to simultaneously track the positions of multiple objects and target items.
- detection processing can be performed on adjacent video frames (if the processing component processes the detection video frame by frame, the adjacent frames are adjacent frames of the detection video; if the processing component performs frame extraction, the adjacent frames are adjacent extracted frames) to determine the position information of the object and/or the target item, i.e., the selection boxes, and to match the object and/or the target item, i.e., to determine whether the selection boxes in adjacent video frames select the same object and/or target item.
- the same object and/or target item selected by the selection boxes in adjacent video frames can be determined by means of the intersection ratio. For example, in one video frame the selection box of a target item is located in area A, and in the adjacent video frame the selection box of a target item is located in area B; if the intersection ratio between area A and area B reaches a preset threshold, it can be determined that the two adjacent video frames select the same target item.
- the position of each target item and/or object in the following video frame can also be predicted, for example by a Kalman filtering algorithm, and the predicted position can be matched against the actual position of the target item and/or object detected in that frame. If the intersection ratio between the predicted position and the actual position is greater than or equal to a prediction threshold, it can be determined that the same object and/or target item is framed in the two adjacent video frames; if the intersection ratio does not reach the prediction threshold, the frames may not select the same object and/or target item, which may be caused, for example, by an object leaving the queue. One or more target items and/or objects in the video can be tracked and detected in this manner.
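The prediction-and-association step can be roughly sketched as below. This is a simplified stand-in under stated assumptions: a constant-velocity predictor takes the place of the full Kalman filter, and a greedy IoU pairing takes the place of the bipartite matching; all names and the threshold value are illustrative.

```python
def iou(box_a, box_b):
    """Intersection over union of (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

class Track:
    """Minimal constant-velocity track (predict step only, no covariance)."""
    def __init__(self, box):
        self.box = box
        self.velocity = (0.0, 0.0)

    def predict(self):
        # Shift the last box by the last observed center displacement.
        dx, dy = self.velocity
        x1, y1, x2, y2 = self.box
        return (x1 + dx, y1 + dy, x2 + dx, y2 + dy)

    def update(self, box):
        # Velocity = displacement of the box center between frames.
        old_cx = (self.box[0] + self.box[2]) / 2
        old_cy = (self.box[1] + self.box[3]) / 2
        new_cx = (box[0] + box[2]) / 2
        new_cy = (box[1] + box[3]) / 2
        self.velocity = (new_cx - old_cx, new_cy - old_cy)
        self.box = box

def associate(tracks, detections, prediction_threshold=0.3):
    """Greedily pair each predicted track box with the unused detection of
    highest IoU; pairs below the prediction threshold stay unmatched."""
    matches, used = {}, set()
    for t_idx, track in enumerate(tracks):
        pred = track.predict()
        best_d, best_iou = None, prediction_threshold
        for d_idx, det in enumerate(detections):
            if d_idx in used:
                continue
            score = iou(pred, det)
            if score >= best_iou:
                best_d, best_iou = d_idx, score
        if best_d is not None:
            matches[t_idx] = best_d
            used.add(best_d)
    return matches
```

A detection that falls below the prediction threshold is left unmatched, mirroring the case above where an object leaves the queue.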
- the position and motion trajectory of each object can be determined through the above method, and the verification information of the access control system is received when an object reaches a preset position (for example, in front of the access control system).
- whether or not an object carries a target item, it needs to be verified when passing through the access control system. If the verification fails, the gate is not opened, preventing the passage of objects that fail verification.
- the method further includes: receiving the verification information of the access control system when the target position information indicates that the target object is located at a preset position; wherein generating the detection result at least according to the target position information of the target object includes: generating the detection result when the verification information indicates that verification has passed. That is, if it is determined that the target object has reached the preset position, the access control system also verifies the target object; if the verification passes, the target object can be released, and the detection result may be generated to prompt the access control system to close after both the target object and the target item have passed.
- the first position information includes the position information of the first selection box that frames the head and shoulders of the object. That is, if the first selection box is within the preset position (for example, a preset area in front of the access control system), or has a large overlapping area with the preset position, it can be determined that the object has reached the preset area.
- the target position information is the first position information of the target object, i.e., the position information of the first selection box that frames the head and shoulders of the target object, and the preset position includes a preset area outside the access control system of the passageway. Receiving the verification information of the access control system when the target position information indicates that the target object is located at the preset position includes: determining the position information of the first selection boxes of multiple objects in the video frame; and receiving the verification information of the access control system when the first selection box of the target object is located in the preset area, or the intersection ratio of the first selection box and the preset area is greater than or equal to a first threshold.
- the position information of multiple objects can be tracked simultaneously, and the verification information of the access control system is received when it is determined that an object has reached the target position; the object may be a target object carrying a target item.
- when the first selection box is within the preset area outside the access control system, or overlaps the preset area substantially (intersection ratio greater than or equal to the first threshold), the verification information of the access control system can be received; if the verification passes, a detection result can be generated.
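A minimal sketch of the preset-area test described above. The coordinate convention, the containment check, and the default threshold are assumptions for illustration only.

```python
def iou(box_a, box_b):
    """Intersection over union of (x1, y1, x2, y2) boxes."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

def should_request_verification(head_shoulder_box, preset_area,
                                first_threshold=0.5):
    """True when the first selection box lies inside the preset area in
    front of the access control, or overlaps it by at least the first
    threshold; triggers receiving the verification information."""
    x1, y1, x2, y2 = head_shoulder_box
    ax1, ay1, ax2, ay2 = preset_area
    inside = ax1 <= x1 and ay1 <= y1 and x2 <= ax2 and y2 <= ay2
    return inside or iou(head_shoulder_box, preset_area) >= first_threshold
```

Either condition alone suffices: full containment covers a small head-and-shoulders box deep inside the area, while the IoU branch covers a box straddling the area boundary.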
- the method further comprises: generating a pass record when the object passes through the passageway.
- after the object passes through the access control system (e.g., a gate), the processing component can generate a pass record. For example, the feature information of the object that passed can be numbered (e.g., a code corresponding to the object's head-and-shoulder feature information is generated), and the feature information and the code are stored in a database for easy query.
- a passing record may also be generated based on the feature information and passing time of the target object, and stored in a database, and the present disclosure does not limit the content contained in the passing record.
- the processing component may be connected to a storage component, and the database may be provided in the storage component for saving the passing records.
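One way the numbering of feature information might look in practice is sketched below. The hash-based code derivation and the record fields are illustrative assumptions; the source does not specify how the code is generated from the head-and-shoulder feature information.

```python
import hashlib

def make_pass_record(head_shoulder_feature: bytes, passed_at: float) -> dict:
    # Derive a short, stable code from the object's head-and-shoulder
    # feature information (SHA-256 truncation is an assumption).
    code = hashlib.sha256(head_shoulder_feature).hexdigest()[:16]
    # The record pairs the code with the passing time for later queries.
    return {"code": code, "passed_at": passed_at}
```

The same feature bytes always yield the same code, so records for one object can be grouped when querying the database.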
- the access control system can be closed after both the target object and the target item have passed, and a pass record can be generated after both the target object and the target item have passed.
- generating a pass record includes: generating a pass record when the target object and a target item matching the target object pass through the passageway. That is, the processing component may further determine that both the target object and the target item have passed through the access control system of the passageway, i.e., that no passing failure occurred, before generating the pass record.
- the above describes the case where verification passes, but verification may also fail. If an object fails verification and still forcibly breaks through the access control system, or forcibly breaks through the access control system without verification, the behavior can be considered a violation.
- the processing component may identify and record other predetermined behaviors. For example, if an object is identified as carrying a dangerous item, the behavior can be considered a predetermined behavior and the behavior recorded.
- the system further includes a storage component for storing video clips of the detection video, and the method further includes: when the first position information indicates that the object is located at a preset position, performing action recognition processing on the object to determine whether the object performs a predetermined behavior; and, if the object performs a predetermined behavior, saving the video clip in the detection video that records the predetermined behavior.
- the access control system can verify the object, and if the object passes through the access control system without verification or without passing verification, the processing component can determine the behavior to be a predetermined behavior (a violation).
- the predetermined behavior can be determined through the action recognition network. For example, after verification fails, if the behavior of the object passing through the access control system is detected, the behavior is determined to be the predetermined behavior.
- the predetermined behavior may include climbing over the gates of the access control system or crawling under the gates of the access control system, etc. The present disclosure does not limit the types of predetermined behaviors.
- FIG. 4 shows a schematic diagram of an action recognition network according to an embodiment of the present disclosure.
- the action recognition network can perform recognition processing on the detection video frame by frame, or perform recognition processing on sampled video frames after sampling the detection video.
- the present disclosure does not limit the number of video frames processed by the action recognition network.
- frame extraction can be performed on the detected video.
- frame extraction can be performed on the video between the moment when it is determined that the object fails verification and the moment when the object passes through the access control system.
- for example, 8 video frames may be extracted, or one video frame may be extracted every 2 seconds; the manner of frame extraction is not limited in the present disclosure.
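The two sampling policies mentioned (eight evenly spaced frames, or one frame every two seconds) can be sketched as index generators. The function names and the even-spacing reading of "extract 8 frames" are assumptions.

```python
def evenly_spaced_indices(num_frames: int, num_samples: int = 8) -> list:
    # Pick `num_samples` roughly evenly spaced frame indices from the clip;
    # if the clip is shorter than that, keep every frame.
    if num_frames <= num_samples:
        return list(range(num_frames))
    step = num_frames / num_samples
    return [int(i * step) for i in range(num_samples)]

def every_n_seconds_indices(num_frames: int, fps: float,
                            seconds: float = 2.0) -> list:
    # Pick one frame index every `seconds` seconds of video.
    step = max(int(fps * seconds), 1)
    return list(range(0, num_frames, step))
```

For an 80-frame clip the first policy yields indices 0, 10, ..., 70; for a 4-second clip at 25 fps the second policy yields indices 0 and 50.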
- the action recognition network may perform recognition processing on the extracted video frames. For example, feature extraction processing may be performed on the multiple extracted video frames, e.g., through the MobileNetV2 feature extraction network, to obtain the feature map of each video frame. Further, spatiotemporal modeling can be performed on the feature maps of the video frames through the action recognition network to determine the action type of the object in the video frames. If the action type is determined to be a violation, it can be determined that the object is engaged in a violation (e.g., breaking through a gate).
- the processing component may save a video clip in which the predetermined action occurs.
- it can be stored in the storage component described above.
- the video frames extracted during the action recognition process may be saved, or the video clip between the moment when the object fails verification and the moment when the object passes through the access control system may be saved. If the object forcibly breaks through the access control system, the video clip between the moment when the predetermined behavior is detected and the moment when the object breaks through the access control system can be saved.
- violations can be detected when objects pass through the access control system, and video records of the violations can be saved to facilitate investigation of violations and provide a basis for access control management.
- FIG. 5 shows a schematic diagram of an application of the passage detection method according to an embodiment of the present disclosure. As shown in FIG. 5, multiple objects queue in front of the access control system of the passageway, waiting to pass, and some objects carry target items such as luggage.
- the traffic detection method provided by the embodiment of the present application may include the following steps:
- Step 1 Obtain the detection video.
- the processing component may use a camera to capture a detection video at the passageway, and the captured detection video may include multiple objects queuing to pass through the access control system.
- Step 2 Identify and detect objects in the video.
- the processing component can perform detection processing on the video frames of the detection video to determine the target items and objects in the video frames. Further, the target object matching a target item, i.e., the object carrying the target item, can be selected from multiple objects through the intersection ratio between the selection box framing the target item and the selection box framing the target body part.
- Step 3 Track the position of the object.
- the processing component may analyze and detect multiple video frames of the video, and track the motion trajectory of each object, that is, obtain the position of each object in real time.
- Step 4 When it is determined that the object reaches the preset position, the verification information of the object is obtained.
- the processing component may receive the verification information for the object sent by the access control system when the selection box of the head and shoulders of the object reaches a preset position in front of the access control system.
- Step 5 Based on the verification information, determine whether the object has passed the verification.
- the processing component can determine whether the current object has passed the verification according to the verification information.
- if the current object passes the verification, steps 6 to 8 may be performed; if the current object fails the verification, steps 9 to 10 may be performed.
- Step 6 If the verification information indicates that the verification of the object is passed, determine whether the object is the target object carrying the target item.
- if the current object is the target object carrying the target item, step 7 is performed; otherwise, step 8 is performed.
- Step 7 If the object is a target object carrying the target item, a detection result is generated to prompt the access control system to be closed after both the target object and the target item have passed.
- the processing component can send the detection result to the access control system, and issue a security warning through the access control system, so that the access control system is closed after both the target object and the target item have passed.
- the processing component may save the passing record of the object.
- Step 8 If the object is not the target object carrying the target item, notify the access control system to open the door.
- the processing component can notify the access control system to open the door normally.
- the processing component may save the passing record of the object.
- Step 9 If the verification information indicates that the verification of the object fails, then identify whether the object has a violation.
- Step 10 If the object has a violation, notify the access control system to output alarm information, and record the violation.
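Steps 5 to 10 above amount to a small dispatch, which could be sketched as follows; the return labels and parameter names are illustrative only, not terms from the source.

```python
def handle_object_at_gate(verified: bool, carries_target_item: bool,
                          violation_detected: bool) -> str:
    # Steps 6-7: verified carrier -> close only after object and item pass.
    if verified and carries_target_item:
        return "close_after_object_and_item_pass"
    # Step 8: verified, no target item -> open the gate normally.
    if verified:
        return "open_normally"
    # Steps 9-10: failed verification plus a violation -> alarm and record.
    if violation_detected:
        return "alarm_and_record_violation"
    # Failed verification with no violation observed -> gate stays closed.
    return "keep_closed"
```

Each branch corresponds to one outcome of the flow: the carrier path defers closing, the non-carrier path opens normally, and the failure path raises an alarm and records the violation.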
- the processing component can detect the violation of the object and save the video clip in which the violation occurred.
- a detection result can be generated when it is detected that an object to be passed carries a target item.
- the detection result can be used to instruct the access control system to close after the object and the target item it carries have both passed through the passageway, which can reduce failures or accidental injuries involving objects carrying target items.
- the method can also identify predetermined behaviors such as violations and save the video records of the predetermined behaviors, so as to facilitate the investigation of the predetermined behaviors and provide a basis for access control management.
- the method obtains the detection video at the passageway through the image acquisition component and processes the detection video to generate the detection result, without relying excessively on the sensors of the access control system. This can reduce the number of sensors at the access control system, reduce the space they occupy, and improve the accuracy of judging the passing state of the object to be passed.
- the electronic device of the embodiment of the present disclosure may be integrated into an access control system. For example, in subways, airports, government agencies and other scenarios where identity must be verified before passing, when the object passing through the access control carries luggage or other items, the access control can be closed after both the object and the item have passed, reducing the probability of failures and accidental injuries. Video clips of violations, such as an object forcibly passing through the access control system, can also be saved for use in managing the access control system.
- the present disclosure does not limit the application field of the passing detection method.
- FIG. 7 shows a block diagram of a passage detection apparatus according to an embodiment of the present disclosure.
- the apparatus includes an image acquisition component 11 and a processing component 12.
- the image acquisition component 11 is configured to acquire a detection video of a passageway
- the processing component 12 is configured to perform detection processing on the video frames of the detection video to obtain the first position information of the objects to be passed and the second position information of the target items in the video frames; match the objects to be passed with the target items according to the first position information and the second position information to determine the target object carrying a target item; generate a detection result at least according to the target position information of the target object; and send the detection result to the access control system of the passageway.
- the processing component 12 is further configured to receive verification information of the access control system when the target position information indicates that the target object is located at a preset position; wherein at least Generating a detection result according to the target position information of the target object includes: generating the detection result when the verification information is verified as passed.
- the first position information includes the position information of the first selection box framing the head and shoulders of the object, wherein the processing component 12 is specifically configured to: perform detection processing on the video frames of the detection video to obtain the position information of the first selection box framing the head and shoulders of the object.
- the preset position includes a preset area outside the access control system of the passageway
- the processing component 12 is specifically configured to: determine the position information of the first selection boxes of multiple objects in the video frame; and receive the verification information of the access control system when the first selection box of the target object is located in the preset area, or the intersection ratio of the first selection box and the preset area is greater than or equal to the first threshold.
- the apparatus further includes a storage component
- the processing component 12 is further configured to, when the first location information indicates that the object is located at a preset location, perform a Action recognition processing is performed to determine whether the object performs a predetermined behavior; if the object performs a predetermined behavior, the video clip recording the predetermined behavior in the detection video is saved to the storage component.
- the first position information includes the position information of a second selection box framing the body part of the object, and the second position information includes the position information of a third selection box framing the target item. The processing component 12 is further configured to determine, according to the position information, that the first object is the target object matching the target item, wherein the first object is any object.
- the processing component 12 is further configured to generate a passing record when the object passes through the passing channel.
- the processing component 12 is specifically configured to generate a passing record when the target object and the target item matching the target object pass through the passageway.
- the present disclosure also provides a passage detection apparatus, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any passage detection method provided by the present disclosure.
- the writing order of the steps does not imply a strict execution order or constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- the functions or modules included in the apparatuses provided in the embodiments of the present disclosure may be used to execute the methods described in the above method embodiments.
- Embodiments of the present disclosure further provide a computer-readable storage medium, on which computer program instructions are stored, and when the computer program instructions are executed by a processor, the foregoing method is implemented.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing instructions executable by the processor; wherein the processor is configured to perform the above method.
- the electronic device may be provided as a terminal, server or other form of device.
- FIG. 8 is a block diagram of an electronic device 800 according to an exemplary embodiment.
- electronic device 800 may be a terminal such as a mobile phone, computer, digital broadcast terminal, messaging device, game console, tablet device, medical device, fitness device, or personal digital assistant.
- an electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power supply component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814 , and the communication component 816 .
- the processing component 802 generally controls the overall operation of the electronic device 800, such as operations associated with display, phone calls, data communications, camera operations, and recording operations.
- the processing component 802 can include one or more processors 820 to execute instructions to perform all or some of the steps of the methods described above.
- processing component 802 may include one or more modules that facilitate interaction between processing component 802 and other components.
- processing component 802 may include a multimedia module to facilitate interaction between multimedia component 808 and processing component 802.
- Memory 804 is configured to store various types of data to support operation at electronic device 800. Examples of such data include instructions for any application or method operating on electronic device 800, contact data, phonebook data, messages, pictures, videos, and the like. Memory 804 may be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk or optical disk.
- Power supply assembly 806 provides power to various components of electronic device 800 .
- Power supply components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power to electronic device 800 .
- Multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from a user.
- the touch panel includes one or more touch sensors to sense touch, swipe, and gestures on the touch panel. The touch sensor may not only sense the boundaries of a touch or swipe action, but also detect the duration and pressure associated with the touch or swipe action.
- the multimedia component 808 includes a front-facing camera and/or a rear-facing camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera may receive external multimedia data. Each of the front and rear cameras can be a fixed optical lens system or have focusing and optical zoom capability.
- Audio component 810 is configured to output and/or input audio signals.
- audio component 810 includes a microphone (MIC) that is configured to receive external audio signals when electronic device 800 is in operating modes, such as calling mode, recording mode, and voice recognition mode.
- the received audio signal may be further stored in memory 804 or transmitted via communication component 816 .
- audio component 810 also includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module, which may be a keyboard, a click wheel, a button, or the like. These buttons may include, but are not limited to: home button, volume buttons, start button, and lock button.
- Sensor assembly 814 includes one or more sensors for providing status assessment of various aspects of electronic device 800 .
- the sensor assembly 814 can detect the on/off state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; the sensor assembly 814 can also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and changes in the temperature of the electronic device 800.
- Sensor assembly 814 may include a proximity sensor configured to detect the presence of nearby objects in the absence of any physical contact.
- Sensor assembly 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
- Communication component 816 is configured to facilitate wired or wireless communication between electronic device 800 and other devices.
- Electronic device 800 may access wireless networks based on communication standards, such as WiFi, 2G or 3G, or a combination thereof.
- the communication component 816 receives broadcast signals or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 also includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module may be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- electronic device 800 may be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors or other electronic components, for performing the above method.
- a non-volatile computer-readable storage medium such as a memory 804 comprising computer program instructions executable by the processor 820 of the electronic device 800 to perform the above method is also provided.
- FIG. 9 is a block diagram of an electronic device 1900 according to an exemplary embodiment.
- the electronic device 1900 may be provided as a server.
- electronic device 1900 includes processing component 1922, which further includes one or more processors, and a memory resource represented by memory 1932 for storing instructions executable by processing component 1922, such as applications.
- An application program stored in memory 1932 may include one or more modules, each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply assembly 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input output (I/O) interface 1958 .
- Electronic device 1900 may operate based on an operating system stored in memory 1932, such as Windows Server™, Mac OS X™, Unix™, Linux™, FreeBSD™ or the like.
- a non-volatile computer-readable storage medium, such as memory 1932 comprising computer program instructions executable by processing component 1922 of electronic device 1900 to perform the above-described method, is also provided.
- An embodiment of the present disclosure further provides a computer program including computer-readable code; when the computer-readable code is executed in an electronic device, a processor in the electronic device executes the steps configured to implement the passage detection method of the above embodiments.
- the present disclosure may be a system, method and/or computer program product.
- the computer program product may include a computer-readable storage medium having computer-readable program instructions loaded thereon for causing a processor to implement various aspects of the present disclosure.
- a computer-readable storage medium may be a tangible device that can hold and store instructions for use by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- A non-exhaustive list of computer-readable storage media includes: a portable computer disk, a hard disk, a random access memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a static random access memory (SRAM), a portable compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), a memory stick, a floppy disk, a mechanically encoded device such as a punch card or a raised structure in a groove having instructions stored thereon, and any suitable combination of the foregoing.
- Computer-readable storage media are not to be construed as transient signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through waveguides or other transmission media (e.g., light pulses through fiber-optic cables), or electrical signals transmitted through wires.
- the computer readable program instructions described herein may be downloaded to various computing/processing devices from a computer readable storage medium, or to an external computer or external storage device over a network such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- A network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards them for storage in a computer-readable storage medium within the respective computing/processing device.
- Computer program instructions for carrying out operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-dependent instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server.
- The remote computer may be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or may be connected to an external computer (e.g., through the Internet using an Internet service provider).
- Custom electronic circuits, such as programmable logic circuits, field-programmable gate arrays (FPGAs), or programmable logic arrays (PLAs), can be personalized by utilizing state information of the computer-readable program instructions, and these electronic circuits can execute the computer-readable program instructions to implement various aspects of the present disclosure.
- These computer-readable program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, when executed by the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- These computer-readable program instructions may also be stored in a computer-readable storage medium. The instructions cause a computer, a programmable data processing apparatus, and/or other devices to operate in a specific manner, such that the computer-readable medium having the instructions stored thereon comprises an article of manufacture including instructions that implement various aspects of the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- The computer-readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices, causing a series of operational steps to be performed on the computer, other programmable apparatus, or other devices to produce a computer-implemented process, such that the instructions executing on the computer, other programmable apparatus, or other devices implement the functions/acts specified in one or more blocks of the flowcharts and/or block diagrams.
- Each block in the flowcharts or block diagrams may represent a module, a program segment, or a portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s).
- the functions noted in the blocks may occur out of the order noted in the figures. For example, two blocks in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved.
- Each block of the block diagrams and/or flowchart illustrations, and combinations of blocks in the block diagrams and/or flowchart illustrations, can be implemented by dedicated hardware-based systems that perform the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Human Computer Interaction (AREA)
- Image Analysis (AREA)
- Time Recorders, Drive Recorders, Access Control (AREA)
- Closed-Circuit Television Systems (AREA)
- Alarm Systems (AREA)
Abstract
Description
Claims (18)
- A pass-through detection method, comprising: performing detection processing on video frames of a detection video to obtain first position information of objects to pass through in the video frames and second position information of a target item, the detection video being a video captured at a passage channel; determining, among the objects to pass through, a target object matching the target item according to the first position information and the second position information; and generating a detection result at least according to target position information of the target object, and sending the detection result to an access control system of the passage channel.
- The method according to claim 1, further comprising: receiving verification information from the access control system when the target position information indicates that the target object is located at a preset position; wherein generating the detection result at least according to the target position information of the target object comprises: generating the detection result when the verification information indicates that verification has passed.
- The method according to claim 2, wherein the first position information comprises position information of a first selection box framing the head and shoulders of an object, and performing detection processing on the video frames of the detection video to obtain the first position information of the objects to pass through comprises: performing detection processing on the video frames of the detection video to obtain the position information of the first selection box framing the head and shoulders of the object.
- The method according to claim 3, wherein the preset position comprises a preset area outside the access control system of the passage channel, and receiving the verification information from the access control system when the target position information indicates that the target object is located at the preset position comprises: determining position information of first selection boxes of a plurality of objects in the video frame; and receiving the verification information from the access control system when the first selection box of the target object is located within the preset area, or when the intersection-over-union of the first selection box and the preset area is greater than or equal to a first threshold.
- The method according to claim 1, further comprising: performing action recognition processing on the object when the first position information indicates that the object is located at a preset position, to determine whether the object performs a predetermined behavior; and when the object performs the predetermined behavior, saving a video clip of the detection video in which the predetermined behavior is recorded.
- The method according to claim 1, wherein the first position information comprises position information of a second selection box framing a body part of the object, the second position information comprises position information of a third selection box framing the target item, and matching the objects to pass through with the target item according to the first position information and the second position information to determine a target object having the target item comprises: determining a first object as the target object matching the target item when the intersection-over-union of the second selection box of the first object and the third selection box is greater than or equal to a second threshold, wherein the first object is any one of the objects.
- The method according to claim 1, further comprising: generating a passage record when the object passes through the passage channel.
- The method according to claim 7, wherein generating the passage record when the object passes through the passage channel comprises: generating the passage record when the target object and the target item matching the target object pass through the passage channel.
- A pass-through detection apparatus, comprising an image acquisition component and a processing component, wherein: the image acquisition component is configured to acquire a detection video of a passage channel; and the processing component is configured to perform detection processing on video frames of the detection video to obtain first position information of objects to pass through in the video frames and second position information of a target item, match the objects to pass through with the target item according to the first position information and the second position information to determine a target object having the target item, generate a detection result at least according to target position information of the target object, and send the detection result to an access control system of the passage channel.
- The apparatus according to claim 9, wherein the processing component is further configured to receive verification information from the access control system when the target position information indicates that the target object is located at a preset position; and generating the detection result at least according to the target position information of the target object comprises: generating the detection result when the verification information indicates that verification has passed.
- The apparatus according to claim 10, wherein the first position information comprises position information of a first selection box framing the head and shoulders of an object, and the processing component is specifically configured to perform detection processing on the video frames of the detection video to obtain the position information of the first selection box framing the head and shoulders of the object.
- The apparatus according to claim 11, wherein the preset position comprises a preset area outside the access control system of the passage channel, and the processing component is specifically configured to: determine position information of first selection boxes of a plurality of objects in the video frame; and receive the verification information from the access control system when the first selection box of the target object is located within the preset area, or when the intersection-over-union of the first selection box and the preset area is greater than or equal to a first threshold.
- The apparatus according to claim 9, further comprising a storage component, wherein the processing component is further configured to: perform action recognition processing on the object when the first position information indicates that the object is located at a preset position, to determine whether the object performs a predetermined behavior; and when the object performs the predetermined behavior, save a video clip of the detection video in which the predetermined behavior is recorded to the storage component.
- The apparatus according to claim 9, wherein the first position information comprises position information of a second selection box framing a body part of the object, the second position information comprises position information of a third selection box framing the target item, and the processing component is further configured to determine a first object as the target object matching the target item when the intersection-over-union of the second selection box of the first object and the third selection box is greater than or equal to a second threshold, wherein the first object is any one of the objects.
- The apparatus according to claim 9, wherein the processing component is further configured to generate a passage record when the object passes through the passage channel.
- The apparatus according to claim 15, wherein the processing component is specifically configured to generate the passage record when the target object and the target item matching the target object pass through the passage channel.
- An electronic device, comprising: a processor; and a memory for storing processor-executable instructions; wherein the processor is configured to execute the method according to any one of claims 1 to 8.
- A computer-readable storage medium having computer program instructions stored thereon, wherein the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 8.
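The flow of claim 1 — per-frame detection processing, object–item matching, and reporting the result to the access control system — can be sketched as follows. This is an illustrative outline only, not the patent's implementation; the `Detections` container and the injectable `detector`, `matcher`, and `send_to_access_control` callables are hypothetical placeholders.

```python
from dataclasses import dataclass

@dataclass
class Detections:
    object_boxes: list   # first position information: one (x1, y1, x2, y2) box per object
    item_box: tuple      # second position information: box of the target item

def process_frame(frame, detector, matcher, send_to_access_control):
    """Detect boxes in `frame`, pick the target object, and report its position."""
    dets = detector(frame)                                   # detection processing
    target_idx = matcher(dets.object_boxes, dets.item_box)   # object-item matching
    if target_idx is None:
        return None
    result = {
        "target_object": target_idx,
        "target_position": dets.object_boxes[target_idx],
    }
    send_to_access_control(result)  # detection result to the passage's access control system
    return result
```

Making the detector and matcher injectable keeps the sketch independent of any particular detection model or matching rule.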
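The trigger condition of claim 4 — receive verification information when the target object's first selection box lies within the preset area, or its intersection-over-union with the area meets the first threshold — can be sketched as below. The box format, function names, and default threshold are assumptions for illustration, not values from the patent.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def inside(inner, outer):
    """True if box `inner` lies entirely within box `outer`."""
    return (inner[0] >= outer[0] and inner[1] >= outer[1]
            and inner[2] <= outer[2] and inner[3] <= outer[3])

def should_request_verification(head_shoulder_box, preset_area, first_threshold=0.5):
    """Claim 4 condition: the first selection box is inside the preset area,
    or its IoU with the preset area reaches the first threshold."""
    return (inside(head_shoulder_box, preset_area)
            or iou(head_shoulder_box, preset_area) >= first_threshold)
```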
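The matching rule of claim 6 — a first object is the target object when the intersection-over-union of its second selection box (body part) and the target item's third selection box reaches the second threshold — can be sketched as below. Function names and the default threshold are assumptions for illustration.

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes (x1, y1, x2, y2)."""
    ix1, iy1 = max(box_a[0], box_b[0]), max(box_a[1], box_b[1])
    ix2, iy2 = min(box_a[2], box_b[2]), min(box_a[3], box_b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
             + (box_b[2] - box_b[0]) * (box_b[3] - box_b[1]) - inter)
    return inter / union if union > 0 else 0.0

def match_target_object(body_boxes, item_box, second_threshold=0.3):
    """Return the index of the first object whose body box overlaps the
    target item's box with IoU >= second_threshold, else None."""
    for idx, body_box in enumerate(body_boxes):
        if iou(body_box, item_box) >= second_threshold:
            return idx
    return None
```

A high body-box/item-box overlap is a simple proxy for "the object is carrying the item", which is why an IoU threshold suffices here.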
Priority Applications (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2022538311A JP2023514762A (ja) | 2021-01-28 | 2021-07-16 | 通行検出方法及びその装置、電子デバイス並びにコンピュータ可読記憶媒体 |
KR1020227018215A KR20220110743A (ko) | 2021-01-28 | 2021-07-16 | 통행 검출 방법 및 장치, 전자 기기 및 컴퓨터 판독 가능 저장 매체 |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202110118334.9 | 2021-01-28 | ||
CN202110118334.9A CN112837454A (zh) | 2021-01-28 | 2021-01-28 | 通行检测方法及装置、电子设备和存储介质 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2022160616A1 true WO2022160616A1 (zh) | 2022-08-04 |
Family
ID=75932193
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/CN2021/106907 WO2022160616A1 (zh) | 2021-01-28 | 2021-07-16 | 通行检测方法及装置、电子设备和计算机可读存储介质 |
Country Status (4)
Country | Link |
---|---|
JP (1) | JP2023514762A (zh) |
KR (1) | KR20220110743A (zh) |
CN (1) | CN112837454A (zh) |
WO (1) | WO2022160616A1 (zh) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117314077A (zh) * | 2023-09-26 | 2023-12-29 | 深圳市威盟盛科技有限公司 | 一种基于大数据的通道闸机数据监管方法及系统 |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112837454A (zh) * | 2021-01-28 | 2021-05-25 | 深圳市商汤科技有限公司 | 通行检测方法及装置、电子设备和存储介质 |
CN113464016B (zh) * | 2021-06-30 | 2023-04-07 | 重庆工业职业技术学院 | 监控控制系统 |
CN113656843B (zh) * | 2021-08-18 | 2022-08-12 | 北京百度网讯科技有限公司 | 信息验证方法、装置、设备和介质 |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004124497A (ja) * | 2002-10-02 | 2004-04-22 | Tokai Riken Kk | 本人確認と連れ込み防止の機能を備えた入退室管理システム |
CN111192391A (zh) * | 2018-10-25 | 2020-05-22 | 杭州海康威视数字技术股份有限公司 | 基于图像和/或视频的人行通道闸机控制方法和设备 |
CN111814612A (zh) * | 2020-06-24 | 2020-10-23 | 浙江大华技术股份有限公司 | 目标的脸部检测方法及其相关装置 |
CN112083678A (zh) * | 2020-08-25 | 2020-12-15 | 深圳市格灵人工智能与机器人研究院有限公司 | 闸机安全控制方法、装置、电子设备及存储介质 |
CN112837454A (zh) * | 2021-01-28 | 2021-05-25 | 深圳市商汤科技有限公司 | 通行检测方法及装置、电子设备和存储介质 |
Family Cites Families (10)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105095847A (zh) * | 2014-05-16 | 2015-11-25 | 北京天诚盛业科技有限公司 | 用于移动终端的虹膜识别方法和装置 |
CN106023372A (zh) * | 2016-05-23 | 2016-10-12 | 三峡大学 | 基于人脸识别的图书馆出入管理系统 |
CN106760967A (zh) * | 2017-01-14 | 2017-05-31 | 聚鑫智能科技(武汉)股份有限公司 | 一种无锁孔全自动智能门 |
CN107995471A (zh) * | 2017-12-25 | 2018-05-04 | 天津天地伟业电子工业制造有限公司 | 一种图像识别门禁摄像机 |
CN108416776B (zh) * | 2018-03-16 | 2021-04-30 | 京东方科技集团股份有限公司 | 图像识别方法、图像识别装置、计算机产品和可读存储介质 |
CN108428273A (zh) * | 2018-05-21 | 2018-08-21 | 唐旭 | 一种智能家居门禁系统及其控制方法 |
CN110765830B (zh) * | 2019-06-12 | 2022-11-04 | 天津新泰基业电子股份有限公司 | 一种人脸全自助注册方法、系统、介质及设备 |
CN211015680U (zh) * | 2020-03-11 | 2020-07-14 | 李晓坤 | 一种基于身份识别的无人体温验证终端 |
CN111723770B (zh) * | 2020-06-30 | 2020-12-18 | 四川兴事发门窗有限责任公司 | 基于图像识别的防尾随闸机系统及方法 |
CN112233289A (zh) * | 2020-09-04 | 2021-01-15 | 常州方可为机械科技有限公司 | 用于闸机门的智能驱动系统 |
-
2021
- 2021-01-28 CN CN202110118334.9A patent/CN112837454A/zh active Pending
- 2021-07-16 JP JP2022538311A patent/JP2023514762A/ja not_active Withdrawn
- 2021-07-16 KR KR1020227018215A patent/KR20220110743A/ko unknown
- 2021-07-16 WO PCT/CN2021/106907 patent/WO2022160616A1/zh active Application Filing
Patent Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004124497A (ja) * | 2002-10-02 | 2004-04-22 | Tokai Riken Kk | 本人確認と連れ込み防止の機能を備えた入退室管理システム |
CN111192391A (zh) * | 2018-10-25 | 2020-05-22 | 杭州海康威视数字技术股份有限公司 | 基于图像和/或视频的人行通道闸机控制方法和设备 |
CN111814612A (zh) * | 2020-06-24 | 2020-10-23 | 浙江大华技术股份有限公司 | 目标的脸部检测方法及其相关装置 |
CN112083678A (zh) * | 2020-08-25 | 2020-12-15 | 深圳市格灵人工智能与机器人研究院有限公司 | 闸机安全控制方法、装置、电子设备及存储介质 |
CN112837454A (zh) * | 2021-01-28 | 2021-05-25 | 深圳市商汤科技有限公司 | 通行检测方法及装置、电子设备和存储介质 |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117314077A (zh) * | 2023-09-26 | 2023-12-29 | 深圳市威盟盛科技有限公司 | 一种基于大数据的通道闸机数据监管方法及系统 |
Also Published As
Publication number | Publication date |
---|---|
KR20220110743A (ko) | 2022-08-09 |
CN112837454A (zh) | 2021-05-25 |
JP2023514762A (ja) | 2023-04-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2022160616A1 (zh) | 通行检测方法及装置、电子设备和计算机可读存储介质 | |
US11410001B2 (en) | Method and apparatus for object authentication using images, electronic device, and storage medium | |
EP3163498B1 (en) | Alarming method and device | |
WO2017000518A1 (zh) | 一种提供对象找回信息的方法和装置 | |
WO2022183661A1 (zh) | 事件检测方法、装置、电子设备、存储介质及程序产品 | |
US20170300503A1 (en) | Method and apparatus for managing video data, terminal, and server | |
US10425403B2 (en) | Method and device for accessing smart camera | |
WO2018058373A1 (zh) | 用于电子设备的控制方法、装置及电子设备 | |
US20170289181A1 (en) | Payment method, apparatus and medium | |
CN110675539B (zh) | 身份核验方法及装置、电子设备和存储介质 | |
JP2018510398A (ja) | 事象関連データ監視システム | |
US10157227B2 (en) | Apparatus and method for generating a summary image by image processing | |
WO2022099989A1 (zh) | 活体识别、门禁设备控制方法和装置、电子设备和存储介质、计算机程序 | |
WO2022160569A1 (zh) | 安检异常事件检测方法及装置、电子设备和存储介质 | |
WO2022134388A1 (zh) | 乘车逃票检测方法及装置、电子设备、存储介质、计算机程序产品 | |
CN109842612B (zh) | 基于图库模型的日志安全分析方法、装置及存储介质 | |
CN111274426A (zh) | 类别标注方法及装置、电子设备和存储介质 | |
WO2018228422A1 (zh) | 一种发出预警信息的方法、装置及系统 | |
TWI761843B (zh) | 門禁控制方法及裝置、電子設備和儲存介質 | |
TWI766458B (zh) | 資訊識別方法及裝置、電子設備、儲存媒體 | |
WO2022142330A1 (zh) | 一种身份认证方法及装置、电子设备和存储介质 | |
WO2022183663A1 (zh) | 事件检测方法、装置、电子设备、存储介质及程序产品 | |
CN110781842A (zh) | 图像处理方法及装置、电子设备和存储介质 | |
US10095911B2 (en) | Methods, devices, and computer-readable mediums for verifying a fingerprint | |
EP3729851A1 (en) | Method for detecting the possible taking of screenshots |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2022538311 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 21922213 Country of ref document: EP Kind code of ref document: A1 |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 15/11/2023) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 21922213 Country of ref document: EP Kind code of ref document: A1 |