CN112149475B - Luggage case verification method, device, system and storage medium - Google Patents


Info

Publication number
CN112149475B
CN112149475B (application CN201910577854.9A)
Authority
CN
China
Prior art keywords
image
pedestrian
luggage
pedestrians
trunk
Prior art date
Legal status
Active
Application number
CN201910577854.9A
Other languages
Chinese (zh)
Other versions
CN112149475A (en)
Inventor
邓凡 (Deng Fan)
Current Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Original Assignee
Hangzhou Hikvision Digital Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Hangzhou Hikvision Digital Technology Co Ltd filed Critical Hangzhou Hikvision Digital Technology Co Ltd
Priority claimed from application CN201910577854.9A
Publication of CN112149475A
Application granted
Publication of CN112149475B

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/10 Terrestrial scenes
    • G06V 20/35 Categorising the entire scene, e.g. birthday party or wedding scene
    • G06V 20/36 Indoor scenes
    • G06V 40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V 40/16 Human faces, e.g. facial parts, sketches or expressions
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G08B 21/18 Status alarms
    • G08B 21/24 Reminder alarms, e.g. anti-loss alarms

Landscapes

  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Health & Medical Sciences (AREA)
  • Business, Economics & Management (AREA)
  • Emergency Management (AREA)
  • Image Analysis (AREA)
  • Alarm Systems (AREA)
  • Traffic Control Systems (AREA)

Abstract

The application discloses a luggage case verification method, device, system, and storage medium, belonging to the field of video surveillance. The method comprises: acquiring a first image containing a pedestrian collected by a first camera; if the pedestrian in the first image carries a luggage case, establishing a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian; acquiring a second image containing a pedestrian collected by a second camera; and if the pedestrian in the second image carries a luggage case, generating first alarm information indicating that the wrong luggage case has been taken when it is detected, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case. Compared with verifying the luggage carried by pedestrians manually, this effectively reduces the probability of luggage being taken in error.

Description

Luggage case verification method, device, system and storage medium
Technical Field
The application relates to the field of video monitoring, in particular to a luggage case verification method, a luggage case verification device, a luggage case verification system and a storage medium.
Background
In public areas such as airports, railway stations, or bus stations, passengers often carry luggage. To make it convenient to board a vehicle such as an airplane, train, or bus, passengers need to check in large luggage cases before departure.
After arriving at the destination, the passenger needs to retrieve the checked luggage. At present, however, whether the luggage a passenger retrieves is the same as the luggage that passenger checked in is determined only by manual verification, so luggage is very easily taken in error.
Disclosure of Invention
The embodiments of the application provide a luggage case verification method, device, system, and storage medium, which can solve the problem in the prior art that luggage is easily taken in error. The technical solution is as follows:
In a first aspect, a luggage case verification method is provided, the method comprising:
acquiring a first image containing a pedestrian collected by a first camera;
if the pedestrian in the first image carries a luggage case, establishing a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian;
acquiring a second image containing a pedestrian collected by a second camera; and
if the pedestrian in the second image carries a luggage case, generating first alarm information indicating that the wrong luggage case has been taken when it is detected, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case.
Optionally, after the acquiring of the second image containing the pedestrian collected by the second camera, if the pedestrian in the second image carries a luggage case, the method further comprises:
when a candidate pedestrian matching the pedestrian in the second image is found in the first association relationship, determining a candidate luggage case corresponding to the candidate pedestrian based on the first association relationship;
detecting whether the luggage case carried by the pedestrian in the second image matches the candidate luggage case; and
if it is detected that the luggage case carried by the pedestrian in the second image does not match the candidate luggage case, determining that the pedestrian in the second image has taken the wrong luggage case.
Optionally, after the acquiring of the second image containing the pedestrian collected by the second camera, if the pedestrian in the second image carries a luggage case, the method further comprises:
when a candidate luggage case matching the luggage case carried by the pedestrian in the second image is found in the first association relationship, determining a candidate pedestrian corresponding to the candidate luggage case based on the first association relationship;
detecting whether the pedestrian in the second image matches the candidate pedestrian; and
if it is detected that the pedestrian in the second image does not match the candidate pedestrian, determining that the pedestrian in the second image has taken the wrong luggage case.
Optionally, after the acquiring of the second image containing the pedestrian collected by the second camera, the method further comprises:
if the pedestrian in the second image does not carry a luggage case, generating second alarm information indicating that a luggage case has not been collected when it is detected, based on the first association relationship, that the pedestrian in the second image has failed to collect a luggage case.
Optionally, after the acquiring of the second image containing the pedestrian collected by the second camera, if the pedestrian in the second image does not carry a luggage case, the method further comprises:
when a candidate pedestrian matching the pedestrian in the second image is found in the first association relationship, determining that the pedestrian in the second image has failed to collect the luggage case.
Optionally, after the acquiring of the first image containing the pedestrian collected by the first camera, the method further comprises:
if the pedestrian in the first image does not carry a luggage case, establishing a second association relationship between the pedestrian in the first image and an indication tag, the indication tag indicating that the pedestrian in the first image does not carry a luggage case;
and after the acquiring of the second image containing the pedestrian collected by the second camera, the method further comprises:
if the pedestrian in the second image carries a luggage case, generating third alarm information indicating that the wrong luggage case has been taken when it is detected, based on the second association relationship, that the pedestrian in the second image has taken the wrong luggage case.
Optionally, after the acquiring of the second image containing the pedestrian collected by the second camera, if the pedestrian in the second image carries a luggage case, the method further comprises:
when a candidate pedestrian matching the pedestrian in the second image is found in the second association relationship, determining that the pedestrian in the second image has taken the wrong luggage case.
Optionally, establishing the first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian comprises:
extracting face features of the pedestrian in the first image to obtain a face feature value of the pedestrian in the first image;
acquiring identity information of the pedestrian in the first image based on the face feature value;
associating the pedestrian in the first image with the identity information of the pedestrian; and
establishing the first association relationship based on the first image and the associated identity information of the pedestrian in the first image.
Optionally, generating the first alarm information indicating that the wrong luggage case has been taken comprises:
acquiring, from the first association relationship, identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and
generating the first alarm information based on the identity information of the candidate pedestrian.
Generating the second alarm information indicating that a luggage case has not been collected comprises:
acquiring, from the first association relationship, identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and
generating the second alarm information based on the identity information of the candidate pedestrian.
Optionally, establishing the second association relationship between the pedestrian in the first image and the indication tag comprises:
extracting face features of the pedestrian in the first image to obtain a face feature value of the pedestrian in the first image;
acquiring identity information of the pedestrian in the first image based on the face feature value;
associating the pedestrian in the first image with the identity information of the pedestrian; and
establishing the second association relationship based on the first image and the associated identity information of the pedestrian in the first image.
Optionally, generating the third alarm information indicating that the wrong luggage case has been taken comprises:
acquiring, from the second association relationship, identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and
generating the third alarm information based on the identity information of the candidate pedestrian.
In a second aspect, a luggage case verification device is provided, the device comprising:
a first acquisition module, configured to acquire a first image containing a pedestrian collected by a first camera;
a first establishing module, configured to establish a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian if the pedestrian in the first image carries a luggage case;
a second acquisition module, configured to acquire a second image containing a pedestrian collected by a second camera; and
a first generation module, configured to, if the pedestrian in the second image carries a luggage case, generate first alarm information indicating that the wrong luggage case has been taken when it is detected, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case.
Optionally, the device further comprises:
a first determining module, configured to, if the pedestrian in the second image carries a luggage case, determine a candidate luggage case corresponding to a candidate pedestrian based on the first association relationship when the candidate pedestrian matching the pedestrian in the second image is found in the first association relationship;
a first detection module, configured to detect whether the luggage case carried by the pedestrian in the second image matches the candidate luggage case; and
a second determining module, configured to determine that the pedestrian in the second image has taken the wrong luggage case if it is detected that the luggage case carried by the pedestrian in the second image does not match the candidate luggage case.
Optionally, the device further comprises:
a third determining module, configured to, if the pedestrian in the second image carries a luggage case, determine a candidate pedestrian corresponding to a candidate luggage case based on the first association relationship when the candidate luggage case matching the luggage case carried by the pedestrian in the second image is found in the first association relationship;
a second detection module, configured to detect whether the pedestrian in the second image matches the candidate pedestrian; and
a fourth determining module, configured to determine that the pedestrian in the second image has taken the wrong luggage case if it is detected that the pedestrian in the second image does not match the candidate pedestrian.
Optionally, the device further comprises:
a second generation module, configured to, if the pedestrian in the second image does not carry a luggage case, generate second alarm information indicating that a luggage case has not been collected when it is detected, based on the first association relationship, that the pedestrian in the second image has failed to collect a luggage case.
Optionally, the device further comprises:
a fifth determining module, configured to, if the pedestrian in the second image does not carry a luggage case, determine that the pedestrian in the second image has failed to collect the luggage case when a candidate pedestrian matching the pedestrian in the second image is found in the first association relationship.
Optionally, the device further comprises:
a second establishing module, configured to establish a second association relationship between the pedestrian in the first image and an indication tag if the pedestrian in the first image does not carry a luggage case, the indication tag indicating that the pedestrian in the first image does not carry a luggage case; and
a third generation module, configured to, if the pedestrian in the second image carries a luggage case, generate third alarm information indicating that the wrong luggage case has been taken when it is detected, based on the second association relationship, that the pedestrian in the second image has taken the wrong luggage case.
Optionally, the device further comprises:
a sixth determining module, configured to, if the pedestrian in the second image carries a luggage case, determine that the pedestrian in the second image has taken the wrong luggage case when a candidate pedestrian matching the pedestrian in the second image is found in the second association relationship.
Optionally, the first establishing module is configured to:
extract face features of the pedestrian in the first image to obtain a face feature value of the pedestrian in the first image;
acquire identity information of the pedestrian in the first image based on the face feature value;
associate the pedestrian in the first image with the identity information of the pedestrian; and
establish the first association relationship based on the first image and the associated identity information of the pedestrian in the first image.
Optionally, the first generation module is configured to:
acquire, from the first association relationship, identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and
generate the first alarm information based on the identity information of the candidate pedestrian.
The second generation module is configured to:
acquire, from the first association relationship, identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and
generate the second alarm information based on the identity information of the candidate pedestrian.
Optionally, the second establishing module is configured to:
extract face features of the pedestrian in the first image to obtain a face feature value of the pedestrian in the first image;
acquire identity information of the pedestrian in the first image based on the face feature value;
associate the pedestrian in the first image with the identity information of the pedestrian; and
establish the second association relationship based on the first image and the associated identity information of the pedestrian in the first image.
Optionally, the third generation module is configured to:
acquire, from the second association relationship, identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and
generate the third alarm information based on the identity information of the candidate pedestrian.
In a third aspect, a luggage case verification system is provided, comprising a first camera, a second camera, and a verification server, wherein:
the first camera is configured to collect a first image containing a pedestrian and send the first image to the verification server;
the verification server is configured to acquire the first image sent by the first camera;
the verification server is configured to establish a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian if the pedestrian in the first image carries a luggage case;
the second camera is configured to collect a second image containing a pedestrian and send the second image to the verification server;
the verification server is configured to acquire the second image sent by the second camera; and
the verification server is configured to, if the pedestrian in the second image carries a luggage case, generate first alarm information indicating that the wrong luggage case has been taken when it detects, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case.
In a fourth aspect, a computer-readable storage medium is provided, in which at least one instruction is stored, the instruction being loaded and executed by a processor to implement the luggage case verification method of any one of the first aspect.
The technical solutions provided by the embodiments of the application yield at least the following beneficial effects:
When a pedestrian enters the station, the verification server can acquire a first image through the first camera, and if the pedestrian in the first image carries a luggage case, the verification server establishes a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian. When the pedestrian exits the station, the verification server can acquire a second image through the second camera. If the pedestrian in the second image carries a luggage case, the verification server can determine, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case, and after detecting that this is so, generates first alarm information indicating that the wrong luggage case has been taken. Compared with verifying the luggage carried by pedestrians manually, this effectively reduces the probability of luggage being taken in error.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present application, the drawings required for the description of the embodiments will be briefly described below, and it is apparent that the drawings in the following description are only some embodiments of the present application, and other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a schematic structural diagram of a luggage verification system according to an embodiment of the present application;
FIG. 2 is a flowchart of a luggage case verification method provided by an embodiment of the present application;
FIG. 3 is a flowchart of another luggage case verification method provided by an embodiment of the present application;
FIG. 4 is an effect diagram of a first image processed by an object detection algorithm according to an embodiment of the present application;
FIG. 5 is a flowchart of a method for establishing a first association according to an embodiment of the present application;
FIG. 6 is a flowchart of a method for establishing a second association according to an embodiment of the present application;
FIG. 7 is a block diagram of a luggage verification device provided by an embodiment of the present application;
fig. 8 is a block diagram of another luggage verification device provided by an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the embodiments of the present application will be described in further detail with reference to the accompanying drawings.
Referring to fig. 1, fig. 1 is a schematic structural diagram of a luggage verification system according to an embodiment of the present application. The luggage verification system 100 may include: an authentication server 101, a first camera 102, and a second camera 103.
The verification server 101 may be a single server, a server cluster comprising several servers, or a cloud computing service center.
The first camera 102 may be a snapshot camera, a monitoring camera, or a camera cluster consisting of several snapshot cameras and monitoring cameras. In an embodiment of the present application, the first camera 102 may be deployed at a station entrance in a public area such as an airport, railway station, or bus station.
The second camera 103 may likewise be a snapshot camera, a monitoring camera, or a camera cluster consisting of several snapshot cameras and monitoring cameras. In an embodiment of the present application, the second camera 103 may be deployed at a station exit in a public area such as an airport, railway station, or bus station.
The first camera 102 establishes a communication connection with the verification server 101, and the second camera 103 establishes a communication connection with the verification server 101. It should be noted that, in the embodiments of the present application, a communication connection may be established through a wired or wireless network.
Referring to fig. 2, fig. 2 is a flowchart of a luggage case verification method according to an embodiment of the application. The method is applied to the verification server 101 in the luggage case verification system 100 shown in fig. 1, and may include the following steps:
step 201, a first image including a pedestrian acquired by a first camera is acquired.
In the embodiment of the application, the first camera may be deployed at a station entrance in a public area such as an airport, railway station, or bus station. When a pedestrian is near the station entrance, the first camera can photograph the pedestrian, so that the verification server can acquire a first image, captured by the first camera, that contains the pedestrian.
Step 202, if the pedestrian in the first image carries a luggage case, establishing a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian.
Step 203, a second image including the pedestrian acquired by the second camera is acquired.
In the embodiment of the application, the second camera may be deployed at a station exit in a public area such as an airport, railway station, or bus station. When a pedestrian is near the station exit, the second camera can photograph the pedestrian, so that the verification server can acquire a second image, captured by the second camera, that contains the pedestrian.
Step 204, if the pedestrian in the second image carries a luggage case, generating first alarm information indicating that the wrong luggage case has been taken when it is detected, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case.
In summary, in the luggage case verification method provided by the embodiment of the application, when a pedestrian enters the station, the verification server can acquire a first image through the first camera, and if the pedestrian in the first image carries a luggage case, the verification server establishes a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian. When the pedestrian exits the station, the verification server can acquire a second image through the second camera. If the pedestrian in the second image carries a luggage case, the verification server can determine, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case, and after detecting that this is so, generates first alarm information indicating that the wrong luggage case has been taken. Compared with verifying the luggage carried by pedestrians manually, this effectively reduces the probability of luggage being taken in error.
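The entry/exit flow of steps 201 to 204 can be sketched in a few lines of Python. This is an illustrative sketch only, not code from the patent: the identifiers, the in-memory dictionary standing in for the first association relationship, and the alarm strings are all assumptions.

```python
# Minimal sketch of the verification flow in steps 201-204.
# All names and data structures are illustrative, not from the patent.

associations = {}  # pedestrian ID -> luggage ID: the "first association relationship"

def on_entry(pedestrian_id, luggage_id):
    """Steps 201/202: record the pedestrian-luggage pair at the entrance camera."""
    if luggage_id is not None:
        associations[pedestrian_id] = luggage_id

def on_exit(pedestrian_id, luggage_id):
    """Steps 203/204: verify the pair at the exit camera; return an alarm or None."""
    expected = associations.get(pedestrian_id)
    if luggage_id is not None and expected is not None and luggage_id != expected:
        return "first alarm: wrong luggage taken"
    if luggage_id is None and expected is not None:
        return "second alarm: luggage not collected"
    return None

on_entry("p1", "case-A")
print(on_exit("p1", "case-B"))  # pedestrian p1 took the wrong case
```

A real deployment would key these lookups on face and luggage feature values rather than plain IDs, as the detailed description explains.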
Referring to fig. 3, fig. 3 is a flowchart of another luggage case verification method according to an embodiment of the application. The method is applied to the verification server 101 in the luggage case verification system 100 shown in fig. 1, and may include the following steps:
step 301, acquiring a first image including a pedestrian acquired by a first camera.
In the embodiment of the present application, the first camera may be deployed at a station entrance in a public area such as an airport, railway station, or bus station, and may photograph a pedestrian when the pedestrian is near the entrance. The first camera sends the image it captures to the verification server, so that the verification server can acquire a first image, captured by the first camera, that contains the pedestrian.
Step 302, detecting whether the pedestrian in the first image carries a luggage case.
For example, if the verification server detects that the pedestrian in the first image carries a luggage case, step 303 is performed; if the verification server detects that the pedestrian in the first image does not carry a luggage case, step 304 is performed.
It should be noted that the first image captured by the first camera may or may not contain a pedestrian. The verification server needs to analyze only first images that contain a pedestrian, so it may discard first images that do not. It should further be noted that first images containing a pedestrian can be divided into: first images containing both a pedestrian and a luggage case, and first images containing only a pedestrian and no luggage case.
For example, if the verification server detects that the first image contains only a pedestrian and no luggage case, it may determine that the pedestrian in the first image does not carry a luggage case, and step 304 is performed.
If the verification server detects that the first image contains both a pedestrian and a luggage case, it needs to detect whether the area where the pedestrian is located and the area where the luggage case is located overlap in the first image. If an overlapping area exists between the two, the verification server determines that the pedestrian in the first image carries the luggage case, and step 303 is performed; if no overlapping area exists, the verification server determines that the pedestrian in the first image does not carry the luggage case, and step 304 is performed.
The verification server may process the first image with an object detection algorithm. If the first image contains only a pedestrian and no luggage case, the processed first image contains a first target frame identifying the area where the pedestrian is located; if the first image contains both a pedestrian and a luggage case, the processed first image contains both a first target frame identifying the area where the pedestrian is located and a second target frame identifying the area where the luggage case is located. For example, referring to fig. 4, fig. 4 is an effect diagram of a first image processed by an object detection algorithm according to an embodiment of the present application; in the processed first image, a first target frame 01 identifies a pedestrian 001, and a second target frame 02 identifies a luggage case 002.
In the embodiment of the application, the verification server can detect whether the processed first image contains a second target frame. If it does not, the verification server determines that the pedestrian in the first image does not carry a luggage case, and step 304 is performed. If it does, the verification server needs to detect whether the first target frame and the second target frame overlap: if they overlap, it determines that the pedestrian in the first image carries a luggage case, and step 303 is performed; if they do not overlap, it determines that the pedestrian in the first image does not carry a luggage case, and step 304 is performed.
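The overlap test between the first target frame (pedestrian) and the second target frame (luggage case) reduces to a standard axis-aligned rectangle intersection check. A minimal Python sketch follows; the patent does not specify a box format, so corner coordinates (x1, y1, x2, y2) and the sample values are assumptions.

```python
def boxes_overlap(box_a, box_b):
    """Return True if two axis-aligned boxes (x1, y1, x2, y2) share any area.
    Strict inequalities mean boxes that merely touch at an edge do not count."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

pedestrian_box = (100, 50, 220, 400)   # illustrative first target frame
luggage_box = (180, 250, 300, 420)     # illustrative second target frame
print(boxes_overlap(pedestrian_box, luggage_box))  # True -> pedestrian carries the case
```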
It should be noted that the target detection algorithm may be a deep learning algorithm, for example, the Faster R-CNN algorithm, the YOLO algorithm, or the Mask R-CNN algorithm.
It should be further noted that the first image acquired by the first camera may contain a plurality of pedestrians, in which case the verification server acquires, through the target detection algorithm, a plurality of first target frames each identifying the area where one pedestrian is located. Because each pedestrian may stand at a different distance from the first camera, the first target frames in the first image differ in size. To improve the accuracy of subsequent luggage case verification, the verification server may therefore discard any first target frame whose area is smaller than an area threshold, and analyze only the first target frames whose area is greater than or equal to the area threshold, that is, detect whether each qualifying first target frame has an overlapping area with the second target frame.
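The area filtering and overlap test described above can be sketched as follows. This is a minimal sketch, not the patent's implementation: the box format `(x1, y1, x2, y2)`, the helper names, and the concrete area threshold are illustrative assumptions.

```python
def boxes_overlap(box_a, box_b):
    """Return True if two axis-aligned boxes (x1, y1, x2, y2) share any area."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    return ax1 < bx2 and bx1 < ax2 and ay1 < by2 and by1 < ay2

def box_area(box):
    x1, y1, x2, y2 = box
    return max(0, x2 - x1) * max(0, y2 - y1)

def pedestrians_with_luggage(pedestrian_boxes, luggage_boxes, area_threshold):
    """Discard pedestrian boxes below the area threshold, then pair each
    remaining pedestrian box with any overlapping luggage box."""
    kept = [p for p in pedestrian_boxes if box_area(p) >= area_threshold]
    pairs = []
    for p in kept:
        for case in luggage_boxes:
            if boxes_overlap(p, case):
                pairs.append((p, case))
    return pairs
```

A pedestrian box that overlaps a luggage box is treated as "carrying" that luggage case; pedestrians whose boxes are too small (too far from the camera) are ignored entirely.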
Step 303, if the pedestrian in the first image carries a trunk, establishing a first association relationship between the pedestrian in the first image and the trunk carried by the pedestrian.
In the embodiment of the application, if the pedestrian in the first image carries the luggage, the verification server may establish a first association relationship between the pedestrian in the first image and the luggage carried by the pedestrian. For example, the verification server may perform association processing on a pedestrian in the first image and a luggage carried by the pedestrian, to obtain the first association relationship. After the first association is established, the verification server can query an image of the luggage corresponding to the image of the pedestrian, that is, an image of the luggage carried by the pedestrian, based on the first association and the image of the pedestrian; the verification server may also query an image of a pedestrian corresponding to the image of the luggage case based on the first association relationship and the image of the luggage case.
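The two-way lookup enabled by the first association relationship can be sketched with a toy store; the class name, field names, and the use of string identifiers in place of images are hypothetical, not taken from the patent.

```python
class FirstAssociation:
    """Toy store of pedestrian-image -> luggage-image links, queryable in
    both directions. String identifiers stand in for the actual images."""
    def __init__(self):
        self._by_pedestrian = {}
        self._by_luggage = {}

    def associate(self, pedestrian_img, luggage_img):
        # Record the link once; both query directions are then served.
        self._by_pedestrian[pedestrian_img] = luggage_img
        self._by_luggage[luggage_img] = pedestrian_img

    def luggage_of(self, pedestrian_img):
        return self._by_pedestrian.get(pedestrian_img)

    def pedestrian_of(self, luggage_img):
        return self._by_luggage.get(luggage_img)
```

For example, after `associate("ped_001", "case_002")`, both `luggage_of("ped_001")` and `pedestrian_of("case_002")` resolve, mirroring the two query directions described above.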
Optionally, so that the alarm information generated later can effectively remind a pedestrian after the verification server determines that a luggage case has been wrongly taken or wrongly claimed, the verification server may associate the pedestrian in the first image with the identity information of the pedestrian in the process of establishing the first association relationship. For example, please refer to fig. 5, which is a flowchart of a method for establishing a first association relationship according to an embodiment of the present application. The method for establishing the first association relationship may include the following steps:
Step 3031, face feature extraction processing is performed on the pedestrians in the first image, so as to obtain face feature values of the pedestrians in the first image.
For example, the verification server may perform face feature extraction processing on the pedestrian in the first image by using a face recognition algorithm, so as to obtain a fifth feature value of the pedestrian in the first image, where the fifth feature value may be a face feature value. It should be noted that the face recognition algorithm may be a deep convolutional neural network algorithm.
Step 3032, acquiring identity information of the pedestrian in the first image based on the face characteristic value.
In the embodiment of the application, the verification server can acquire the identity information of the pedestrian in the first image based on the face characteristic value. Optionally, the identity information may include: name information of the pedestrian, and at least one of sex information and age information of the pedestrian.
It should be noted that, there are multiple realizable ways for the verification server to obtain the identity information of the pedestrian in the first image, and the following two realizable ways are taken as examples for illustrative purposes in the embodiment of the present application:
In a first implementation, the verification server may establish a communication connection with an identity server, where the identity server records face information and the identity information corresponding to the face information. In this case, the verification server may send the face feature value to the identity server; the identity server may obtain the face information matched with the face feature value and send the identity information corresponding to that face information back to the verification server; and the verification server may determine the received identity information as the identity information of the pedestrian in the first image.
In a second implementation, an automatic ticket checking machine is typically provided at the station entrance, and a pedestrian entering the station needs to swipe a document such as an identity card on the automatic ticket checking machine. The automatic ticket checking machine can acquire the face information and the corresponding identity information recorded on the identity card and send them to the verification server, which receives them. The verification server may then acquire, based on the face feature value, the face information sent by the automatic ticket checking machine that matches the face feature value, and determine the identity information corresponding to that face information as the identity information of the pedestrian in the first image.
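The face matching underlying both implementations can be sketched as a nearest-neighbor search over recorded face feature values. The patent does not specify the matching metric; cosine similarity, the record layout, and the threshold value here are illustrative assumptions.

```python
import math

def cosine_similarity(a, b):
    """Cosine similarity of two feature vectors (assumed non-zero)."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def lookup_identity(face_feature, records, threshold=0.9):
    """records: list of (recorded_face_feature, identity_info) pairs.
    Return the identity whose recorded face is most similar to the query,
    provided the similarity clears the threshold; otherwise None."""
    best_identity, best_sim = None, threshold
    for recorded_feature, identity in records:
        sim = cosine_similarity(face_feature, recorded_feature)
        if sim > best_sim:
            best_identity, best_sim = identity, sim
    return best_identity
```

Returning `None` when no record clears the threshold models the case where no matching face information is found.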
Step 3033, the pedestrian in the first image is associated with the identity information of the pedestrian.
In the embodiment of the application, the verification server can perform association processing on the pedestrians in the first image and the identity information of the pedestrians.
Step 3034, a first association relationship is established based on the first image and the identity information of the pedestrians in the first image after the association processing.
In the embodiment of the application, the verification server can establish the first association relationship based on the first image and the identity information of the pedestrians in the first image after the association processing.
For example, after the verification server performs association processing on the identity information of the pedestrian and the pedestrian in the first image, the verification server performs association processing on the pedestrian in the first image and a luggage carried by the pedestrian, so that a first association relationship can be obtained.
It should be noted that, the verification server may establish the first association relationship through the steps 3031 to 3034.
Optionally, after the first association relationship is established, the verification server may record the feature values related to the image of the pedestrian and the image of the luggage in the first association relationship, so as to facilitate verification of the pedestrian or the luggage carried by the pedestrian when the pedestrian leaves the station later.
For example, after establishing the first association relationship, the trunk authentication method may further include the steps of:
and A1, performing human body characteristic extraction processing on pedestrians in the first image to obtain a first characteristic value of the pedestrians in the first image.
In the embodiment of the application, the verification server can perform human body feature extraction processing on the image of the pedestrian in the first image by adopting a human body detection algorithm, to obtain the first feature value of the image of the pedestrian. The first feature value may be a human body feature value. It should be noted that the human body detection algorithm may be a deep learning algorithm.
And B1, after carrying out association processing on the first characteristic value and pedestrians in the first image, recording the first characteristic value and the pedestrians in the first image in a first association relation.
In the embodiment of the application, the verification server can record the first characteristic value in the first association relationship after carrying out association processing on the first characteristic value and the pedestrian in the first image. At this time, the verification server may query a first feature value corresponding to the image of the pedestrian based on the first association relationship and the image of the pedestrian in the first image.
And C1, carrying out luggage characteristic extraction processing on the luggage carried by the pedestrian in the first image to obtain a second characteristic value of the luggage carried by the pedestrian in the first image.
In the embodiment of the application, the verification server can adopt a trunk detection algorithm to extract the trunk characteristics of the image of the trunk carried by the pedestrian in the first image, so as to obtain the second characteristic value of the image of the trunk. The second characteristic value may be a luggage characteristic value. It should be noted that the luggage detection algorithm may be a deep learning algorithm.
And D1, after carrying out association processing on the second characteristic value and the luggage carried by the pedestrian in the first image, recording the second characteristic value in the first association relation.
In the embodiment of the application, the verification server can record the second characteristic value in the first association relationship after carrying out association processing on the second characteristic value and the image of the luggage carried by the pedestrian in the first image. At this time, the verification server may query a second feature value corresponding to the image of the trunk based on the first association relationship and the image of the trunk carried by the pedestrian in the first image.
Through the steps A1 to D1, the verification server may add to the first association relationship the feature values associated with the image of the pedestrian and the image of the luggage case.
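The record accumulated through steps A1 to D1 can be sketched as a single structure; all field names are illustrative stand-ins, not terms from the patent.

```python
def build_first_association(pedestrian_img, luggage_img, identity_info,
                            human_feature, luggage_feature):
    """Assemble one first-association record holding the fields accumulated
    in steps A1-D1: the pedestrian image, the carried luggage case image,
    the pedestrian's identity information, and the two feature values."""
    return {
        "pedestrian_image": pedestrian_img,
        "luggage_image": luggage_img,
        "identity": identity_info,
        "first_feature_value": human_feature,     # human body feature of the pedestrian
        "second_feature_value": luggage_feature,  # feature of the carried luggage case
    }
```

At the exit, such records allow matching either by the pedestrian's human body feature (first feature value) or by the luggage case feature (second feature value).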
Step 304, if the pedestrian in the first image does not carry a luggage case, a second association relationship between the pedestrian in the first image and an indication tag is established.
In the embodiment of the application, if the pedestrian in the first image does not carry the luggage, the verification server may establish a second association relationship between the pedestrian in the first image and the indication tag. The indication tag is used for indicating that the pedestrian in the first image does not carry the luggage. For example, the verification server may perform association processing on the pedestrian in the first image and the indication tag to obtain the second association relationship. After establishing the second association, the verification server may query an indication tag corresponding to the image of the pedestrian based on the second association and the image of the pedestrian.
Optionally, so that the alarm information generated later can effectively remind a pedestrian after the verification server determines that a luggage case has been wrongly taken, the verification server may associate the pedestrian in the first image with the identity information of the pedestrian in the process of establishing the second association relationship. For example, please refer to fig. 6, which is a flowchart of a method for establishing a second association relationship according to an embodiment of the present application. The method for establishing the second association relationship may include the following steps:
Step 3041, performing face feature extraction processing on the pedestrians in the first image to obtain face feature values of the pedestrians in the first image.
This step may refer to step 3031 described above and will not be described in detail herein.
Step 3042, acquiring identity information of the pedestrian in the first image based on the face feature value.
This step may refer to step 3032 described above and will not be described in detail herein.
Step 3043, associating the identity information of the pedestrian and the pedestrian in the first image.
This step may refer to step 3033 described above and will not be described in detail herein.
And step 3044, establishing the second association relationship based on the first image and the identity information of the pedestrians in the first image after the association processing.
For example, after the verification server performs association processing on the pedestrian in the first image and the identity information of the pedestrian, the verification server performs association processing on the pedestrian in the first image and the indication tag, so that a second association relationship can be obtained.
It should be noted that, the verification server may establish the second association relationship through the steps 3041 to 3044.
Optionally, after the second association relationship is established, the verification server may record a feature value related to the image of the pedestrian in the second association relationship, so as to facilitate verification of the pedestrian when the pedestrian leaves the station later.
For example, after establishing the second association relationship, the trunk authentication method may further include the steps of:
and A2, performing human body feature extraction processing on pedestrians in the first image to obtain a first feature value of the pedestrians in the first image.
This step may refer to the above step A1, and will not be described herein.
And B2, after the first characteristic value is associated with the pedestrian in the first image, recording the first characteristic value in a second association relation.
This step may refer to the above step B1, and will not be described herein.
It should be noted that, through the steps A2 to B2, the verification server may add to the second association relationship the feature value associated with the image of the pedestrian.
Step 305, acquiring a second image including the pedestrian acquired by the second camera.
In an embodiment of the present application, the second camera may be disposed at a departure gate in a public area such as an airport, a railway station, or a bus station, and may photograph a pedestrian when the pedestrian is near the departure gate. The second camera may send the second image taken by it to the verification server, and the verification server can receive the second image, thereby acquiring the second image including the pedestrian captured by the second camera.
Step 306, detect whether the pedestrian in the second image is carrying a luggage case.
For example, if the verification server detects that the pedestrian in the second image carries a luggage case, step 307 is performed; if the verification server detects that the pedestrian in the second image does not carry a luggage case, step 310 is performed.
It should be noted that, in the step 306, the manner of detecting whether the pedestrian in the second image carries the luggage may refer to the manner of detecting whether the pedestrian in the first image carries the luggage in the step 302, which is not described herein.
Step 307, if the pedestrian in the second image carries a luggage case, detecting whether the pedestrian in the second image has taken the wrong luggage case based on the first association relationship or the second association relationship.
In the embodiment of the application, if the pedestrian in the second image carries a luggage case, the verification server needs to detect, based on the first association relationship or the second association relationship, whether the pedestrian in the second image has taken the wrong luggage case.
It should be noted that, if the pedestrian in the second image carries a luggage case, there are the following two cases: in the first case, the pedestrian was also carrying a luggage case when entering the station; in the second case, the pedestrian was not carrying a luggage case when entering the station. In both cases it is possible that the pedestrian in the second image has taken the wrong luggage case.
For the first case described above, the verification server may detect whether the pedestrian in the second image has taken the wrong luggage case based on the first association relationship. For example, when the verification server detects, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case, step 308 is performed; when the verification server detects, based on the first association relationship, that the pedestrian in the second image has not taken the wrong luggage case, the process ends.
In the embodiment of the application, there are multiple realizable ways for the verification server to detect, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case; the embodiment of the present application is schematically illustrated by taking the following two realizable ways as examples:
In a first implementation, the verification server detecting, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case may include the following steps:
and A3, when the alternative pedestrian matched with the pedestrian in the second image is acquired in the first association relation, determining an alternative luggage box corresponding to the alternative pedestrian based on the first association relation.
In the embodiment of the application, when the verification server acquires the candidate pedestrian matched with the pedestrian in the second image in the first association relationship, the verification server can determine the candidate luggage corresponding to the candidate pedestrian based on the first association relationship. The pedestrian in the second image is the same person as the candidate pedestrian.
Since the characteristic value related to the image of the pedestrian is recorded in the first association relationship, an alternative pedestrian matching the pedestrian in the second image can be determined in the first association relationship by comparing the characteristic values. For example, acquiring the candidate pedestrian matching the pedestrian in the second image in the first association relationship may include the steps of:
and step A31, performing human body characteristic extraction processing on the pedestrians in the second image to obtain a third characteristic value of the pedestrians in the second image.
For example, the verification server may perform human body feature extraction processing on the pedestrian in the second image by using a human body detection algorithm, to obtain a third feature value of the pedestrian in the second image. The third characteristic value is a human characteristic value. It should be noted that the human body detection algorithm may be a deep learning algorithm.
And step A32, determining the similarity between the third characteristic value and each first characteristic value recorded in the first association relation.
In the embodiment of the application, the verification server can determine the similarity between the third characteristic value and each first characteristic value recorded in the first association relation.
And step A33, determining the pedestrians corresponding to the first characteristic values with the similarity meeting the requirements as alternative pedestrians.
In the embodiment of the application, the verification server can determine the pedestrian corresponding to a first feature value whose similarity meets the requirement in the first association relationship as the candidate pedestrian.
For example, after determining the similarity between the third feature value and each first feature value recorded in the first association relationship, the verification server may sort all the determined similarities in descending order to obtain a sequence of similarities. The verification server may then extract the first target number of similarities in the sequence, obtaining at least one similarity, and determine the first feature values among them whose similarity is greater than a similarity threshold as the first feature values that meet the requirement. For example, if the target number is 3, the verification server may extract the first 3 similarities in the sequence and determine the first feature values, among these 3, whose similarity is greater than the similarity threshold, as the first feature values that meet the requirement.
It should be noted that, through the above steps a31 to a33, the verification server may obtain, in the first association relationship, an alternative pedestrian matching the pedestrian in the second image.
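The rank-then-threshold selection of steps A31 to A33 can be sketched as follows; the pair layout, the target number of 3, and the threshold value are illustrative assumptions.

```python
def candidate_matches(similarities, target_number=3, threshold=0.8):
    """similarities: list of (record_id, similarity) pairs, one per first
    feature value in the first association relationship. Sort descending,
    keep the top `target_number`, then keep only those whose similarity
    exceeds the threshold."""
    ranked = sorted(similarities, key=lambda item: item[1], reverse=True)
    top = ranked[:target_number]
    return [record_id for record_id, sim in top if sim > threshold]
```

The two-stage filter bounds the number of comparisons (top-k) while still rejecting weak matches (threshold), which is why both criteria appear in the text.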
In the embodiment of the application, after the verification server acquires the alternative pedestrian, the verification server can acquire the alternative luggage corresponding to the alternative pedestrian based on the first association relation.
And B3, determining whether the pedestrian in the second image has taken the wrong luggage case by detecting whether the luggage case carried by the pedestrian in the second image matches the candidate luggage case.
In an embodiment of the application, the verification server may determine whether the pedestrian in the second image has taken the wrong luggage case by detecting whether the luggage case carried by the pedestrian in the second image matches the candidate luggage case. For example, if the verification server detects that the luggage case carried by the pedestrian in the second image does not match the candidate luggage case, it determines that the pedestrian in the second image has taken the wrong luggage case, and step 308 is performed; if the verification server detects that the luggage case carried by the pedestrian in the second image matches the candidate luggage case, it determines that the pedestrian in the second image has not taken the wrong luggage case, and the process ends.
Since the characteristic value related to the image of the trunk is recorded in the first association relationship, it is possible to determine whether the trunk carried by the pedestrian in the second image matches with the alternative trunk by comparing the characteristic values. For example, detecting whether the luggage case carried by the pedestrian in the second image matches the candidate luggage case may include the steps of:
and step B31, carrying out luggage characteristic extraction processing on the luggage carried by the pedestrian in the second image to obtain a fourth characteristic value of the luggage.
For example, the verification server may perform a luggage feature extraction process on an image of a luggage carried by the pedestrian in the second image by using a luggage detection algorithm, to obtain a fourth feature value of the image of the luggage. The fourth characteristic value may be a luggage characteristic value.
And step B32, detecting whether the similarity between the fourth characteristic value and the second characteristic value of the alternative luggage case is larger than a similarity threshold value.
In the embodiment of the application, the verification server can firstly inquire the second characteristic value corresponding to the alternative luggage in the first association relation; then, the verification server can determine the similarity between the fourth characteristic value and the second characteristic value of the alternative luggage; finally, the verification server may detect whether the similarity is greater than a similarity threshold.
For example, if the verification server detects that the similarity between the fourth feature value and the second feature value of the candidate luggage case is not greater than the similarity threshold, it determines that the luggage case does not match the candidate luggage case, and it can then be determined that the pedestrian in the second image has taken the wrong luggage case; if the similarity is greater than the similarity threshold, it determines that the luggage case matches the candidate luggage case, and it can then be determined that the pedestrian in the second image has not taken the wrong luggage case.
In a second implementation manner, the verification server detecting, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case may include the following steps:
And A4, when the alternative luggage case matched with the luggage case carried by the pedestrian in the second image is acquired in the first association relation, determining the alternative pedestrian corresponding to the alternative luggage case based on the first association relation.
In the embodiment of the application, when the verification server acquires the candidate luggage case matched with the luggage case carried by the pedestrian in the second image in the first association relationship, the verification server can determine the candidate pedestrian corresponding to the candidate luggage case based on the first association relationship. The luggage case carried by the pedestrian in the second image is the same luggage case as the alternative luggage case.
Since the characteristic value related to the image of the trunk is recorded in the first association relationship, an alternative trunk matching the trunk carried by the pedestrian in the second image can be determined in the first association relationship by comparing the characteristic values. For example, acquiring an alternative trunk matching a trunk carried by a pedestrian in the second image in the first association relationship may include the steps of:
and step A41, carrying out luggage characteristic extraction processing on the luggage carried by the pedestrian in the second image to obtain a fourth characteristic value of the luggage.
This step may refer to the above step B31, and will not be described herein.
And step A42, determining the similarity between the fourth characteristic value and each second characteristic value recorded in the first association relation.
This step may refer to the above step a32, and will not be described herein.
And step A43, determining the trunk corresponding to the second characteristic value with the similarity meeting the requirement as an alternative trunk.
This step may refer to the above step a33, and will not be described herein.
It should be noted that, through the above steps a41 to a43, the verification server may obtain, in the first association relationship, an alternative trunk matching with a trunk carried by a pedestrian in the second image.
In the embodiment of the application, after the verification server acquires the alternative luggage, the verification server can acquire the alternative pedestrian corresponding to the alternative luggage based on the first association relation.
And B4, determining whether the pedestrian in the second image has taken the wrong luggage case by detecting whether the pedestrian in the second image matches the candidate pedestrian.
In the embodiment of the application, the verification server can determine whether the pedestrian in the second image has taken the wrong luggage case by detecting whether the pedestrian in the second image matches the candidate pedestrian. For example, if the verification server detects that the pedestrian in the second image does not match the candidate pedestrian, it determines that the pedestrian in the second image has taken the wrong luggage case, and step 308 is executed; if the verification server detects that the pedestrian in the second image matches the candidate pedestrian, it determines that the pedestrian in the second image has not taken the wrong luggage case, and the process ends.
Since the characteristic values related to the images of the pedestrians are recorded in the first association relationship, whether the pedestrians in the second image are matched with the alternative pedestrians can be determined through comparison of the characteristic values. For example, detecting whether a pedestrian in the second image matches an alternative pedestrian may include the steps of:
and step B41, performing human body characteristic extraction processing on the pedestrians in the second image to obtain third characteristic values of the pedestrians in the second image.
This step may refer to the above step A31, and will not be described herein.
And step B42, detecting whether the similarity between the third characteristic value and the first characteristic value of the candidate pedestrian is greater than a similarity threshold value.
In the embodiment of the application, the verification server can firstly inquire a first characteristic value corresponding to the alternative pedestrian in the first association relation; then, the verification server can determine the similarity between the third characteristic value and the first characteristic value of the candidate pedestrian; finally, the verification server may detect whether the similarity is greater than a similarity threshold.
For example, if the verification server detects that the similarity between the third feature value and the first feature value of the candidate pedestrian is not greater than the similarity threshold, it determines that the pedestrian does not match the candidate pedestrian, and it can then be determined that the pedestrian in the second image has taken the wrong luggage case; if the similarity is greater than the similarity threshold, it determines that the pedestrian matches the candidate pedestrian, and it can then be determined that the pedestrian in the second image has not taken the wrong luggage case.
In the first case, with the first and second implementations described above, the verification server may detect, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case.
For the second case described above, the verification server may detect whether the pedestrian in the second image has taken the wrong luggage case based on the second association relationship. For example, when the verification server detects, based on the second association relationship, that the pedestrian in the second image has taken the wrong luggage case, step 309 is performed. In the second case, the pedestrian was not carrying a luggage case when entering the station, so if the pedestrian in the second image now carries a luggage case, that luggage case must have been wrongly taken.
In the embodiment of the present application, since the association relationship between the image of the pedestrian and the indication tag is recorded in the second association relationship, when the verification server obtains, in the second association relationship, a candidate pedestrian matching the pedestrian in the second image, the verification server may determine that the pedestrian in the second image has taken the wrong luggage case, and at this time step 309 is executed.
It should be noted that, the method of the verification server obtaining the candidate pedestrian matching the pedestrian in the second image in the second association relationship may refer to the method of obtaining the candidate pedestrian matching the pedestrian in the second image in the first association relationship in step A3 in the first case, which is not described herein.
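The two-case decision of steps 307 to 309 can be condensed into one dispatch function. This is a sketch under simplifying assumptions: exact matching by identifier stands in for the feature-similarity comparisons of steps B31–B32 and B41–B42, and the alarm strings are placeholders.

```python
def check_exit(pedestrian_id, luggage_id, first_assoc, second_assoc):
    """Exit-gate decision sketch. first_assoc maps a pedestrian identifier to
    the luggage case recorded at the entrance; second_assoc is the set of
    pedestrians recorded with the 'no luggage carried' indication tag.
    Returns an alarm string, or None when no alarm is needed."""
    if pedestrian_id in first_assoc:            # first case: entered with a case
        if first_assoc[pedestrian_id] != luggage_id:
            return "first alarm: wrong luggage case"   # step 308
        return None                              # matching case, no alarm
    if pedestrian_id in second_assoc:           # second case: entered without a case
        return "third alarm: wrong luggage case"       # step 309
    return None
```

Note that in the second case any carried luggage case triggers the alarm, matching the text's observation that such a pedestrian cannot legitimately be carrying one.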
Step 308: when it is detected, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case, generate first alarm information indicating that the wrong luggage case was taken.
In this embodiment of the application, when the verification server detects, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case, the verification server may generate first alarm information indicating that the wrong luggage case was taken.
Optionally, the first alarm information may be voice information. The verification server may send the first alarm information to an audio playback device, such as a loudspeaker, deployed at the exit; after receiving the first alarm information sent by the verification server, the audio playback device plays it, thereby reminding the pedestrian who took the wrong luggage case.
In this embodiment of the application, the first association relationship also records the identity information corresponding to the image of the pedestrian, so the first alarm information can be generated based on that identity information, making the reminder to the pedestrian more effective.
For example, the verification server generating the first alarm information indicating that the wrong luggage case was taken may include: obtaining, in the first association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and generating the first alarm information based on the identity information of the candidate pedestrian. For example, assume the identity information of the candidate pedestrian includes the name "Zhang San"; the first alarm information generated by the verification server may then be: "Zhang San, did you take the wrong luggage case?"
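A minimal sketch of this personalized alarm generation, assuming the identity information is a record with a "name" field (the field name is an illustrative assumption, not from the patent):

```python
def generate_first_alarm(identity: dict) -> str:
    """Build the personalized voice alarm from the candidate pedestrian's
    identity information; the 'name' field is an assumed record layout."""
    name = identity.get("name", "Passenger")
    return f"{name}, did you take the wrong luggage case?"
```

Calling `generate_first_alarm({"name": "Zhang San"})` yields the reminder quoted above.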
Step 309: when it is detected, based on the second association relationship, that the pedestrian in the second image has taken the wrong luggage case, generate third alarm information indicating that the wrong luggage case was taken.
In this embodiment of the application, when the verification server detects, based on the second association relationship, that the pedestrian in the second image has taken the wrong luggage case, the verification server may generate third alarm information indicating that the wrong luggage case was taken.
Optionally, the third alarm information may be voice information. For example, the verification server generating the third alarm information indicating that the wrong luggage case was taken may include: obtaining, in the second association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and generating the third alarm information based on the identity information of the candidate pedestrian. For example, assume the identity information of the candidate pedestrian includes the name "Li Si"; the third alarm information generated by the verification server may then be: "Li Si, you did not carry a luggage case when entering the station; you have taken someone else's luggage case."
It should be noted that, in other possible implementations, the first alarm information and the third alarm information generated by the verification server to indicate that the wrong luggage case was taken may be the same, for example both may be an alarm tone.
Step 310: if the pedestrian in the second image does not carry a luggage case, detect, based on the first association relationship, whether the pedestrian in the second image has failed to collect a luggage case.
In this embodiment of the application, if the pedestrian in the second image does not carry a luggage case, whether the pedestrian in the second image has failed to collect a luggage case is detected based on the first association relationship.
It should be noted that if the pedestrian in the second image does not carry a luggage case, there are two cases: in the first case, the pedestrian carried a luggage case when entering the station; in the second case, the pedestrian did not carry a luggage case when entering the station. Only in the first case can the pedestrian in the second image have failed to collect a luggage case; in the second case, the pedestrian in the second image has no luggage case to collect.
Therefore, the verification server needs to detect, based on the first association relationship, whether the pedestrian in the second image has failed to collect a luggage case. When the verification server detects, based on the first association relationship, that the pedestrian in the second image has failed to collect a luggage case, step 311 is executed. When the verification server detects, based on the first association relationship, that the pedestrian in the second image has not failed to collect a luggage case, the process ends.
In this embodiment of the present application, since the first association relationship records the association between the image of a pedestrian and the image of the luggage case carried by that pedestrian, when the verification server finds, in the first association relationship, a candidate pedestrian matching the pedestrian in the second image, it may determine that the pedestrian in the second image has failed to collect a luggage case, and step 311 is executed. When the verification server does not find such a candidate pedestrian in the first association relationship, it may find a candidate pedestrian matching the pedestrian in the second image in the second association relationship, determine that the pedestrian in the second image has not failed to collect a luggage case, and end the process.
It should be noted that the method by which the verification server finds, in the first association relationship, a candidate pedestrian matching the pedestrian in the second image may refer to the method of finding a candidate pedestrian matching the pedestrian in the second image in the first association relationship in step A3 of step 307, and is not repeated here.
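A minimal sketch of the step 310 decision, assuming the two association relationships are held as in-memory dicts keyed by a hashable pedestrian feature value (all names are illustrative, not from the patent):

```python
def check_missed_luggage(pedestrian_feature, first_association, second_association):
    """Step 310 decision for a pedestrian carrying no luggage at the exit:
    a hit in the first association relationship means the pedestrian entered
    the station with a luggage case and has therefore failed to collect it."""
    if pedestrian_feature in first_association:
        return True   # entered with luggage -> luggage left uncollected
    # A hit in the second association relationship (entered without luggage),
    # or no hit at all, means nothing was left behind.
    return False
```

Here `second_association` is kept in the signature only to mirror the two-table lookup described above; the negative outcome follows from the absence of a first-association hit.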
Step 311: when it is detected, based on the first association relationship, that the pedestrian in the second image has failed to collect a luggage case, generate second alarm information indicating that a luggage case has not been collected.
In this embodiment of the application, when the verification server detects, based on the first association relationship, that the pedestrian in the second image has failed to collect a luggage case, the verification server may generate second alarm information indicating that a luggage case has not been collected.
Optionally, the second alarm information may be voice information. In this embodiment of the application, the first association relationship also records the identity information corresponding to the image of the pedestrian, so the second alarm information can be generated based on that identity information, making the reminder to the pedestrian more effective. For example, the verification server generating the second alarm information indicating that a luggage case has not been collected may include: obtaining, in the first association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and generating the second alarm information based on the identity information of the candidate pedestrian. For example, assume the identity information of the candidate pedestrian includes the name "Wang Wu"; the second alarm information generated by the verification server may then be: "Wang Wu, did you forget to take your luggage case?"
It should be noted that, in other possible implementations, the first alarm information indicating that the wrong luggage case was taken and the second alarm information indicating that a luggage case has not been collected may be the same, for example both may be alarm audio.
It should be further noted that the order of the steps of the luggage case verification method provided by the embodiments of the present application may be adjusted appropriately, and steps may be added or removed as required. Any variation readily conceivable to those skilled in the art within the technical scope of the present disclosure shall fall within the protection scope of the present disclosure and is not described further.
In summary, in the luggage case verification method provided by the embodiments of the present application, when a pedestrian enters the station, the verification server may acquire the first image through the first camera; if the pedestrian in the first image carries a luggage case, the verification server establishes a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian. When the pedestrian exits the station, the verification server may acquire the second image through the second camera; if the pedestrian in the second image carries a luggage case, the verification server may determine, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case, and upon detecting that the pedestrian in the second image has taken the wrong luggage case, generate first alarm information indicating that the wrong luggage case was taken. Compared with manual verification, automatically verifying the luggage case carried by the pedestrian effectively reduces the probability of taking the wrong luggage case. Moreover, when the pedestrian in the second image has failed to collect a luggage case, the verification server may also generate second alarm information indicating that the luggage case has not been collected, effectively reducing the probability of luggage cases being left uncollected.
Referring to fig. 7, fig. 7 is a block diagram of a luggage case verification device according to an embodiment of the present application. The luggage case verification device 400 may be integrated in the verification server 101 of the luggage case verification system 100 shown in fig. 1. The luggage case verification device 400 may include:
The first acquisition module 401 is configured to acquire a first image including a pedestrian acquired by a first camera.
The first establishing module 402 is configured to establish a first association relationship between the pedestrian in the first image and the luggage carried by the pedestrian if the pedestrian in the first image carries the luggage.
A second acquiring module 403, configured to acquire a second image including the pedestrian acquired by the second camera.
The first generating module 404 is configured to generate, if the pedestrian in the second image carries a luggage case, first alarm information indicating that the wrong luggage case was taken when it is detected, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case.
In summary, with the luggage case verification device provided by the embodiments of the present application, when a pedestrian enters the station, the verification server may acquire the first image through the first camera; if the pedestrian in the first image carries a luggage case, the verification server establishes a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian. When the pedestrian exits the station, the verification server may acquire the second image through the second camera; if the pedestrian in the second image carries a luggage case, the verification server may determine, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case, and upon detecting that the pedestrian in the second image has taken the wrong luggage case, generate first alarm information indicating that the wrong luggage case was taken. Compared with manual verification, automatically verifying the luggage case carried by the pedestrian effectively reduces the probability of taking the wrong luggage case.
Optionally, the luggage verification device may further include:
The first determining module is configured to, if the pedestrian in the second image carries a luggage case, determine, when a candidate pedestrian matching the pedestrian in the second image is found in the first association relationship, the candidate luggage case corresponding to the candidate pedestrian based on the first association relationship.
The first detection module is configured to detect whether the luggage case carried by the pedestrian in the second image matches the candidate luggage case.
The second determining module is configured to determine that the pedestrian in the second image has taken the wrong luggage case if the luggage case carried by the pedestrian in the second image does not match the candidate luggage case.
Optionally, the luggage verification device may further include:
The third determining module is configured to determine, when a candidate luggage case matching the luggage case carried by the pedestrian in the second image is found in the first association relationship, the candidate pedestrian corresponding to the candidate luggage case based on the first association relationship.
The second detection module is configured to detect whether the pedestrian in the second image matches the candidate pedestrian.
The fourth determining module is configured to determine that the pedestrian in the second image has taken the wrong luggage case if the pedestrian in the second image does not match the candidate pedestrian.
Optionally, referring to fig. 8, fig. 8 is a block diagram of another luggage verification device according to an embodiment of the present application. The luggage verification device 400 may further include:
The second generating module 405 is configured to generate, if the pedestrian in the second image does not carry a luggage case, second alarm information indicating that a luggage case has not been collected when it is detected, based on the first association relationship, that the pedestrian in the second image has failed to collect a luggage case.
Optionally, the luggage verification device may further include:
The fifth determining module is configured to determine, if the pedestrian in the second image does not carry a luggage case, that the pedestrian in the second image has failed to collect a luggage case when a candidate pedestrian matching the pedestrian in the second image is found in the first association relationship.
Optionally, as shown in fig. 8, the trunk authentication device 400 may further include:
The second establishing module 406 is configured to establish a second association relationship between the pedestrian in the first image and an indication tag if the pedestrian in the first image does not carry a luggage case, where the indication tag indicates that the pedestrian in the first image does not carry a luggage case.
The third generating module 407 is configured to generate, if the pedestrian in the second image carries a luggage case, third alarm information indicating that the wrong luggage case was taken when it is detected, based on the second association relationship, that the pedestrian in the second image has taken the wrong luggage case.
Optionally, the luggage verification device may further include:
The sixth determining module is configured to determine, if the pedestrian in the second image carries a luggage case, that the pedestrian in the second image has taken the wrong luggage case when a candidate pedestrian matching the pedestrian in the second image is found in the second association relationship.
Optionally, the first establishing module 402 is configured to: extract the face features of the pedestrian in the first image to obtain the face feature value of the pedestrian in the first image; obtain the identity information of the pedestrian in the first image based on the face feature value; associate the pedestrian in the first image with the identity information of the pedestrian; and establish the first association relationship based on the first image and the associated identity information of the pedestrian in the first image.
Optionally, the first generating module 404 is configured to: obtain, in the first association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and generate the first alarm information based on the identity information of the candidate pedestrian. The second generating module 405 is configured to: obtain, in the first association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and generate the second alarm information based on the identity information of the candidate pedestrian.
Optionally, the second establishing module 406 is configured to: extract the face features of the pedestrian in the first image to obtain the face feature value of the pedestrian in the first image; obtain the identity information of the pedestrian in the first image based on the face feature value; associate the pedestrian in the first image with the identity information of the pedestrian; and establish the second association relationship based on the first image and the associated identity information of the pedestrian in the first image.
Optionally, the third generating module 407 is configured to: obtain, in the second association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and generate the third alarm information based on the identity information of the candidate pedestrian.
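The establishing modules described above can be sketched as follows; the identity database, feature extractor, and record layout are all hypothetical placeholders for whatever face-recognition backend the verification server uses:

```python
# Hypothetical identity database keyed by face feature value.
identity_database = {("feat-zhang-san",): {"name": "Zhang San"}}

def extract_face_feature(image):
    """Placeholder for a real face-feature extractor."""
    return ("feat-zhang-san",)

def establish_association(first_image, luggage_image=None):
    """Build a first-association entry (pedestrian + luggage) or, when no
    luggage is carried, a second-association entry (pedestrian + tag)."""
    feature = extract_face_feature(first_image)
    identity = identity_database.get(feature, {})
    entry = {"feature": feature, "identity": identity}
    if luggage_image is not None:
        entry["luggage"] = luggage_image      # first association relationship
    else:
        entry["tag"] = "NO_LUGGAGE"           # second association relationship
    return entry
```

The same extraction-plus-lookup path serves both establishing modules; only the final field (luggage image versus indication tag) differs.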
In summary, with the luggage case verification device provided by the embodiments of the present application, when a pedestrian enters the station, the verification server may acquire the first image through the first camera; if the pedestrian in the first image carries a luggage case, the verification server establishes a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian. When the pedestrian exits the station, the verification server may acquire the second image through the second camera; if the pedestrian in the second image carries a luggage case, the verification server may determine, based on the first association relationship, whether the pedestrian in the second image has taken the wrong luggage case, and upon detecting that the pedestrian in the second image has taken the wrong luggage case, generate first alarm information indicating that the wrong luggage case was taken. Compared with manual verification, automatically verifying the luggage case carried by the pedestrian effectively reduces the probability of taking the wrong luggage case. Moreover, when the pedestrian in the second image has failed to collect a luggage case, the verification server may also generate second alarm information indicating that the luggage case has not been collected, effectively reducing the probability of luggage cases being left uncollected.
An embodiment of the present application also provides a luggage case verification system, whose structure may be as shown in fig. 1. The luggage case verification system 100 includes: a verification server 101, a first camera 102, and a second camera 103. The luggage case verification device 400 shown in fig. 7 or fig. 8 may be integrated on the verification server 101.
By way of example, the verification server, the first camera, and the second camera in the luggage case verification system function as follows:
The first camera is used for acquiring a first image containing pedestrians and sending the first image to the verification server.
The verification server is used for acquiring a first image sent by the first camera.
The verification server is used for establishing a first association relationship between the pedestrian in the first image and the luggage carried by the pedestrian if the pedestrian in the first image carries the luggage.
The second camera is used for collecting a second image containing pedestrians and sending the second image to the verification server.
The verification server is used for acquiring a second image sent by the second camera.
The verification server is used for generating, if the pedestrian in the second image carries a luggage case, first alarm information indicating that the wrong luggage case was taken when it is detected, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case.
It will be clearly understood by those skilled in the art that, for convenience and brevity of description, specific working procedures of the above-described system, apparatus and module may refer to corresponding procedures in the foregoing method embodiments, which are not repeated herein.
Embodiments of the present application also provide a computer device, which may be the verification server 101 in the luggage case verification system 100 shown in fig. 1. The computer device includes: at least one processor; and at least one memory;
Wherein the at least one memory stores one or more programs;
the at least one processor is configured to execute the programs stored in the at least one memory to implement the luggage case verification method shown in fig. 2 or fig. 3. For example, the method may include:
acquiring a first image containing a pedestrian, acquired by the first camera; if the pedestrian in the first image carries a luggage case, establishing a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian; acquiring a second image containing the pedestrian, acquired by the second camera; and if the pedestrian in the second image carries a luggage case, generating first alarm information indicating that the wrong luggage case was taken when it is detected, based on the first association relationship, that the pedestrian in the second image has taken the wrong luggage case.
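The four steps above can be sketched end to end as follows; feature extraction and matching are stubbed out, the association table is a plain dict, and all names are illustrative assumptions rather than the patent's implementation:

```python
first_association = {}  # pedestrian feature -> luggage feature

def on_entrance(pedestrian_feat, luggage_feat):
    """Steps 1-2: at the station entrance, record pedestrian/luggage pairs."""
    if luggage_feat is not None:
        first_association[pedestrian_feat] = luggage_feat

def on_exit(pedestrian_feat, luggage_feat):
    """Steps 3-4: at the exit, compare the carried luggage with the record
    and return the first alarm information when they disagree."""
    if luggage_feat is None:
        return None
    recorded = first_association.get(pedestrian_feat)
    if recorded is not None and recorded != luggage_feat:
        return "wrong luggage case taken"
    return None
```

A pedestrian who leaves with the same luggage feature recorded at the entrance triggers no alarm; a mismatch yields the alarm string.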
Embodiments of the present application also provide a computer-readable storage medium, which is a non-volatile storage medium having at least one instruction stored therein; the instruction is loaded and executed by a processor to implement the luggage case verification method shown in fig. 2 or fig. 3.
In the present disclosure, the terms "first" and "second" are used for descriptive purposes only and are not to be construed as indicating or implying relative importance. The term "plurality" refers to two or more, unless explicitly defined otherwise.
It will be understood by those skilled in the art that all or part of the steps for implementing the above embodiments may be implemented by hardware, or may be implemented by a program for instructing relevant hardware, where the program may be stored in a computer readable storage medium, and the storage medium may be a read-only memory, a magnetic disk or an optical disk, etc.
The foregoing description of the preferred embodiments of the present application is not intended to limit the application, but is intended to cover all modifications, equivalents, alternatives, and improvements falling within the spirit and principles of the application.

Claims (14)

1. A luggage case verification method, the method being applied to a verification server, the method comprising:
acquiring a first image containing a pedestrian, acquired by a first camera deployed at a station entrance;
determining, in the case that a plurality of first target frames identifying areas where pedestrians are located are detected in the first image by a target detection algorithm, the first target frames, among the plurality of first target frames, whose frame area is greater than or equal to an area threshold; for any determined first target frame, if a second target frame identifying an area where a luggage case is located is detected in the first image by the target detection algorithm and there is an overlapping area between the second target frame and the first target frame, determining that the luggage case in the second target frame is carried by the pedestrian in the first target frame; and if there is no second target frame in the first image that has an overlapping area with the first target frame, determining that the pedestrian in the first target frame does not carry a luggage case;
If the pedestrian in the first image carries a luggage case, establishing a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian, wherein the first association relationship comprises an association relationship between a characteristic value of an image of a first target frame corresponding to the pedestrian in the first image and an image of a second target frame corresponding to the luggage case carried by the pedestrian in the first image;
If the pedestrian in the first image does not carry the luggage, a second association relationship between the pedestrian in the first image and an indication tag is established, wherein the indication tag is used for indicating that the pedestrian in the first image does not carry the luggage, and the second association relationship comprises association relationship between the characteristic value of the image of the first target frame corresponding to the pedestrian in the first image and the indication tag;
acquiring a second image containing a pedestrian, acquired by a second camera deployed at a station exit;
if the pedestrian in the second image carries a luggage case, generating first alarm information indicating that the wrong luggage case was taken when it is detected, based on the characteristic value of the image of the pedestrian in the second image or the characteristic value of the image of the luggage case in the second image and the first association relationship, that the pedestrian in the second image has taken the wrong luggage case; or
if the pedestrian in the second image carries a luggage case, generating third alarm information indicating that the wrong luggage case was taken when it is detected, based on the characteristic value of the image of the pedestrian in the second image and the second association relationship, that the pedestrian in the second image has taken the wrong luggage case;
wherein after acquiring the second image containing the pedestrian acquired by the second camera deployed at the station exit, if the pedestrian in the second image carries a luggage case, the method further comprises: determining, when a candidate luggage case matching the luggage case carried by the pedestrian in the second image is found in the first association relationship, the candidate pedestrian corresponding to the candidate luggage case based on the first association relationship; detecting whether the pedestrian in the second image matches the candidate pedestrian; and if the pedestrian in the second image does not match the candidate pedestrian, determining that the pedestrian in the second image has taken the wrong luggage case;
wherein after acquiring the second image containing the pedestrian acquired by the second camera deployed at the station exit, if the pedestrian in the second image carries a luggage case, the method further comprises: determining, when a candidate pedestrian matching the pedestrian in the second image is found in the second association relationship, that the pedestrian in the second image has taken the wrong luggage case;
wherein establishing the second association relationship between the pedestrian in the first image and the indication tag comprises: extracting the face features of the pedestrian in the first image to obtain the face feature value of the pedestrian in the first image; obtaining the identity information of the pedestrian in the first image based on the face feature value; associating the pedestrian in the first image with the identity information of the pedestrian; and establishing the second association relationship based on the first image and the associated identity information of the pedestrian in the first image;
wherein generating the third alarm information indicating that the wrong luggage case was taken comprises: obtaining, in the second association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and generating the third alarm information based on the identity information of the candidate pedestrian.
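The entrance-side detection in claim 1, which keeps pedestrian target frames whose area meets a threshold and pairs each with an overlapping luggage target frame, can be sketched as follows; the box format (x1, y1, x2, y2) and the threshold value are illustrative assumptions, since the claim does not fix either:

```python
def box_area(box):
    """Axis-aligned box as (x1, y1, x2, y2)."""
    return max(0, box[2] - box[0]) * max(0, box[3] - box[1])

def boxes_overlap(a, b):
    """True if the two axis-aligned boxes share a non-empty intersection."""
    ix = min(a[2], b[2]) - max(a[0], b[0])
    iy = min(a[3], b[3]) - max(a[1], b[1])
    return ix > 0 and iy > 0

def pair_luggage(pedestrian_boxes, luggage_boxes, area_threshold=5000):
    """For each sufficiently large pedestrian target frame, find an overlapping
    luggage target frame; None means the pedestrian carries no luggage."""
    pairs = {}
    for i, p in enumerate(pedestrian_boxes):
        if box_area(p) < area_threshold:
            continue  # discard small (distant or partial) detections
        match = next((j for j, l in enumerate(luggage_boxes)
                      if boxes_overlap(p, l)), None)
        pairs[i] = match
    return pairs
```

A pedestrian box overlapping no luggage box maps to None, which corresponds to the "does not carry a luggage case" branch of the claim.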
2. The method of claim 1, wherein after acquiring the second image containing the pedestrian acquired by the second camera deployed at the station exit, if the pedestrian in the second image carries a luggage case, the method further comprises:
determining, when a candidate pedestrian matching the pedestrian in the second image is found in the first association relationship, the candidate luggage case corresponding to the candidate pedestrian based on the first association relationship;
detecting whether the luggage case carried by the pedestrian in the second image matches the candidate luggage case; and
if the luggage case carried by the pedestrian in the second image does not match the candidate luggage case, determining that the pedestrian in the second image has taken the wrong luggage case.
3. The method of claim 1, wherein after acquiring the second image containing the pedestrian acquired by the second camera deployed at the station exit, the method further comprises:
if the pedestrian in the second image does not carry a luggage case, generating second alarm information indicating that a luggage case has not been collected when it is detected, based on the first association relationship, that the pedestrian in the second image has failed to collect a luggage case.
4. The method of claim 3, wherein after acquiring the second image containing the pedestrian acquired by the second camera deployed at the station exit, if the pedestrian in the second image does not carry a luggage case, the method further comprises:
determining, when a candidate pedestrian matching the pedestrian in the second image is found in the first association relationship, that the pedestrian in the second image has failed to collect a luggage case.
5. The method of claim 3, wherein establishing the first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian comprises:
extracting the face characteristics of the pedestrians in the first image to obtain face characteristic values of the pedestrians in the first image;
acquiring identity information of pedestrians in the first image based on the face characteristic values;
associating the pedestrian in the first image with the identity information of the pedestrian;
and establishing the first association relationship based on the first image and the associated identity information of the pedestrian in the first image.
6. The method of claim 5, wherein generating the first alarm information indicating that the wrong luggage case was taken comprises:
obtaining, in the first association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and
generating the first alarm information based on the identity information of the candidate pedestrian;
wherein generating the second alarm information indicating that a luggage case has not been collected comprises:
obtaining, in the first association relationship, the identity information corresponding to the candidate pedestrian matching the pedestrian in the second image; and
generating the second alarm information based on the identity information of the candidate pedestrian.
7. A luggage case verification device, the device being applied to a verification server, the device comprising:
The first acquisition module, configured to acquire a first image containing a pedestrian, acquired by a first camera deployed at a station entrance; determine, in the case that a plurality of first target frames identifying areas where pedestrians are located are detected in the first image by a target detection algorithm, the first target frames, among the plurality of first target frames, whose frame area is greater than or equal to an area threshold; for any determined first target frame, if a second target frame identifying an area where a luggage case is located is detected in the first image by the target detection algorithm and there is an overlapping area between the second target frame and the first target frame, determine that the luggage case in the second target frame is carried by the pedestrian in the first target frame; and if there is no second target frame in the first image that has an overlapping area with the first target frame, determine that the pedestrian in the first target frame does not carry a luggage case;
The first establishing module is used for establishing a first association relation between the pedestrian in the first image and the luggage case carried by the pedestrian if the luggage case is carried by the pedestrian in the first image, wherein the first association relation comprises an association relation between a characteristic value of an image of a first target frame corresponding to the pedestrian in the first image and an image of a second target frame corresponding to the luggage case carried by the pedestrian in the first image;
The second establishing module is used for establishing a second association relation between the pedestrian in the first image and the indication label if the pedestrian in the first image does not carry the luggage, wherein the indication label is used for indicating that the pedestrian in the first image does not carry the luggage, and the second association relation comprises an association relation between the characteristic value of the image of the first target frame corresponding to the pedestrian in the first image and the indication label;
The second acquisition module is used for acquiring a second image containing pedestrians acquired by a second camera arranged at the station exit;
The first generation module is used for generating first alarm information for indicating that the luggage is misled when detecting that the luggage is misled by the pedestrian in the second image based on the characteristic value of the image of the pedestrian in the second image or the characteristic value of the image of the luggage in the second image and the first association relation if the pedestrian in the second image carries the luggage;
The third generation module is used for generating third alarm information for indicating that the luggage is in error when detecting that the luggage is in error by the pedestrians in the second image based on the characteristic values of the images of the pedestrians in the second image and the second association relation if the pedestrians in the second image carry the luggage;
Wherein the apparatus further comprises: the third determining module is used for determining an alternative pedestrian corresponding to the alternative luggage case based on the first association relation when the alternative luggage case matched with the luggage case carried by the pedestrian in the second image is acquired in the first association relation if the pedestrian in the second image carries the luggage case; the second detection module is used for detecting whether the pedestrian in the second image is matched with the alternative pedestrian or not; a fourth determining module, configured to determine that a pedestrian in the second image is misdirected from the trunk if it is detected that the pedestrian in the second image does not match the candidate pedestrian;
Wherein the apparatus further comprises: a sixth determining module, configured to determine that, if the pedestrian in the second image carries a trunk, the pedestrian in the second image miscaptures the trunk when an alternative pedestrian matching the pedestrian in the second image is obtained in the second association relationship;
Wherein the second establishing module is configured to: extracting the face characteristics of the pedestrians in the first image to obtain face characteristic values of the pedestrians in the first image; acquiring identity information of pedestrians in the first image based on the face characteristic values; carrying out association processing on pedestrians in the first image and identity information of the pedestrians; establishing a second association relationship based on the first image and identity information of pedestrians in the first image after association processing;
Wherein, the third generating module is configured to: acquiring identity information corresponding to an alternative pedestrian matched with the pedestrian in the second image in the second association relation; and generating the third alarm information based on the identity information of the candidate pedestrian.
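The detection logic of the first acquisition module (keep pedestrian boxes at or above the area threshold, then test each against luggage boxes for overlap) can be sketched in plain Python. The axis-aligned box format and the upstream detector producing the boxes are assumptions, since the claim fixes neither:

```python
from dataclasses import dataclass

@dataclass
class Box:
    """Axis-aligned target frame from some external detector (assumed format)."""
    x1: float
    y1: float
    x2: float
    y2: float

    def area(self) -> float:
        return max(0.0, self.x2 - self.x1) * max(0.0, self.y2 - self.y1)

def overlaps(a: Box, b: Box) -> bool:
    # Two axis-aligned boxes overlap iff their projections overlap on both axes.
    return a.x1 < b.x2 and b.x1 < a.x2 and a.y1 < b.y2 and b.y1 < a.y2

def associate(pedestrian_boxes, luggage_boxes, area_threshold):
    """Keep pedestrian boxes whose area is >= the threshold, then pair each
    with every luggage box it overlaps; an empty list means 'no luggage'."""
    result = {}
    for i, p in enumerate(pedestrian_boxes):
        if p.area() >= area_threshold:
            result[i] = [j for j, l in enumerate(luggage_boxes) if overlaps(p, l)]
    return result
```

A pedestrian index mapped to an empty list corresponds to the claim's no-luggage branch, which the second establishing module then tags.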
8. The apparatus of claim 7, wherein the apparatus further comprises:
a first determining module, configured to determine, if the pedestrian in the second image carries a luggage case, a candidate luggage case corresponding to a candidate pedestrian based on the first association relationship when the candidate pedestrian matching the pedestrian in the second image is acquired from the first association relationship;
a first detection module, configured to detect whether the luggage case carried by the pedestrian in the second image matches the candidate luggage case; and
a second determining module, configured to determine that the pedestrian in the second image has wrongly collected the luggage case if the luggage case carried by the pedestrian in the second image does not match the candidate luggage case.
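Claims 7 and 8 describe two symmetric matching directions; the claim-7 direction (match the luggage case first, then check whether its owner matches the exiting pedestrian) can be sketched as below. Cosine similarity over feature vectors and the 0.8 threshold are illustrative assumptions — the claims only require that feature values "match":

```python
import math

def cosine_similarity(a, b):
    # Standard cosine similarity between two equal-length feature vectors.
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb) if na and nb else 0.0

def is_wrong_collection(exit_luggage_feat, exit_pedestrian_feat,
                        first_association, threshold=0.8):
    """Find the entry luggage best matching the exit luggage; if its owner's
    features do not match the exiting pedestrian, flag a wrong collection.
    Each record holds 'luggage' and 'pedestrian' feature vectors (assumed layout)."""
    best = max(first_association,
               key=lambda rec: cosine_similarity(rec["luggage"], exit_luggage_feat),
               default=None)
    if best is None or cosine_similarity(best["luggage"], exit_luggage_feat) < threshold:
        return False  # no candidate luggage case found among the entry records
    return cosine_similarity(best["pedestrian"], exit_pedestrian_feat) < threshold
```

The claim-8 direction simply swaps the roles: match the pedestrian first, then compare the carried luggage against the candidate luggage case.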
9. The apparatus of claim 7, wherein the apparatus further comprises:
a second generation module, configured to generate, if the pedestrian in the second image carries no luggage case, second alarm information indicating missed collection of the luggage case when it is detected, based on the first association relationship, that the pedestrian in the second image has failed to collect the luggage case.
10. The apparatus of claim 9, wherein the apparatus further comprises:
a fifth determining module, configured to determine, if the pedestrian in the second image carries no luggage case, that the pedestrian in the second image has failed to collect the luggage case when a candidate pedestrian matching the pedestrian in the second image is acquired from the first association relationship.
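The fifth determining module reduces to a single membership test: a pedestrian leaving empty-handed who matches an entry record in the first association relationship must have left a luggage case behind. A sketch with an injected feature comparator (the comparator and threshold are assumptions, not fixed by the claim):

```python
def is_missed_collection(exit_pedestrian_feat, first_association,
                         similarity, threshold=0.8):
    """True if the exiting pedestrian (carrying no luggage) matches any entry
    record in the first association relationship, i.e. a record of someone who
    did bring a luggage case in. 'similarity' is any feature comparator."""
    return any(similarity(rec["pedestrian"], exit_pedestrian_feat) >= threshold
               for rec in first_association)
```

Because the first association relationship only ever contains pedestrians who entered with luggage, a positive match here is sufficient to trigger the second (missed-collection) alarm.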
11. The apparatus of claim 9, wherein the first establishing module is configured to:
extract face features of the pedestrian in the first image to obtain a face feature value of the pedestrian in the first image;
acquire identity information of the pedestrian in the first image based on the face feature value;
associate the pedestrian in the first image with the identity information of the pedestrian; and
establish the first association relationship based on the first image and the identity information of the pedestrian in the first image after the association processing.
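The face-to-identity steps of claim 11 can be sketched as a nearest-enrolled-face lookup. The dict layout of the enrolled database and the similarity threshold are illustrative assumptions; the claim only requires that identity information be acquired from the face feature value:

```python
def build_identity_association(face_feats, identity_db, similarity, threshold=0.9):
    """For each detected face feature value, find the closest enrolled identity
    at or above the threshold and record the pairing; unmatched faces map to
    None. identity_db maps an identity key to its enrolled face features."""
    association = []
    for feat in face_feats:
        best_id, best_sim = None, threshold
        for identity, enrolled in identity_db.items():
            s = similarity(enrolled, feat)
            if s >= best_sim:
                best_id, best_sim = identity, s
        association.append((feat, best_id))
    return association
```

The resulting (feature value, identity) pairs are what the first and second association relationships carry forward, so exit-time alarms can name the matched passenger.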
12. The apparatus of claim 11, wherein the first generation module is configured to:
acquire, from the first association relationship, identity information corresponding to a candidate pedestrian matching the pedestrian in the second image; and
generate the first alarm information based on the identity information of the candidate pedestrian;
and wherein the second generation module is configured to:
acquire, from the first association relationship, identity information corresponding to a candidate pedestrian matching the pedestrian in the second image; and
generate the second alarm information based on the identity information of the candidate pedestrian.
13. A luggage case detection system, comprising: a first camera arranged at a station entrance, a second camera arranged at a station exit, and a verification server;
wherein the first camera is configured to capture a first image containing pedestrians and send the first image to the verification server;
the verification server is configured to acquire the first image sent by the first camera;
the verification server is configured to:
when a plurality of first target frames identifying regions where pedestrians are located are detected in the first image by a target detection algorithm, determine, among the plurality of first target frames, first target frames whose area is greater than or equal to an area threshold; for any determined first target frame, if a second target frame identifying a region where a luggage case is located is detected in the first image by the target detection algorithm and the second target frame overlaps the first target frame, determine that the luggage case in the second target frame is carried by the pedestrian in the first target frame; and if the first image contains no second target frame overlapping the first target frame, determine that the pedestrian in the first target frame carries no luggage case;
if a pedestrian in the first image carries a luggage case, establish a first association relationship between the pedestrian in the first image and the luggage case carried by the pedestrian, wherein the first association relationship comprises an association between a feature value of the image of the first target frame corresponding to the pedestrian in the first image and the image of the second target frame corresponding to the luggage case carried by the pedestrian in the first image; and if a pedestrian in the first image carries no luggage case, establish a second association relationship between the pedestrian in the first image and an indication tag, wherein the indication tag indicates that the pedestrian in the first image carries no luggage case, and the second association relationship comprises an association between the feature value of the image of the first target frame corresponding to the pedestrian in the first image and the indication tag;
the second camera is configured to capture a second image containing pedestrians and send the second image to the verification server;
the verification server is configured to acquire the second image sent by the second camera;
the verification server is configured to, if a pedestrian in the second image carries a luggage case, generate first alarm information indicating wrong collection of the luggage case when it is detected, based on the feature value of the image of the pedestrian in the second image or the feature value of the image of the luggage case in the second image together with the first association relationship, that the pedestrian has wrongly collected the luggage case, or generate third alarm information indicating erroneous collection of the luggage case when it is detected, based on the feature value of the image of the pedestrian in the second image and the second association relationship, that the pedestrian has erroneously collected the luggage case;
wherein the verification server is further configured to: if the pedestrian in the second image carries a luggage case, determine, when a candidate luggage case matching the luggage case carried by the pedestrian in the second image is acquired from the first association relationship, a candidate pedestrian corresponding to the candidate luggage case based on the first association relationship; detect whether the pedestrian in the second image matches the candidate pedestrian; and determine that the pedestrian in the second image has wrongly collected the luggage case if the pedestrian in the second image does not match the candidate pedestrian;
wherein the verification server is further configured to: if the pedestrian in the second image carries a luggage case, determine that the pedestrian in the second image has erroneously collected the luggage case when a candidate pedestrian matching the pedestrian in the second image is acquired from the second association relationship;
wherein the verification server is further configured to: extract face features of the pedestrian in the first image to obtain a face feature value of the pedestrian in the first image; acquire identity information of the pedestrian in the first image based on the face feature value; associate the pedestrian in the first image with the identity information of the pedestrian; and establish the second association relationship based on the first image and the identity information of the pedestrian in the first image after the association processing;
wherein the verification server is further configured to: acquire, from the second association relationship, identity information corresponding to a candidate pedestrian matching the pedestrian in the second image; and generate the third alarm information based on the identity information of the candidate pedestrian.
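Putting the system claim together, the verification server's entry/exit flow can be sketched as a small class. Feature extraction and matching are injected stubs, since the claims leave both unspecified; the alarm strings are illustrative:

```python
class VerificationServer:
    """Minimal sketch of the system flow: entry images build the two
    association relations; exit images are checked against them."""
    def __init__(self, extract, match):
        self.extract = extract   # image region -> feature value (injected)
        self.match = match       # (feature, feature) -> bool (injected)
        self.with_luggage = []   # first association: (pedestrian, luggage) pairs
        self.no_luggage = []     # second association: pedestrians tagged NO_LUGGAGE

    def on_entry(self, pedestrian_img, luggage_img=None):
        feat = self.extract(pedestrian_img)
        if luggage_img is None:
            self.no_luggage.append(feat)
        else:
            self.with_luggage.append((feat, self.extract(luggage_img)))

    def on_exit(self, pedestrian_img, luggage_img=None):
        p = self.extract(pedestrian_img)
        if luggage_img is None:
            # Empty-handed exit matching an entry-with-luggage record.
            if any(self.match(f, p) for f, _ in self.with_luggage):
                return "second alarm: missed collection"
            return None
        l = self.extract(luggage_img)
        # Exiting with luggage despite entering without any.
        if any(self.match(f, p) for f in self.no_luggage):
            return "third alarm: entered without luggage"
        # Luggage matches an entry record but the carrier does not.
        for pf, lf in self.with_luggage:
            if self.match(lf, l) and not self.match(pf, p):
                return "first alarm: wrong collection"
        return None
```

With exact-match stubs, a passenger who checked in a case and leaves with it raises no alarm, while a stranger leaving with that case raises the wrong-collection alarm.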
14. A computer-readable storage medium having stored therein at least one instruction that is loaded and executed by a processor to implement the luggage case verification method of any one of claims 1 to 6.
CN201910577854.9A 2019-06-28 2019-06-28 Luggage case verification method, device, system and storage medium Active CN112149475B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910577854.9A CN112149475B (en) 2019-06-28 2019-06-28 Luggage case verification method, device, system and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910577854.9A CN112149475B (en) 2019-06-28 2019-06-28 Luggage case verification method, device, system and storage medium

Publications (2)

Publication Number Publication Date
CN112149475A CN112149475A (en) 2020-12-29
CN112149475B true CN112149475B (en) 2024-06-04

Family

ID=73869559

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910577854.9A Active CN112149475B (en) 2019-06-28 2019-06-28 Luggage case verification method, device, system and storage medium

Country Status (1)

Country Link
CN (1) CN112149475B (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113487257A (en) * 2021-06-30 2021-10-08 中国民航信息网络股份有限公司 Luggage management method, related device and storage medium

Citations (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6158658A (en) * 1997-08-27 2000-12-12 Laser Data Command, Inc. System and method for matching passengers and their baggage
JP2011170527A (en) * 2010-02-17 2011-09-01 Toshiba Corp Forgetting prevention system using ticket gate, and baggage information management part and ticket gate in this system
CN104104928A (en) * 2014-08-04 2014-10-15 河海大学常州校区 Vehicle luggage storage video monitoring reminding system and vehicle using same
CN104182857A (en) * 2014-08-14 2014-12-03 深圳市威富安防有限公司 Luggage two-dimension code identification method and device
CN106161030A (en) * 2015-04-23 2016-11-23 腾讯科技(深圳)有限公司 Account based on image recognition registration checking is asked and registers verification method and device
CN106934326A (en) * 2015-12-29 2017-07-07 同方威视技术股份有限公司 Method, system and equipment for safety inspection
CN108335390A (en) * 2018-02-02 2018-07-27 百度在线网络技术(北京)有限公司 Method and apparatus for handling information
CN109254328A (en) * 2018-02-24 2019-01-22 北京首都机场航空安保有限公司 A kind of luggage security check system
CN109740537A (en) * 2019-01-03 2019-05-10 广州广电银通金融电子科技有限公司 The accurate mask method and system of pedestrian image attribute in crowd's video image
CN109785209A (en) * 2018-12-05 2019-05-21 曾维 A kind of customs supervision method and system
CN209015233U (en) * 2018-09-26 2019-06-21 杭州海康威视数字技术股份有限公司 A kind of prompt system for prompting safety check and a kind of safe examination system

Also Published As

Publication number Publication date
CN112149475A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN109325964B (en) Face tracking method and device and terminal
US11709282B2 (en) Asset tracking systems
CN109753928B (en) Method and device for identifying illegal buildings
US11348371B2 (en) Person detection system
CN108694399B (en) License plate recognition method, device and system
KR101987618B1 (en) Vehicle license plate specific system applying deep learning-based license plate image matching technique
US20210056312A1 (en) Video blocking region selection method and apparatus, electronic device, and system
US11948366B2 (en) Automatic license plate recognition (ALPR) and vehicle identification profile methods and systems
CN111814510B (en) Method and device for detecting legacy host
EP3786836B1 (en) Article identification and tracking
CN109547748B (en) Object foot point determining method and device and storage medium
US10867511B2 (en) Apparatus and method for identifying license plate tampering
US10423817B2 (en) Latent fingerprint ridge flow map improvement
CN106650623A (en) Face detection-based method for verifying personnel and identity document for exit and entry
CN109034041B (en) Case identification method and device
CN110633642A (en) Identity information verification method and device, terminal equipment and storage medium
CN112149475B (en) Luggage case verification method, device, system and storage medium
CN106327876B (en) A kind of fake-licensed car capture system and method based on automobile data recorder
CN111126112B (en) Candidate region determination method and device
KR101240617B1 (en) Licence plate recognition system and method using dualized recognition algorithm
CN109448193A (en) Identity information recognition methods and device
US20220309809A1 (en) Vehicle identification profile methods and systems at the edge
CN110119769A (en) A kind of detection method for early warning based on multi-modal vehicle characteristics
CN110097758A (en) Information of vehicles output, storage method and device
CN103646483A (en) Compound scanning and image recording based safe picking system and method for airport luggage

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant