CN112597886A - Ride fare evasion detection method and device, electronic equipment and storage medium - Google Patents

Ride fare evasion detection method and device, electronic equipment and storage medium

Info

Publication number
CN112597886A
CN112597886A (application number CN202011529962.8A)
Authority
CN
China
Prior art keywords: identity, image, target object, face, feature
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011529962.8A
Other languages
Chinese (zh)
Inventor
蒋小可
丁思杰
鲍纪奎
季聪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Chengdu Sensetime Technology Co Ltd
Original Assignee
Chengdu Sensetime Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Chengdu Sensetime Technology Co Ltd filed Critical Chengdu Sensetime Technology Co Ltd
Priority to CN202011529962.8A
Publication of CN112597886A
Priority to PCT/CN2021/086701 (WO2022134388A1)

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/161 Detection; Localisation; Normalisation
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00 Pattern recognition
    • G06F18/20 Analysing
    • G06F18/22 Matching criteria, e.g. proximity measures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/25 Determination of region of interest [ROI] or a volume of interest [VOI]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/103 Static body considered as a whole, e.g. static pedestrian or occupant recognition
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10 Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • G06V40/16 Human faces, e.g. facial parts, sketches or expressions
    • G06V40/168 Feature extraction; Face representation

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Multimedia (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Evolutionary Computation (AREA)
  • Oral & Maxillofacial Surgery (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Engineering & Computer Science (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Molecular Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Computational Linguistics (AREA)
  • Biophysics (AREA)
  • Biomedical Technology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Evolutionary Biology (AREA)
  • Image Analysis (AREA)

Abstract

The disclosure relates to a ride fare evasion detection method and device, an electronic device, and a storage medium. The method includes: acquiring a first identity feature obtained by recognizing a first image of a target object; acquiring a second image when the first identity feature meets a preset occlusion condition, the first image and the second image being captured of the target object at different shooting angles; recognizing the second image to obtain a second identity feature of the second image; associating the first identity feature with the second identity feature; and recognizing the target object based on the associated second identity feature, and confirming that the target object has not evaded the fare when the identity of the target object is successfully recognized. The method and device can improve the accuracy of fare evasion recognition.

Description

Ride fare evasion detection method and device, electronic equipment and storage medium
Technical Field
The present disclosure relates to the field of computer technologies, and in particular, to a ride fare evasion detection method and apparatus, an electronic device, and a storage medium.
Background
In recent years, face recognition technology has developed rapidly across industries, and some cities combine face recognition with traditional ticketing and ticket checking to let users pass in and out of stations seamlessly. When entering or leaving a station, a user's identity can be confirmed by recognizing the user's face image, without stopping or presenting a ticket, and the corresponding fare can be deducted automatically from the user's account. Compared with traditional ticketing and ticket checking, this approach speeds up ticket checking and reduces congestion at peak hours.
In such seamless-passage scenarios, events that harm the interests of other users, such as fare evasion, may still occur. Identifying these events currently requires manual inspection by on-site personnel, which increases labor costs and is prone to missed detections.
Disclosure of Invention
The present disclosure provides a technical solution for ride fare evasion detection.
According to an aspect of the present disclosure, there is provided a ride fare evasion detection method, including:
acquiring a first identity feature obtained by recognizing a first image of a target object; acquiring a second image when the first identity feature meets a preset occlusion condition, wherein the first image and the second image are captured of the target object at different shooting angles; recognizing the second image to obtain a second identity feature of the second image; associating the first identity feature with the second identity feature; and recognizing the target object based on the associated second identity feature, and confirming that the target object has not evaded the fare when the identity of the target object is successfully recognized.
In some possible implementations, the method further includes: recognizing the target object based on the associated second identity feature, and confirming that the target object has evaded the fare when the identity of the target object cannot be recognized.
In some possible implementations, the method further includes: confirming that the target object has evaded the fare when the first identity feature cannot be recognized from the first image.
In some possible implementations, the method further includes: receiving an infrared signal, wherein the infrared signal is triggered when the target object leaves an identification area where identity recognition is performed; and acquiring and storing pictures and/or videos of the target object passing through the identification area according to the receiving time of the infrared signal.
In some possible implementations, the method further includes: searching for an identity recognition record within a preset time period around the receiving time; and confirming that the target object has evaded the fare when no identity recognition record of the target object exists within the preset time period.
In some possible implementations, the occlusion condition includes at least one of: the first identity feature does not contain a face feature; the quality score of the face corresponding to the first identity characteristic is smaller than a preset quality score threshold value; and the shielding area of the face corresponding to the first identity characteristic is larger than a preset area threshold value.
In some possible implementations, the recognizing the second image to obtain the second identity feature of the second image includes: performing face and human body detection on the second image to obtain a detection result of at least one first object; and screening the detection result of the at least one first object according to a preset screening condition to obtain the second identity feature of the target object.
In some possible implementations, the screening condition includes at least one of: the position of the first object is within a preset identification area; the image area of the first object in the second image is the largest; the first object is closest to a preset object.
In some possible implementations, the performing face and human body detection on the second image to obtain a detection result of at least one first object includes: performing face detection and human body detection on the second image to obtain at least one face and at least one human body; and associating the at least one face and the at least one human body of the second image to obtain the detection result of the at least one first object.
In some possible implementations, the associating the first identity feature and the second identity feature includes: matching human body features included in the first identity feature with human body features included in the second identity feature; and associating the first identity feature with the second identity feature when the human body features of the first identity feature match the human body features of the second identity feature.
In some possible implementations, the method further includes: when the human body features of the first identity feature do not match the human body features of the second identity feature, re-acquiring a second image based on the acquisition time of the first image, and associating the first identity feature with the second identity feature of the re-acquired second image, until a preset number of association attempts is reached or the human body features of the first identity feature match the human body features of the second identity feature.
In some possible implementations, the identifying the target object based on the associated second identity feature includes: and identifying the target object based on the face features included in the associated second identity features.
In some possible implementations, the method further includes: when the identity of the target object is successfully recognized, generating and storing an identity recognition record of the target object, wherein the identity recognition record includes one or more of a recognition time, user information, and a recognition place.
In some possible implementations, the first image is acquired at a first location at a shooting angle toward the target object, and the second image is acquired at a second location at a shooting angle toward the target object, wherein the second location is above the first location.
According to an aspect of the present disclosure, there is provided a ride fare evasion detection apparatus, including:
the first acquisition module is used for acquiring a first identity characteristic obtained by identifying a first image of a target object;
the second acquisition module is used for acquiring a second image under the condition that the first identity characteristic meets a preset shielding condition, wherein the first image and the second image are acquired by aiming at a target object at different shooting angles;
the identification module is used for identifying the second image to obtain a second identity characteristic of the second image;
an association module for associating the first identity characteristic with the second identity characteristic;
and the determining module is used for recognizing the target object based on the associated second identity feature, and confirming that the target object has not evaded the fare when the identity of the target object is successfully recognized.
In some possible implementations, the determining module is further configured to recognize the target object based on the associated second identity feature, and confirm that the target object has evaded the fare when the identity of the target object cannot be recognized.
In some possible implementations, the determining module is further configured to confirm that the target object has evaded the fare when the first identity feature cannot be recognized from the first image.
In some possible implementations, the apparatus further includes: an infrared trigger module configured to receive an infrared signal, wherein the infrared signal is triggered when the target object leaves an identification area where identity recognition is performed; and to acquire and store pictures and/or videos of the target object passing through the identification area according to the receiving time of the infrared signal.
In some possible implementations, the infrared trigger module is further configured to search for an identity recognition record within a preset time period around the receiving time; and to confirm that the target object has evaded the fare when no identity recognition record of the target object exists within the preset time period.
In some possible implementations, the occlusion condition includes at least one of: the first identity feature does not contain a face feature; the quality score of the face corresponding to the first identity characteristic is smaller than a preset quality score threshold value; and the shielding area of the face corresponding to the first identity characteristic is larger than a preset area threshold value.
In some possible implementations, the recognition module is configured to perform face and human body detection on the second image to obtain a detection result of at least one first object; and to screen the detection result of the at least one first object according to a preset screening condition to obtain the second identity feature of the target object.
In some possible implementations, the screening condition includes at least one of: the position of the first object is within a preset identification area; the image area of the first object in the second image is the largest; the first object is closest to a preset object.
In some possible implementation manners, the recognition module is configured to perform face detection and human body detection on the second image to obtain at least one face and at least one human body; and associating at least one face and at least one human body of the second image to obtain a detection result of the at least one first object.
In some possible implementations, the association module is configured to match human body features included in the first identity feature with human body features included in the second identity feature; and to associate the first identity feature with the second identity feature when the human body features of the first identity feature match the human body features of the second identity feature.
In some possible implementation manners, the association module is further configured to, when the human body characteristic of the first identity characteristic does not match the human body characteristic of the second identity characteristic, based on the acquisition time of the first image, re-acquire the second image to associate the first identity characteristic with the second identity characteristic of the re-acquired second image until a preset number of times of association is reached or the human body characteristic of the first identity characteristic matches the human body characteristic of the second identity characteristic.
In some possible implementations, the determining module is configured to identify the target object based on a face feature included in the associated second identity feature.
In some possible implementations, the apparatus further includes: a generating module configured to generate and store an identity recognition record of the target object when the identity of the target object is successfully recognized, wherein the identity recognition record includes one or more of a recognition time, user information, and a recognition place.
In some possible implementations, the first image is acquired at a first location at a shooting angle toward the target object, and the second image is acquired at a second location at a shooting angle toward the target object, wherein the second location is above the first location.
According to an aspect of the present disclosure, there is provided an electronic device including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
According to an aspect of the present disclosure, there is provided a computer readable storage medium having stored thereon computer program instructions which, when executed by a processor, implement the above-described method.
In the embodiments of the present disclosure, a first identity feature obtained by recognizing a first image of a target object may be acquired, and a second image may be acquired when the first identity feature meets a preset occlusion condition. The first identity feature obtained from the first image is then associated with the second identity feature obtained from the second image, the target object is recognized based on the associated second identity feature, and the target object is confirmed not to have evaded the fare when its identity is successfully recognized. In this way, when the target object cannot be recognized through the first image, its identity can still be recognized through the second image associated with the first image, reducing cases where identity cannot be recognized because the target object deliberately covers its face, and improving the accuracy of fare evasion recognition.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the disclosure. Other features and aspects of the present disclosure will become apparent from the following detailed description of exemplary embodiments, which proceeds with reference to the accompanying drawings.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the present disclosure and, together with the description, serve to explain the principles of the disclosure.
Fig. 1 shows a flow chart of a ride fare evasion detection method according to an embodiment of the disclosure.
Fig. 2 shows a schematic of a first position and a second position according to an embodiment of the present disclosure.
Fig. 3 shows a flowchart of an example of a ride fare evasion detection method according to an embodiment of the present disclosure.
Fig. 4 shows a block diagram of a ride fare evasion detection device according to an embodiment of the present disclosure.
Fig. 5 shows a block diagram of an example of an electronic device in accordance with an embodiment of the present disclosure.
FIG. 6 shows a block diagram of an example of an electronic device in accordance with an embodiment of the present disclosure.
Detailed Description
Various exemplary embodiments, features and aspects of the present disclosure will be described in detail below with reference to the accompanying drawings. In the drawings, like reference numbers can indicate functionally identical or similar elements. While the various aspects of the embodiments are presented in drawings, the drawings are not necessarily drawn to scale unless specifically indicated.
The word "exemplary" is used exclusively herein to mean "serving as an example, embodiment, or illustration. Any embodiment described herein as "exemplary" is not necessarily to be construed as preferred or advantageous over other embodiments.
The term "and/or" herein is merely an association describing an associated object, meaning that three relationships may exist, e.g., a and/or B, may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the term "at least one" herein means any one of a plurality or any combination of at least two of a plurality, for example, including at least one of A, B, C, and may mean including any one or more elements selected from the group consisting of A, B and C.
Furthermore, in the following detailed description, numerous specific details are set forth in order to provide a better understanding of the present disclosure. It will be understood by those skilled in the art that the present disclosure may be practiced without some of these specific details. In some instances, methods, means, elements and circuits that are well known to those skilled in the art have not been described in detail so as not to obscure the present disclosure.
The ride fare evasion detection scheme provided by the embodiments of the present disclosure can be applied to fare evasion recognition scenarios such as rail transit and scenic spots. For example, at a subway entrance, entering passengers can be photographed from different shooting angles and the two resulting images recognized; even if an entering passenger maliciously avoids the camera or deliberately covers the face to evade the fare, so that the passenger cannot be recognized in one image, the passenger can still be identified from the image captured at the other shooting angle. This reduces the occurrence of fare evasion, saves human resources, and reduces missed detections caused by inspector fatigue. The scheme is also suitable for recognition scenarios with heavy pedestrian flow, meeting fare evasion recognition needs in a variety of settings.
The ride fare evasion detection method provided by the embodiments of the present disclosure may be executed by a terminal device, a server, or another type of electronic device, where the terminal device may be a user equipment (UE), a mobile device, a user terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, an in-vehicle device, a wearable device, or the like. In some possible implementations, the method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be performed by a server. The following describes the ride fare evasion detection method of the embodiments of the present disclosure with an electronic device as the execution subject.
Fig. 1 shows a flowchart of a ride fare evasion detection method according to an embodiment of the present disclosure, and as shown in fig. 1, the ride fare evasion detection method includes:
step S11, a first identity feature obtained by recognizing the first image of the target object is obtained.
In embodiments of the present disclosure, an electronic device may acquire a first image captured of a target object. The target object may be the object currently requiring identity recognition in the shooting scene. The shooting scene may include a plurality of objects, such as pedestrians, users, or passengers. In some implementations, the electronic device may have a shooting function and may photograph the target object at a certain shooting angle to obtain a first image captured at a first shooting angle. In some implementations, the electronic device may acquire a first image captured by a first camera at a first shooting angle. In some implementations, the first image may also be a video frame of a video; for example, the electronic device may acquire a first video captured by the first camera at the first shooting angle and then extract a video frame of the first video as the first image. For example, the electronic device may decode a single frame from a 1080p video stream of the first video and convert it to RGB format to obtain the first image and/or the second image.
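As an illustration of this frame-extraction step, the following is a minimal sketch using OpenCV; the patent does not name a decoding library, so the use of cv2 and the stream source are assumptions.

    import cv2

    def grab_rgb_frame(stream_source):
        """Decode one frame from a video stream and convert it to RGB.

        stream_source is a hypothetical RTSP URL or camera index; OpenCV
        decodes frames in BGR order, hence the color conversion below.
        """
        cap = cv2.VideoCapture(stream_source)
        ok, frame_bgr = cap.read()
        cap.release()
        if not ok:
            return None  # stream unavailable or frame could not be decoded
        return cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2RGB)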
In the embodiments of the present disclosure, the first image may be recognized, for example, through a recognition algorithm or a neural network, to obtain the first identity feature. In some implementations, the face and the human body in the first image may be recognized, and the obtained first identity feature includes a face feature and/or a human body feature. In some implementations, the electronic device may also directly obtain, from another device, the first identity feature obtained by recognizing the first image of the target object. The first identity feature may be a feature representing the identity of the target object and may include a face feature and/or a human body feature.
In some implementations, the target object may be confirmed to have evaded the fare when the first identity feature cannot be recognized from the first image. Here, when the first image is acquired, the target object may be recognized through the first image, that is, recognition may be performed on the first image. If the first identity feature of the target object cannot be recognized from the first image, the target object is considered unrecognizable through the first image, and the target object is confirmed to have evaded the fare.
Step S12, acquiring a second image under the condition that the first identity feature meets a preset occlusion condition, wherein the first image and the second image are captured of the target object at different shooting angles;
in the embodiment of the disclosure, it may be determined whether the first identity feature meets a preset occlusion condition, and if the first identity feature meets the preset occlusion condition, it may be considered that the target object cannot be identified through the first identity feature, so that the associated second image may be further obtained, and the second image is used to perform identity verification on the target object.
Here, the first image and the second image are acquired at different shooting angles with respect to the target object, and the shooting angle may be understood as a relative angle of view at which the shooting device acquires the image with respect to the target object, for example, a shooting angle of a flat shot, a tilt shot, or the like. The electronic device may acquire a second image captured by the camera at a second shooting angle, where the first shooting angle and the second shooting angle are different. The second image may also be a video frame in a video.
In one example, the first image is captured at a first position at a shooting angle facing the target object, and the second image is captured at a second position at a shooting angle facing the target object, so that both images capture the front side of the target object. Here, the second position is located above the first position, and the first image and the second image can each show the front of the target object from a different viewing angle; even if the front of the target object is partly occluded at one shooting angle, it can be exposed at the other shooting angle, so that the target object can be identified through a frontal image.
Fig. 2 shows a schematic of a first position and a second position according to an embodiment of the present disclosure. The first position and the second position may be as shown in fig. 2, and photographing devices may be provided at the first position and the second position, respectively, and may photograph facing the target object, to obtain a first image and a second image, respectively. For example, when a target object enters a subway station, the photographing devices disposed at the first position and the second position may respectively photograph the target object, resulting in a first image captured at the first position toward the target object and a second image captured at the second position toward the target object.
In some implementations, the preset occlusion condition may include at least one of the following: the first identity feature does not contain a face feature; the face quality corresponding to the first identity feature is lower than a preset quality threshold; or the occluded area of the face corresponding to the first identity feature is larger than a preset area threshold. When the first identity feature does not contain a face feature, the identity of the target object cannot be recognized through the first identity feature. When the face quality corresponding to the first identity feature is lower than the preset quality threshold, the face quality provided by the first image is considered too low, and the target object may not be recognizable from the face in the first image. When the occluded area of the face corresponding to the first identity feature is larger than the preset area threshold, the face provided by the first image is considered incomplete, and key facial regions such as the eyes, nose, and mouth may be occluded, so the target object cannot be recognized from the face in the first image. The preset occlusion condition thus serves as a prior condition for judging whether the first identity feature can identify the target object, and this judgment can be made quickly, improving recognition efficiency.
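A minimal sketch of this occlusion check is given below, assuming the recognition step has already produced a face feature (possibly None), a face quality score, and an occluded-area ratio; the threshold values are illustrative, as the patent leaves the preset values open.

    # Hypothetical preset thresholds; the patent does not fix their values.
    QUALITY_THRESHOLD = 0.6   # preset face quality score threshold
    AREA_THRESHOLD = 0.3      # preset occluded-area ratio threshold

    def meets_occlusion_condition(face_feature, face_quality, occluded_ratio):
        """True if the first identity feature cannot support face recognition."""
        return (face_feature is None                 # no face feature extracted
                or face_quality < QUALITY_THRESHOLD  # face quality too low
                or occluded_ratio > AREA_THRESHOLD)  # face too heavily occluded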
Step S13, recognizing the second image to obtain a second identity characteristic of the second image;
in the embodiment of the present disclosure, the second image may be identified, for example, the second image may be identified through some identification algorithm or a neural network, so as to obtain the identified second identity feature. In some implementations, the face and the human body in the second image may be identified, and the obtained second identity feature includes a face feature and/or a human body feature.
Step S14, associating the first identity characteristic with the second identity characteristic;
in this embodiment of the disclosure, the first identity characteristic may be matched with the second identity characteristic, and if the first identity characteristic and the second identity characteristic are matched, the first identity characteristic and the second identity characteristic may be considered to belong to the same target object, that is, the first identity characteristic and the second identity characteristic may be associated as identity characteristics of the same target object.
In some implementations, the human body features included in the first identity feature may be matched with the human body features included in the second identity feature, and when they match, the first identity feature may be associated with the face features of the second identity feature.
Here, since the first identity feature may not provide a face feature usable for identity recognition, the human body features included in the first identity feature may be matched with the human body features included in the second identity feature, for example, by computing a distance between them, such as a Euclidean distance or a cosine distance. When the distance is smaller than or equal to a preset value, the human body features of the first identity feature are determined to match those of the second identity feature, that is, the first identity feature and the second identity feature belong to the same target object. The face features of the second identity feature can then be associated with the human body features of the first identity feature as the face and body features of the same target object, realizing the association between the first identity feature and the second identity feature, so that identity recognition can be performed through the face features included in the second identity feature.
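A minimal sketch of this matching step, using the cosine-similarity variant (the Euclidean-distance variant is analogous); the feature dimensionality and threshold are assumptions, not values from the patent.

    import numpy as np

    SIMILARITY_THRESHOLD = 0.7  # hypothetical preset value for a match

    def body_features_match(feat_a: np.ndarray, feat_b: np.ndarray) -> bool:
        """Match two human body feature vectors by cosine similarity."""
        sim = float(np.dot(feat_a, feat_b)
                    / (np.linalg.norm(feat_a) * np.linalg.norm(feat_b)))
        return sim >= SIMILARITY_THRESHOLD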
Here, when the distance between the human body feature of the first identity feature and the human body feature of the second identity feature is greater than the preset value, it may be considered that the human body feature of the first identity feature is not matched with the human body feature of the second identity feature, that is, it may be considered that the first identity feature and the second identity feature belong to different target objects, and the first identity feature and the second identity feature cannot be associated with each other.
Accordingly, under the condition that the human body characteristics of the first identity characteristics are not matched with the human body characteristics of the second identity characteristics, the second image can be obtained again based on the acquisition time of the first image, so that the first identity characteristics and the second identity characteristics of the obtained second image are associated until the preset association times are reached or the human body characteristics of the first identity characteristics are matched with the human body characteristics of the second identity characteristics. For example, a preset time period in which the acquisition time of the first image is located may be determined, and then the second image acquired within the preset time period may be acquired again, that is, it may be understood that the second image within a period of time before and after the acquisition time of the first image may be acquired, so that the second image having the same target object as the first image is acquired as much as possible. Here, the preset time period and the preset number of times of association may be set according to an actual application scenario, for example, the preset time period may be set to be 20s, 30s, and the like, and the number of times of association may be set to be 3 to 10, and the like.
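The retry logic described above might look like the following sketch, which reuses body_features_match from the previous sketch; fetch_second_identity is a hypothetical callback returning the second identity feature detected within the given time window (or None), and the attribute names on the identity objects are likewise assumptions.

    MAX_ATTEMPTS = 5     # hypothetical preset number of association attempts
    TIME_WINDOW_S = 30   # hypothetical window around the first image's capture time

    def associate_with_retry(first_identity, fetch_second_identity):
        """Re-acquire second images until a body match or the attempt limit."""
        for _ in range(MAX_ATTEMPTS):
            second = fetch_second_identity(first_identity.capture_time, TIME_WINDOW_S)
            if second is not None and body_features_match(
                    first_identity.body_feature, second.body_feature):
                return second  # associated: its face feature can now be used
        return None            # association failed within the preset count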
Step S15, recognizing the target object based on the associated second identity feature, and confirming that the target object has not evaded the fare when the identity of the target object is successfully recognized.
In the embodiments of the present disclosure, the face features of the target object may be determined from the second identity feature associated with the first identity feature, and the target object is then recognized based on those face features to determine the identity information of the target object. Upon successfully recognizing the identity of the target object, the target object may be confirmed not to have evaded the fare. For example, in a subway station-entry scenario, the face features included in the second identity feature associated with the first identity feature may be compared with the face features of pre-stored passengers in a database; when they match the face features of a pre-stored passenger, the target object is considered to be that passenger, its identity information can be obtained, the target object is successfully recognized, and it is determined that the target object has not evaded the fare.
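A minimal sketch of this database comparison, assuming each pre-stored passenger record carries a face feature vector; the similarity measure and threshold are assumptions.

    import numpy as np

    def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def recognize(face_feature, passenger_db, threshold=0.75):
        """Return the best-matching pre-stored passenger, or None if no
        passenger's face feature exceeds the (hypothetical) threshold."""
        best, best_sim = None, threshold
        for passenger in passenger_db:  # records with a .face_feature vector
            sim = cosine_similarity(face_feature, passenger.face_feature)
            if sim >= best_sim:
                best, best_sim = passenger, sim
        return best  # None means the identity cannot be recognized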
Here, the identity information may include information such as a face image, a user identification number, and an associated account number. In some implementations, in the case of successfully recognizing the identity of the target object, an identification record of the target object may be generated and saved, and the identification record may include one or more of a recognition time, user information, and a recognition location, so that an information basis may be provided for subsequently invoking relevant information of the target object. For example, in a rail transit scene, when a target object enters a subway station, the target object may be subjected to identity recognition, and when the identity of the target object is successfully recognized, a riding record (identity recognition record) of the target object may be generated and stored.
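The identity recognition record might be structured as follows; the field names mirror the items listed above (recognition time, user information, recognition place) and are otherwise assumptions.

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class IdentificationRecord:
        recognized_at: datetime  # recognition time
        user_info: str           # e.g. user identification number or account
        location: str            # recognition place, e.g. a station gate ID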
In some implementations, after the identity information of the target object is determined in a rail transit scenario, the corresponding fare can be deducted automatically from the associated account included in the identity information according to the consumption information of the target object, so that the user can pass in and out of the station seamlessly.
In some implementations, the target object is recognized based on the associated second identity feature, and when the identity of the target object cannot be recognized, the target object is confirmed to have evaded the fare. For example, the face features included in the second identity feature associated with the first identity feature may be compared with the face features of pre-stored passengers in the database; when they do not match the face features of any pre-stored passenger, the identity of the target object is considered unrecognizable, and the target object may be considered to have evaded the fare.
In some implementations, an infrared generating device may further be disposed at the exit boundary of the recognition area where the target object undergoes identity recognition; an infrared signal is triggered when the target object leaves the recognition area, so that identity recognition can be assisted by the infrared signal. When the target object leaves the recognition area, the electronic device receives the infrared signal and then acquires and stores pictures and/or videos of the target object passing through the recognition area according to the receiving time of the infrared signal. Pictures and/or videos captured at different shooting angles within a preset time period can serve as evidence for subsequent fare evasion determination, handling of erroneous-deduction complaints, or on-site proof.
In one example, after the infrared signal is received, the identity recognition records within a preset time period around the receiving time may also be searched, that is, the records stored within a period before and after the receiving time. If no identity recognition record exists within the preset time period, it indicates that a target object left the recognition area at the receiving time without being recorded; the identity recognition of the target object is considered to have failed, or the target object deliberately evaded identity recognition, and a fare-evading target object is confirmed to exist. Further, the pictures and/or videos collected within the preset time period can be acquired and stored as evidence of the target object's fare evasion or as on-site proof. In a subway station-entry scenario, pictures and/or videos of the entry site can be stored through infrared signal triggering, assisting scene reconstruction during fare evasion investigation.
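A minimal sketch of this infrared-assisted check, assuming the saved records are IdentificationRecord objects as sketched above; the window length is illustrative.

    from datetime import datetime, timedelta

    WINDOW = timedelta(seconds=20)  # hypothetical preset period around the receive time

    def fare_evasion_suspected(received_at: datetime, records) -> bool:
        """True if no identity recognition record exists in the preset
        window around the infrared signal's receiving time."""
        for rec in records:
            if received_at - WINDOW <= rec.recognized_at <= received_at + WINDOW:
                return False  # a record exists: a recognized passage
        return True           # no record: flag evasion, save pictures/videos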
In some implementations, when an identity recognition record exists within the preset time period, it indicates that the target object that left the recognition area at the receiving time has an identity recognition record; identity recognition of the target object is considered successful, and the video collected within the preset time period may still be acquired and saved. That is, the target object triggers the infrared signal whether it enters the subway station through normal passage or while evading the fare.
In a seamless-passage scenario, a passenger's identity can thus be confirmed from pictures taken at different angles without deploying ticket-checking gates, improving the accuracy of passenger identity recognition in such scenarios.
In step S13, the second image may be recognized to obtain a second identity feature of the second image. An implementation manner for recognizing the second image to obtain the second identity feature of the second image is provided below.
In some implementations, face and human body detection may be performed on the second image to obtain a detection result of at least one first object. For example, the second image may be input to a convolutional neural network for face and human body detection, and the network outputs the detection result of the at least one first object. A first object may be a pedestrian, a passenger, or the like, and the target object may be among the plurality of first objects. Since the detection yields results for at least one first object, the target object still needs to be determined among them, so in the detection process the detected faces and bodies may be filtered using a preset screening condition to reduce target object detection errors. The detection result of the at least one first object is screened according to the preset screening condition: the faces and bodies detected in the second image that do not belong to the target object are filtered out, leaving the detection result of the target object. Then, according to the detection result of the target object, the second identity feature of the target object can be extracted from the image region where the target object is located. When the detection result includes multiple faces and bodies, those not belonging to the target object can be filtered out quickly through the preset screening condition, so that the target object can be accurately determined among the plurality of first objects.
Here, the detection result of the face and human body detection may include the detection boxes and position information of the face and body of the same object. A detection box identifies the face and body of one object in the image, so the face and body of the same object can be identified visually. The position information may indicate where an object is located; the position may be an image position, or a spatial position in a world coordinate system, for example, an image position may be converted into a spatial position according to the conversion relationship between image coordinates and world coordinates. In some implementations, the detection result may further include information such as the quality of the face and body, the size of the detection box, an object identification number (ID), and the occluded area of the face. The quality of the face and body may serve as a prior condition for whether identity recognition can be performed; for example, if the face quality of the target object is lower than a certain quality threshold, the face quality provided by the first image or the second image is considered too low for identity recognition, and the first image or the second image of the target object may be acquired again. The size of the detection box may include its length and width and, in some examples, the area of the image region it covers. The object identification number may be unique to one object; in some examples, the detection results of multiple first images or second images may be tracked, for example, matched according to face features, so that the detection results of the target object across multiple images are determined and assigned the same object identification number. The occluded area of the face may likewise serve as a prior condition for whether identity recognition can be performed; for example, if the occluded area of the target object's face is larger than a certain area threshold, the face provided by the image is considered unusable for identity recognition, and the first image or the second image of the target object may be acquired again.
Here, the filtering condition may be set according to an actual application scenario, and the present disclosure does not limit a specific filtering condition. In a rail transit scene, the screening condition comprises one or more of the position of the first object in a preset identification area, the image area of the first object in the second image is maximum, and the distance between the first object and the preset object is nearest. The process of identifying the first image to obtain the first identity characteristic of the first image may be the same as the process of obtaining the second identity characteristic of the second image, and details are not repeated here.
In some implementations, the target object may be a passenger entering a subway station. When the target object enters the station, it may be photographed to obtain the first image and/or the second image. For the detection results of the plurality of first objects in either image, whether a first object is within the preset identification area can be judged from its detection result; the preset identification area may be the card-swiping area of the subway station, and the target object is expected to be within it, so whether a first object is the target object can be judged by whether it is within the card-swiping area. Since the first image and the second image are captured facing the target object, and the photographing device is generally disposed near the card-swiping area so as to capture the target object's face clearly, the image area of the target object is usually the largest among the plurality of first objects. It is therefore possible to judge, from the detection result of the at least one first object, whether the image region of a first object is the largest; if so, that first object may be the target object. In some implementations, the target object may also be the object closest to a preset object, such as the photographing device or the card-swiping device, so whether a first object is closest to the preset object can be judged from the detection result; if so, that first object may be the target object.
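The screening step might be sketched as follows, assuming each detection carries a position, a bounding-box area, and a distance to the preset object (e.g. the card-swiping device); all of these accessors are hypothetical.

    def screen_detections(detections, identification_area, preset_point):
        """Pick the target object among first-object detections using the
        screening conditions: inside the identification area, then largest
        image area, then nearest to the preset object as tie-breaks."""
        candidates = [d for d in detections
                      if identification_area.contains(d.position)]
        if not candidates:
            return None  # no first object inside the identification area
        candidates.sort(key=lambda d: (-d.box_area,
                                       d.distance_to(preset_point)))
        return candidates[0]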
By setting the screening conditions to be adaptive to the application scene, the target object for identity recognition can be quickly determined in the plurality of first objects, so that the identity recognition efficiency is improved, and the accuracy of identity recognition is improved.
In some examples, when face and human body detection is performed on the second image to obtain the detection result of the at least one first object, face detection and human body detection may be performed separately on the second image to obtain at least one face and at least one human body. The at least one face and the at least one human body of the second image are then associated to obtain the detection result of the at least one first object. For example, faces and bodies may be associated according to the distance between them and the body posture they form, so that the distance between an associated face and body is smaller than a distance threshold and the resulting figure conforms to a normal body posture. Associating the faces and bodies in the second image realizes joint face and body detection, providing a basis for obtaining an accurate second identity feature of the target object.
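A greedy sketch of this face-body association, assuming each face exposes a centre point and each body exposes a head-centre point and a containment test; the pixel threshold is illustrative.

    import math

    def associate_faces_with_bodies(faces, bodies, max_dist=50.0):
        """Pair each face with the nearest unassigned body whose box
        contains the face centre (a plausible head-above-torso posture)."""
        pairs, used = [], set()
        for face in faces:
            best, best_d = None, max_dist  # max_dist: hypothetical threshold
            for i, body in enumerate(bodies):
                if i in used:
                    continue  # each body is associated with at most one face
                d = math.hypot(face.cx - body.head_cx, face.cy - body.head_cy)
                if d < best_d and body.contains(face.cx, face.cy):
                    best, best_d = i, d
            if best is not None:
                used.add(best)
                pairs.append((face, bodies[best]))
        return pairs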
The following describes a method for detecting a fare evasion by taking the identification of a passenger entering a subway station in a subway scene as an example. Fig. 3 is a flowchart illustrating an example of a ride fare evasion detection method according to an embodiment of the present disclosure, including the following steps:
s201, acquiring a first image, wherein the first image is acquired at a first position at a shooting angle facing the target object, and the first position is located at a channel of an identification area of identity recognition.
Here, in a scene in which the target object enters the subway station, the first image may be a video frame extracted from a first video captured by the passage capture device at the first position. The electronic equipment can directly read the video frame of the first video, so that the first image acquisition time is saved, and the time delay of the first image reading is reduced.
S202, detecting the face and the human body of the first image to obtain a first identity characteristic.
Here, the first image may be input to a neural network, which sequentially performs face and body detection, face-body matching, and target tracking on the first image to obtain the detection results of a plurality of first objects, where the detection results may include information such as the quality, position, and size of the face and body and the object ID. Further, the detection results may be screened to select the detection result of the first object within the identification area, and that first object is determined to be the target object. The face features and human body features of the target object can then be extracted as the first identity feature of the target object.
And S203, judging whether the first image has an identifiable face or not according to the first identity characteristic.
And S204, under the condition that the recognizable face exists in the first image, carrying out identity recognition on the target object based on the face of the target object in the first image.
And S205, under the condition that the recognizable face does not exist in the first image, acquiring a second image, wherein the second image is acquired at a second position at a shooting angle facing the target object, and the second position is positioned at an elevated position opposite to the recognition area.
Here, the second image may be a video frame extracted from a second video captured by a camera at the elevated second position (e.g., an overhead camera). The electronic device may obtain the video stream of the second video output by the camera, such as a Real Time Streaming Protocol (RTSP) video stream, and extract the second image from it.
And S206, detecting the human face and the human body of the second image to obtain a second identity characteristic.
Here, the second image may be input to a neural network, and the neural network may be used to detect a human face and a human body in the second image, and match the human face and the human body, so as to obtain detection results of a plurality of first objects in the second image. Further, the plurality of first objects may be screened to screen out the target object located in the recognition area, and a second identity of the target object in the second image may be further determined. In some implementations, the second image may be reacquired in a video stream of the second video in the absence of a target object located in the identified region in the second image.
In some implementations, steps S205 to S206 may be performed independently, that is, the face and body detection on the first image and the face and body detection on the second image may run independently, and the multiple second images of the second video can continuously provide second identity features detected in the identification area, supplying associable face and body information for verifying passengers whose faces are occluded. The first identity feature and the second identity feature can be stored in a data cache, reducing the latency when the first identity feature and the second identity feature are associated.
S207, associating the first identity characteristic with the second identity characteristic, identifying the target object based on the associated second identity characteristic, and confirming that the target object does not run away when the identity of the target object is successfully identified.
Here, identity recognition is performed based on the face features in the second identity feature; if the identity of the target object cannot be recognized from the face features of the second identity feature, the target object may be regarded as a fare-evading passenger.
S208, receiving an infrared signal, searching for an identity recognition record within a preset time period around the receiving time of the infrared signal, and, when no identity recognition record exists within the preset time period, confirming that the passenger has evaded the fare and acquiring and storing the video captured during the preset time period.
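The time-window lookup in S208 can be sketched as follows; the 30-second window and the record layout are illustrative assumptions, since the disclosure only speaks of a "preset time period":

```python
# Hypothetical check: is there an identity recognition record close enough
# to the infrared signal's receive time? The window length is an assumption.
from datetime import timedelta

def is_fare_evasion(receive_time, records, window=timedelta(seconds=30)):
    """records: iterable of (recognition_time, user_id) pairs, where
    recognition_time is a datetime stored when identification succeeded."""
    for rec_time, _user in records:
        if abs(rec_time - receive_time) <= window:
            return False   # a record exists in the window: no fare evasion
    return True            # no record found: flag fare evasion, keep the video
```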
It should be noted that after the face of the target object is captured in the first image or the second image, the face thumbnail of the target object may be compared with the passenger data in a database to determine the identity information of the target object. An identity recognition record (such as a riding record) is then generated according to the identity information of the target object, and the fare is deducted.
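One common way to realize such a comparison (the disclosure does not fix the metric) is cosine similarity between face embeddings, sketched below with an assumed threshold:

```python
# Hedged sketch of matching a face feature against registered passengers by
# cosine similarity; the 0.7 threshold and dict-shaped database are assumptions.
import numpy as np

def cosine_similarity(a, b) -> float:
    a, b = np.asarray(a, dtype=float), np.asarray(b, dtype=float)
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def identify(face_feature, passenger_db: dict, threshold: float = 0.7):
    """Return the best-matching passenger id, or None below the threshold."""
    best_id, best_sim = None, threshold
    for pid, registered in passenger_db.items():
        sim = cosine_similarity(face_feature, registered)
        if sim > best_sim:
            best_id, best_sim = pid, sim
    return best_id
```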
According to the ride fare evasion detection scheme provided by the embodiments of the present disclosure, the identity of the target object can be recognized from images captured at different shooting angles. Even if the image captured at one shooting angle does not include the face of the target object, or the face is occluded, the identity can still be recognized from the image captured at another shooting angle, which reduces the probability of fare evasion in a subway entrance scenario. In addition, the present disclosure uses an infrared signal to trigger detection of passengers arriving at the station, which achieves a higher detection success rate for fare evasion behavior and serves as a more accurate basis for judging that behavior. The embodiments of the present disclosure can be applied to a contactless passage scenario in which no ticket-checking gates are installed; a passenger's identity can be confirmed from pictures taken of the passenger at different angles, which improves the accuracy of passenger identity recognition in such a scenario.
It can be understood that the method embodiments of the present disclosure described above can be combined with one another to form combined embodiments without departing from their principles and logic; due to space limitations, these combinations are not described in detail here. Those skilled in the art will appreciate that, in the above methods of the specific embodiments, the specific order of execution of the steps should be determined by their functions and possible inherent logic.
In addition, the present disclosure also provides a ride fare evasion detection device, an electronic device, a computer-readable storage medium, and a program, all of which can be used to implement any of the ride fare evasion detection methods provided by the present disclosure. For the corresponding technical solutions and descriptions, refer to the method section; details are omitted for brevity.
Fig. 4 is a block diagram illustrating a ride fare evasion detection apparatus according to an embodiment of the present disclosure. As shown in Fig. 4, the apparatus includes:
a first obtaining module 31, configured to obtain a first identity feature obtained by recognizing a first image of a target object;
a second obtaining module 32, configured to obtain a second image when the first identity feature meets a preset occlusion condition, where the first image and the second image are captured at different shooting angles for the target object;
a recognition module 33, configured to recognize the second image to obtain a second identity feature of the second image;
an association module 34, configured to associate the first identity feature with the second identity feature;
a determining module 35, configured to perform identity recognition on the target object based on the associated second identity feature, and to confirm that the target object has not evaded the fare when the identity of the target object is successfully recognized.
In some possible implementations, the determining module 35 is further configured to perform identity recognition on the target object based on the associated second identity feature, and to confirm fare evasion by the target object when the identity of the target object cannot be recognized.
In some possible implementations, the determining module 35 is further configured to confirm fare evasion by the target object when the first identity feature cannot be recognized from the first image.
In some possible implementations, the apparatus further includes an infrared triggering module, configured to receive an infrared signal, where the infrared signal is triggered when the target object leaves the recognition area for identity recognition, and to acquire and store pictures and/or videos of the target object passing through the recognition area according to the receiving time of the infrared signal.
In some possible implementations, the infrared triggering module is further configured to search for an identity recognition record within a preset time period around the receiving time, and to confirm fare evasion by the target object when no identity recognition record of the target object exists within the preset time period.
In some possible implementations, the occlusion condition includes at least one of: the first identity feature does not contain a face feature; the quality score of the face corresponding to the first identity feature is lower than a preset quality threshold; the occluded area of the face corresponding to the first identity feature is larger than a preset area threshold.
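These three tests translate directly into a boolean check; the threshold values and field names below are illustrative assumptions:

```python
# Sketch of the occlusion condition: any one of the three tests suffices.
# Threshold values and dictionary keys are assumptions for illustration.
def meets_occlusion_condition(first_feature: dict,
                              quality_threshold: float = 0.6,
                              area_threshold: float = 0.4) -> bool:
    if first_feature.get("face_feature") is None:
        return True   # no face feature was extracted at all
    if first_feature.get("face_quality", 0.0) < quality_threshold:
        return True   # face quality score below the preset threshold
    if first_feature.get("occluded_ratio", 0.0) > area_threshold:
        return True   # occluded face area above the preset threshold
    return False
```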
In some possible implementations, the recognition module 33 is configured to perform face and human body detection on the second image to obtain a detection result of at least one first object, and to screen the detection result of the at least one first object according to a preset screening condition to obtain the second identity feature of the target object.
In some possible implementations, the screening condition includes at least one of: the position of the first object is within a preset recognition area; the first object has the largest image area in the second image; the first object is closest to a preset object.
In some possible implementations, the recognition module 33 is configured to perform face detection and human body detection on the second image to obtain at least one face and at least one human body, and to associate the at least one face with the at least one human body in the second image to obtain the detection result of the at least one first object.
In some possible implementations, the association module 34 is configured to match the human body feature included in the first identity feature against the human body feature included in the second identity feature, and to associate the first identity feature with the second identity feature when the human body feature of the first identity feature matches the human body feature of the second identity feature.
In some possible implementations, the association module 34 is further configured to, when the human body feature of the first identity feature does not match the human body feature of the second identity feature, re-acquire the second image based on the acquisition time of the first image and associate the first identity feature with the second identity feature of the re-acquired second image, until a preset number of association attempts is reached or the human body feature of the first identity feature matches the human body feature of the second identity feature.
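The retry loop described above might look like the following; the attempt count, the dict-shaped features, and the caller-supplied matcher are assumptions of the sketch:

```python
# Hedged sketch of association with retry: keep re-acquiring second identity
# features until the body features match or the preset attempt count runs out.
def associate_with_retry(first_feature: dict, acquire_second_feature,
                         bodies_match, max_attempts: int = 5):
    """acquire_second_feature() -> dict or None; bodies_match(a, b) -> bool."""
    for _ in range(max_attempts):
        second = acquire_second_feature()       # re-acquire from the second video
        if second is not None and bodies_match(first_feature["body_feature"],
                                               second["body_feature"]):
            return second                       # association succeeded
    return None                                 # give up after preset attempts
```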
In some possible implementations, the determining module 35 is configured to identify the target object based on a face feature included in the associated second identity feature.
In some possible implementations, the apparatus further includes a generating module, configured to generate and store an identity recognition record of the target object when the identity of the target object is successfully recognized, where the identity recognition record includes one or more of a recognition time, user information, and a recognition place.
In some possible implementations, the first image is acquired at a first location at a shooting angle toward the target object, and the second image is acquired at a second location at a shooting angle toward the target object, wherein the second location is above the first location.
In some embodiments, the functions or modules of the apparatus provided in the embodiments of the present disclosure may be used to execute the methods described in the method embodiments above. For specific implementations, refer to the descriptions of those method embodiments; for brevity, they are not repeated here.
Embodiments of the present disclosure also provide a computer-readable storage medium having stored thereon computer program instructions, which when executed by a processor, implement the above-mentioned method. The computer readable storage medium may be a non-volatile computer readable storage medium.
An embodiment of the present disclosure further provides an electronic device, including: a processor; a memory for storing processor-executable instructions; wherein the processor is configured to invoke the memory-stored instructions to perform the above-described method.
The disclosed embodiments also provide a computer program product including computer readable code; when the computer readable code runs on a device, the device executes instructions for implementing the ride fare evasion detection method provided in any of the above embodiments.
The disclosed embodiments also provide another computer program product for storing computer readable instructions, which when executed, cause a computer to perform the operations of the ride fare evasion detection method provided in any of the above embodiments.
The electronic device may be provided as a terminal, server, or other form of device.
Fig. 5 illustrates a block diagram of an electronic device 800 in accordance with an embodiment of the disclosure. For example, the electronic device 800 may be a mobile phone, a computer, a digital broadcast terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, or another such terminal.
Referring to fig. 5, electronic device 800 may include one or more of the following components: processing component 802, memory 804, power component 806, multimedia component 808, audio component 810, input/output (I/O) interface 812, sensor component 814, and communication component 816.
The processing component 802 generally controls overall operation of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations. The processing components 802 may include one or more processors 820 to execute instructions to perform all or a portion of the steps of the methods described above. Further, the processing component 802 can include one or more modules that facilitate interaction between the processing component 802 and other components. For example, the processing component 802 can include a multimedia module to facilitate interaction between the multimedia component 808 and the processing component 802.
The memory 804 is configured to store various types of data to support operations at the electronic device 800. Examples of such data include instructions for any application or method operating on the electronic device 800, contact data, phonebook data, messages, pictures, videos, and so forth. The memory 804 may be implemented by any type or combination of volatile or non-volatile memory devices such as Static Random Access Memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic or optical disks.
The power supply component 806 provides power to the various components of the electronic device 800. The power components 806 may include a power management system, one or more power supplies, and other components associated with generating, managing, and distributing power for the electronic device 800.
The multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and a user. In some embodiments, the screen may include a Liquid Crystal Display (LCD) and a Touch Panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive an input signal from a user. The touch panel includes one or more touch sensors to sense touch, slide, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure associated with the touch or slide operation. In some embodiments, the multimedia component 808 includes a front facing camera and/or a rear facing camera. The front camera and/or the rear camera may receive external multimedia data when the electronic device 800 is in an operation mode, such as a shooting mode or a video mode. Each front camera and rear camera may be a fixed optical lens system or have a focal length and optical zoom capability.
The audio component 810 is configured to output and/or input audio signals. For example, the audio component 810 includes a Microphone (MIC) configured to receive external audio signals when the electronic device 800 is in an operational mode, such as a call mode, a recording mode, and a voice recognition mode. The received audio signals may further be stored in the memory 804 or transmitted via the communication component 816. In some embodiments, audio component 810 also includes a speaker for outputting audio signals.
The I/O interface 812 provides an interface between the processing component 802 and peripheral interface modules, which may be keyboards, click wheels, buttons, etc. These buttons may include, but are not limited to: a home button, a volume button, a start button, and a lock button.
The sensor assembly 814 includes one or more sensors for providing various aspects of state assessment for the electronic device 800. For example, the sensor assembly 814 may detect an open/closed state of the electronic device 800 and the relative positioning of components, such as the display and keypad of the electronic device 800; it may also detect a change in the position of the electronic device 800 or one of its components, the presence or absence of user contact with the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and a change in its temperature. The sensor assembly 814 may include a proximity sensor configured to detect the presence of a nearby object without any physical contact. The sensor assembly 814 may also include a light sensor, such as a Complementary Metal Oxide Semiconductor (CMOS) or Charge Coupled Device (CCD) image sensor, for use in imaging applications. In some embodiments, the sensor assembly 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor, or a temperature sensor.
The communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices. The electronic device 800 may access a wireless network based on a communication standard, such as a wireless network (WiFi), a second generation mobile communication technology (2G) or a third generation mobile communication technology (3G), or a combination thereof. In an exemplary embodiment, the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel. In an exemplary embodiment, the communication component 816 further includes a Near Field Communication (NFC) module to facilitate short-range communications. For example, the NFC module may be implemented based on Radio Frequency Identification (RFID) technology, infrared data association (IrDA) technology, Ultra Wideband (UWB) technology, Bluetooth (BT) technology, and other technologies.
In an exemplary embodiment, the electronic device 800 may be implemented by one or more Application Specific Integrated Circuits (ASICs), Digital Signal Processors (DSPs), Digital Signal Processing Devices (DSPDs), Programmable Logic Devices (PLDs), Field Programmable Gate Arrays (FPGAs), controllers, micro-controllers, microprocessors or other electronic components for performing the above-described methods.
In an exemplary embodiment, a non-transitory computer-readable storage medium, such as the memory 804, is also provided that includes computer program instructions executable by the processor 820 of the electronic device 800 to perform the above-described methods.
Fig. 6 illustrates a block diagram of an electronic device 1900 in accordance with an embodiment of the disclosure. For example, the electronic device 1900 may be provided as a server. Referring to fig. 6, electronic device 1900 includes a processing component 1922 further including one or more processors and memory resources, represented by memory 1932, for storing instructions, e.g., applications, executable by processing component 1922. The application programs stored in memory 1932 may include one or more modules that each correspond to a set of instructions. Further, the processing component 1922 is configured to execute instructions to perform the above-described method.
The electronic device 1900 may also include a power component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958. The electronic device 1900 may operate based on an operating system stored in the memory 1932, such as the Microsoft server operating system (Windows Server™), Apple's graphical-user-interface-based operating system (Mac OS X™), a multi-user, multi-process computer operating system (Unix™), a free and open-source Unix-like operating system (Linux™), an open-source Unix-like operating system (FreeBSD™), or the like.
In an exemplary embodiment, a non-transitory computer readable storage medium, such as the memory 1932, is also provided that includes computer program instructions executable by the processing component 1922 of the electronic device 1900 to perform the above-described methods.
The present disclosure may be systems, methods, and/or computer program products. The computer program product may include a computer-readable storage medium having computer-readable program instructions embodied thereon for causing a processor to implement various aspects of the present disclosure.
The computer readable storage medium may be a tangible device that can hold and store the instructions for use by the instruction execution device. The computer readable storage medium may be, for example, but not limited to, an electronic memory device, a magnetic memory device, an optical memory device, an electromagnetic memory device, a semiconductor memory device, or any suitable combination of the foregoing. More specific examples (a non-exhaustive list) of the computer readable storage medium would include the following: a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), a Static Random Access Memory (SRAM), a portable compact disc read-only memory (CD-ROM), a Digital Versatile Disc (DVD), a memory stick, a floppy disk, a mechanical coding device, such as punch cards or in-groove projection structures having instructions stored thereon, and any suitable combination of the foregoing. Computer-readable storage media as used herein is not to be construed as transitory signals per se, such as radio waves or other freely propagating electromagnetic waves, electromagnetic waves propagating through a waveguide or other transmission medium (e.g., optical pulses through a fiber optic cable), or electrical signals transmitted through electrical wires.
The computer-readable program instructions described herein may be downloaded from a computer-readable storage medium to a respective computing/processing device, or to an external computer or external storage device via a network, such as the internet, a local area network, a wide area network, and/or a wireless network. The network may include copper transmission cables, fiber optic transmission, wireless transmission, routers, firewalls, switches, gateway computers and/or edge servers. The network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in a computer-readable storage medium in the respective computing/processing device.
The computer program instructions for carrying out operations of the present disclosure may be assembler instructions, Instruction Set Architecture (ISA) instructions, machine-related instructions, microcode, firmware instructions, state-setting data, or source or object code written in any combination of one or more programming languages, including an object-oriented programming language such as Smalltalk or C++, and conventional procedural programming languages such as the "C" programming language or similar programming languages. The computer-readable program instructions may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on the remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider). In some embodiments, electronic circuitry, such as a programmable logic circuit, a Field Programmable Gate Array (FPGA), or a Programmable Logic Array (PLA), can be personalized with state information of the computer-readable program instructions, and this electronic circuitry may execute the computer-readable program instructions to implement aspects of the present disclosure.
Various aspects of the present disclosure are described herein with reference to flowchart illustrations and/or block diagrams of methods, apparatus (systems) and computer program products according to embodiments of the disclosure. It will be understood that each block of the flowchart illustrations and/or block diagrams, and combinations of blocks in the flowchart illustrations and/or block diagrams, can be implemented by computer-readable program instructions.
These computer-readable program instructions may be provided to a processor of a general purpose computer, special purpose computer, or other programmable data processing apparatus to produce a machine, such that the instructions, which execute via the processor of the computer or other programmable data processing apparatus, create means for implementing the functions/acts specified in the flowchart and/or block diagram block or blocks. These computer-readable program instructions may also be stored in a computer-readable storage medium that can direct a computer, programmable data processing apparatus, and/or other devices to function in a particular manner, such that the computer-readable medium storing the instructions comprises an article of manufacture including instructions which implement the function/act specified in the flowchart and/or block diagram block or blocks.
The computer readable program instructions may also be loaded onto a computer, other programmable data processing apparatus, or other devices to cause a series of operational steps to be performed on the computer, other programmable apparatus or other devices to produce a computer implemented process such that the instructions which execute on the computer, other programmable apparatus or other devices implement the functions/acts specified in the flowchart and/or block diagram block or blocks.
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of instructions, which comprises one or more executable instructions for implementing the specified logical function(s). In some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The computer program product may be embodied in hardware, software or a combination thereof. In an alternative embodiment, the computer program product is embodied in a computer storage medium, and in another alternative embodiment, the computer program product is embodied in a Software product, such as a Software Development Kit (SDK), or the like.
Having described embodiments of the present disclosure, the foregoing description is intended to be exemplary, not exhaustive, and not limited to the disclosed embodiments. Many modifications and variations will be apparent to those of ordinary skill in the art without departing from the scope and spirit of the described embodiments. The terminology used herein is chosen in order to best explain the principles of the embodiments, the practical application, or improvements made to the technology in the marketplace, or to enable others of ordinary skill in the art to understand the embodiments disclosed herein.

Claims (17)

1. A ride ticket evasion detection method is characterized by comprising the following steps:
acquiring a first identity feature obtained by recognizing a first image of a target object;
acquiring a second image under the condition that the first identity feature meets a preset occlusion condition, wherein the first image and the second image are acquired at different shooting angles for the target object;
recognizing the second image to obtain a second identity feature of the second image;
associating the first identity feature and the second identity feature;
and performing identity recognition on the target object based on the associated second identity feature, and confirming that the target object has not evaded the fare when the identity of the target object is successfully recognized.
2. The method of claim 1, further comprising: performing identity recognition on the target object based on the associated second identity feature, and confirming fare evasion by the target object when the identity of the target object cannot be recognized.
3. The method of claim 1, further comprising: confirming fare evasion by the target object when the first identity feature cannot be recognized from the first image.
4. The method of any of claims 1 to 3, further comprising:
receiving an infrared signal, wherein the infrared signal is triggered when the target object leaves a recognition area for identity recognition;
and acquiring and storing pictures and/or videos of the target object passing through the recognition area according to the receiving time of the infrared signal.
5. The method of claim 4, further comprising:
searching for an identity recognition record within a preset time period around the receiving time;
and confirming fare evasion by the target object under the condition that no identity recognition record of the target object exists within the preset time period.
6. The method according to any of claims 1 to 5, wherein the occlusion condition comprises at least one of:
the first identity feature does not contain a face feature; the quality score of the face corresponding to the first identity feature is lower than a preset quality threshold; and the occluded area of the face corresponding to the first identity feature is larger than a preset area threshold.
7. The method according to any one of claims 1 to 6, wherein the recognizing the second image to obtain the second identity characteristic of the second image comprises:
detecting the human face and the human body of the second image to obtain a detection result of at least one first object;
and screening the detection result of the at least one first object according to a preset screening condition to obtain a second identity characteristic of the target object.
8. The method of claim 7, wherein the screening conditions comprise at least one of:
the position of the first object is in a preset identification area;
the first object has the largest image area in the second image;
the first object is closest to a preset object.
9. The method according to claim 7, wherein the performing face and body detection on the second image to obtain a detection result of at least one first object comprises:
carrying out face detection and human body detection on the second image to obtain at least one face and at least one human body;
and associating at least one face and at least one human body of the second image to obtain a detection result of the at least one first object.
10. The method of any one of claims 1 to 9, wherein said associating said first identity characteristic and said second identity characteristic comprises:
matching the human body features included in the first identity feature with the human body features included in the second identity feature;
associating the first identity feature with the second identity feature if the human body features of the first identity feature match the human body features of the second identity feature.
11. The method of claim 10, further comprising:
and under the condition that the human body features of the first identity feature do not match the human body features of the second identity feature, re-acquiring the second image based on the acquisition time of the first image and associating the first identity feature with the second identity feature of the re-acquired second image, until a preset number of association attempts is reached or the human body features of the first identity feature match the human body features of the second identity feature.
12. The method according to any one of claims 1 to 11, wherein the identifying the target object based on the associated second identity feature comprises:
and performing identity recognition on the target object based on the face features included in the associated second identity feature.
13. The method of any one of claims 1 to 12, further comprising:
and when the identity of the target object is successfully recognized, generating and storing an identity recognition record of the target object, wherein the identity recognition record comprises one or more of a recognition time, user information, and a recognition place.
14. The method of any one of claims 1 to 13, wherein the first image is acquired at a first position at a photographing angle towards the target object and the second image is acquired at a second position at a photographing angle towards the target object, wherein the second position is located above the first position.
15. A ride fare evasion detection device, comprising:
the first acquisition module is configured to acquire a first identity feature obtained by recognizing a first image of a target object;
the second acquisition module is configured to acquire a second image under the condition that the first identity feature meets a preset occlusion condition, wherein the first image and the second image are captured at different shooting angles for the target object;
the recognition module is configured to recognize the second image to obtain a second identity feature of the second image;
the association module is configured to associate the first identity feature with the second identity feature;
and the determining module is configured to perform identity recognition on the target object based on the associated second identity feature, and to confirm that the target object has not evaded the fare when the identity of the target object is successfully recognized.
16. An electronic device, comprising:
a processor;
a memory for storing processor-executable instructions;
wherein the processor is configured to invoke the memory-stored instructions to perform the method of any one of claims 1 to 14.
17. A computer readable storage medium having computer program instructions stored thereon, which when executed by a processor implement the method of any one of claims 1 to 14.
CN202011529962.8A 2020-12-22 2020-12-22 Ride fare evasion detection method and device, electronic equipment and storage medium Pending CN112597886A (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN202011529962.8A CN112597886A (en) 2020-12-22 2020-12-22 Ride fare evasion detection method and device, electronic equipment and storage medium
PCT/CN2021/086701 WO2022134388A1 (en) 2020-12-22 2021-04-12 Method and device for rider fare evasion detection, electronic device, storage medium, and computer program product

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011529962.8A CN112597886A (en) 2020-12-22 2020-12-22 Ride fare evasion detection method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN112597886A 2021-04-02

Family

ID=75200747

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011529962.8A Pending CN112597886A (en) 2020-12-22 2020-12-22 Ride fare evasion detection method and device, electronic equipment and storage medium

Country Status (2)

Country Link
CN (1) CN112597886A (en)
WO (1) WO2022134388A1 (en)


Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN117455442B (en) * 2023-12-25 2024-03-19 数据空间研究院 Statistical enhancement-based identity recognition method, system and storage medium


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112597886A (en) * 2020-12-22 2021-04-02 成都商汤科技有限公司 Ride fare evasion detection method and device, electronic equipment and storage medium

Patent Citations (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106503687A (en) * 2016-11-09 2017-03-15 合肥工业大学 The monitor video system for identifying figures of fusion face multi-angle feature and its method
CN107016348A (en) * 2017-03-09 2017-08-04 广东欧珀移动通信有限公司 With reference to the method for detecting human face of depth information, detection means and electronic installation
WO2019033572A1 (en) * 2017-08-17 2019-02-21 平安科技(深圳)有限公司 Method for detecting whether face is blocked, device and storage medium
CN107633558A (en) * 2017-09-12 2018-01-26 浙江网新电气技术有限公司 A kind of self-service ticket checking method and equipment based on portrait Yu identity card matching identification
CN107886667A (en) * 2017-10-11 2018-04-06 深圳云天励飞技术有限公司 Alarm method and device
CN107992797A (en) * 2017-11-02 2018-05-04 中控智慧科技股份有限公司 Face identification method and relevant apparatus
CN107945321A (en) * 2017-11-08 2018-04-20 平安科技(深圳)有限公司 Safety inspection method, application server and computer-readable recording medium based on recognition of face
CN108399665A (en) * 2018-01-03 2018-08-14 平安科技(深圳)有限公司 Method for safety monitoring, device based on recognition of face and storage medium
WO2019134246A1 (en) * 2018-01-03 2019-07-11 平安科技(深圳)有限公司 Facial recognition-based security monitoring method, device, and storage medium
WO2019200902A1 (en) * 2018-04-19 2019-10-24 广州视源电子科技股份有限公司 Image recognition method and device
CN108805071A (en) * 2018-06-06 2018-11-13 北京京东金融科技控股有限公司 Identity verification method and device, electronic equipment, storage medium
CN109117803A (en) * 2018-08-21 2019-01-01 腾讯科技(深圳)有限公司 Clustering method, device, server and the storage medium of facial image
CN111161205A (en) * 2018-10-19 2020-05-15 阿里巴巴集团控股有限公司 Image processing and face image recognition method, device and equipment
CN109766755A (en) * 2018-12-06 2019-05-17 深圳市天彦通信股份有限公司 Face identification method and Related product
CN109726656A (en) * 2018-12-18 2019-05-07 广东中安金狮科创有限公司 Monitoring device and its trailing monitoring method, device, readable storage medium storing program for executing
CN109658572A (en) * 2018-12-21 2019-04-19 上海商汤智能科技有限公司 Image processing method and device, electronic equipment and storage medium
WO2020134858A1 (en) * 2018-12-29 2020-07-02 北京市商汤科技开发有限公司 Facial attribute recognition method and apparatus, electronic device, and storage medium
CN111460413A (en) * 2019-01-18 2020-07-28 阿里巴巴集团控股有限公司 Identity recognition system, method and device, electronic equipment and storage medium
CN110348301A (en) * 2019-06-04 2019-10-18 平安科技(深圳)有限公司 Implementation method of checking tickets, device, computer equipment and storage medium based on video
CN110263830A (en) * 2019-06-06 2019-09-20 北京旷视科技有限公司 Image processing method, device and system and storage medium
CN110458062A (en) * 2019-07-30 2019-11-15 深圳市商汤科技有限公司 Face identification method and device, electronic equipment and storage medium
CN110781821A (en) * 2019-10-25 2020-02-11 上海商汤智能科技有限公司 Target detection method and device based on unmanned aerial vehicle, electronic equipment and storage medium
CN111768542A (en) * 2020-06-28 2020-10-13 浙江大华技术股份有限公司 Gate control system, method and device, server and storage medium
CN111768543A (en) * 2020-06-29 2020-10-13 杭州翔毅科技有限公司 Traffic management method, device, storage medium and device based on face recognition
CN111815675A (en) * 2020-06-30 2020-10-23 北京市商汤科技开发有限公司 Target object tracking method and device, electronic equipment and storage medium
CN111967311A (en) * 2020-07-06 2020-11-20 广东技术师范大学 Emotion recognition method and device, computer equipment and storage medium

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2022134388A1 (en) * 2020-12-22 2022-06-30 成都商汤科技有限公司 Method and device for rider fare evasion detection, electronic device, storage medium, and computer program product
CN113269124A (en) * 2021-06-09 2021-08-17 重庆中科云从科技有限公司 Object identification method, system, equipment and computer readable medium
CN113269124B (en) * 2021-06-09 2023-05-09 重庆中科云从科技有限公司 Object recognition method, system, equipment and computer readable medium
CN114613072A (en) * 2022-04-18 2022-06-10 宁波小遛共享信息科技有限公司 Vehicle returning control method and device for shared vehicle and electronic equipment

Also Published As

Publication number Publication date
WO2022134388A1 (en) 2022-06-30

Similar Documents

Publication Publication Date Title
US11321575B2 (en) Method, apparatus and system for liveness detection, electronic device, and storage medium
US11410001B2 (en) Method and apparatus for object authentication using images, electronic device, and storage medium
CN108197586B (en) Face recognition method and device
CN112597886A (en) Ride fare evasion detection method and device, electronic equipment and storage medium
CN110287671B (en) Verification method and device, electronic equipment and storage medium
US20210166040A1 (en) Method and system for detecting companions, electronic device and storage medium
CN109934275B (en) Image processing method and device, electronic equipment and storage medium
CN110942036B (en) Person identification method and device, electronic equipment and storage medium
CN110675539B (en) Identity verification method and device, electronic equipment and storage medium
CN110532957B (en) Face recognition method and device, electronic equipment and storage medium
CN110990801B (en) Information verification method and device, electronic equipment and storage medium
JP2021517747A (en) Image processing methods and devices, electronic devices and storage media
CN112669583A (en) Alarm threshold value adjusting method and device, electronic equipment and storage medium
TWI766458B (en) Information identification method and apparatus, electronic device, and storage medium
CN112270288A (en) Living body identification method, access control device control method, living body identification device, access control device and electronic device
CN112837454A (en) Passage detection method and device, electronic equipment and storage medium
CN111435422B (en) Action recognition method, control method and device, electronic equipment and storage medium
CN109101542B (en) Image recognition result output method and device, electronic device and storage medium
CN112926510A (en) Abnormal driving behavior recognition method and device, electronic equipment and storage medium
CN109344703B (en) Object detection method and device, electronic equipment and storage medium
CN107977636B (en) Face detection method and device, terminal and storage medium
CN110781842A (en) Image processing method and device, electronic equipment and storage medium
CN110929545A (en) Human face image sorting method and device
CN107133551B (en) Fingerprint verification method and device
CN110826045B (en) Authentication method and device, electronic equipment and storage medium

Legal Events

PB01: Publication
SE01: Entry into force of request for substantive examination
REG: Reference to a national code (ref country code: HK; ref legal event code: DE; ref document number: 40041141; country of ref document: HK)