CN114550088B - Multi-camera fused passenger identification method and system and electronic equipment - Google Patents
- Publication number
- CN114550088B (application number CN202210160362.1A)
- Authority
- CN
- China
- Prior art keywords
- passenger
- identification
- weight
- pedestrian
- state
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/25—Fusion techniques
Landscapes
- Engineering & Computer Science (AREA)
- Data Mining & Analysis (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Biology (AREA)
- Evolutionary Computation (AREA)
- Physics & Mathematics (AREA)
- General Engineering & Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Life Sciences & Earth Sciences (AREA)
- Image Processing (AREA)
- Image Analysis (AREA)
Abstract
The invention provides a multi-camera fused passenger identification method and system and an electronic device. The passenger identification method comprises the following steps: identifying passenger behaviors in passenger human-body images to obtain state information of each passenger; inputting the passenger human-body images into a face recognition model to obtain a recognition score for each passenger; identifying the corresponding passenger according to the state information and the recognition score to obtain an initial identification result; determining an identification weight and a history weight according to pedestrian images and pedestrian history information; and obtaining a fused passenger identification result according to the identification weight, the history weight and the initial identification result. By fusing the subway-region images acquired by the multiple cameras using each passenger's state information, recognition score, identification weight and history weight, the invention fuses passenger information captured from multiple angles and greatly improves passenger identification accuracy.
Description
Technical Field
The invention belongs to the technical field of urban rail transit, and particularly relates to a passenger identification method and system with multi-camera fusion and an electronic device.
Background
The subway is one of the most commonly used modes of modern transport, and its use has grown steadily in recent years. Compared with buses, the subway avoids road congestion and greatly shortens travel time. However, because so many people ride the subway, queuing at subway security checks has become a problem: in the existing approach, subway staff usually check pedestrians one by one to determine whether each passenger may board, but this affects passengers' travel time.
Disclosure of Invention
The invention aims to provide a multi-camera fused passenger identification method, system and electronic device, in order to solve the problem of the low efficiency of manual passenger screening.
In order to achieve the purpose, the invention adopts the technical scheme that:
a multi-camera fused passenger identification method comprises the following steps:
step 1: acquiring a passenger human body image acquired in a core identification area;
and 2, step: identifying passenger behaviors in the passenger human body image to obtain state information of each passenger;
and step 3: inputting the human body images of the passengers into a face recognition model to obtain the recognition score of each passenger;
and 4, step 4: identifying the corresponding passenger according to the state information and the identification score to obtain an initial identification result;
and 5: acquiring a pedestrian image and pedestrian history information acquired in a non-core identification area, and determining an identification weight and a history weight according to the pedestrian image and the pedestrian history information;
and 6: and obtaining a fused passenger identification result according to the identification weight, the historical weight and the initial identification result.
Preferably, said step 4 of identifying the corresponding passenger according to the state information and the recognition score to obtain an initial identification result comprises:
Step 4.1: judging whether the passenger is looking down or wearing a hat according to the state information;
Step 4.2: if the passenger's state is looking down or wearing a hat, setting the state weight to a value in the range (0, 1);
Step 4.3: if the passenger's state is neither looking down nor wearing a hat, setting the state weight to 1;
Step 4.4: performing a weighted summation of the recognition scores using the state weights to obtain an initial identification result for the corresponding passenger.
Preferably, said step 5 of acquiring pedestrian images and pedestrian history information collected in a non-core identification area and determining an identification weight and a history weight according to the pedestrian images and the pedestrian history information comprises:
Step 5.1: judging whether the corresponding passenger appears in the pedestrian images within a preset time range;
Step 5.2: if the corresponding passenger is found in the pedestrian images within the preset time range, setting the identification weight to a value in (1, W_max), where W_max is the maximum value of the identification weight;
Step 5.3: if no corresponding passenger is found in the pedestrian images within the preset time range, setting the identification weight to 1;
Step 5.4: judging whether the corresponding passenger has a long-term arrival record in the pedestrian history information within a preset time period;
Step 5.5: if the corresponding passenger is found in the pedestrian history information to have a long-term arrival record within the preset time period, setting the history weight to a value in (1, H_max), where H_max is the maximum value of the history weight;
Step 5.6: if no long-term arrival record for the corresponding passenger within the preset time period is found in the pedestrian history information, setting the history weight to 1.
Preferably, said step 6 of obtaining a fused passenger identification result according to the identification weight, the history weight and the initial identification result comprises:
adopting the fusion formula to obtain the fused passenger identification result; wherein w_history represents the history weight, w_pre represents the identification weight, w_k represents the state weight, and S_k represents the recognition score.
The invention also provides a multi-camera fused passenger identification system, comprising:
the passenger image acquisition module is used for acquiring a passenger human body image acquired in the core identification area;
the passenger behavior identification module is used for identifying passenger behaviors in the passenger human body image to obtain state information of each passenger;
the identification score determining module is used for inputting the human body images of the passengers into a face identification model to obtain the identification score of each passenger;
the initial identification result determining module is used for identifying the corresponding passenger according to the state information and the identification score to obtain an initial identification result;
the weight determining module is used for acquiring a pedestrian image and pedestrian history information acquired in a non-core identification area and determining an identification weight and a history weight according to the pedestrian image and the pedestrian history information;
and the passenger identification fusion module is used for obtaining a fused passenger identification result according to the identification weight, the historical weight and the initial identification result.
Preferably, the initial recognition result determining module includes:
a state information judging unit for judging whether the passenger is looking down or wearing a hat according to the state information;
a first state weight determination unit for setting the state weight to a value in the range (0, 1) when the passenger's state is looking down or wearing a hat;
a second state weight determination unit for setting the state weight to 1 when the passenger's state is neither looking down nor wearing a hat;
and the weighted summation unit is used for carrying out weighted summation on the identification scores by utilizing the state weights to obtain an initial identification result of the corresponding passenger.
Preferably, the weight determining module includes:
a pedestrian image judging unit for judging whether the corresponding passenger appears in the pedestrian images within a preset time range;
a first identification weight determination unit for setting the identification weight to a value in (1, W_max) when the corresponding passenger is found in the pedestrian images within the preset time range, where W_max is the maximum value of the identification weight;
a second identification weight determination unit for setting the identification weight to 1 when no corresponding passenger is found in the pedestrian images within the preset time range;
a history information judging unit for judging whether the corresponding passenger has a long-term arrival record in the pedestrian history information within a preset time period;
a first history weight determination unit for setting the history weight to a value in (1, H_max) when the corresponding passenger is found in the pedestrian history information to have a long-term arrival record within the preset time period, where H_max is the maximum value of the history weight;
a second history weight determination unit for setting the history weight to 1 when the corresponding passenger has no long-term arrival record within the preset time period in the pedestrian history information.
Preferably, the passenger identification fusion module includes:
a passenger identification fusion unit for adopting the fusion formula to obtain the fused passenger identification result; wherein w_history represents the history weight, w_pre represents the identification weight, w_k represents the state weight, and S_k represents the recognition score.
The invention also provides an electronic device, which comprises a bus, a transceiver, a memory, a processor and a computer program stored on the memory and capable of running on the processor, wherein the transceiver, the memory and the processor are connected through the bus, and the computer program realizes the steps in the multi-camera fused passenger identification method when being executed by the processor.
The invention also provides a computer-readable storage medium on which a computer program is stored, wherein the computer program, when executed by a processor, carries out the steps of the multi-camera fused passenger identification method described above.
The multi-camera fused passenger identification method, system and electronic device provided by the invention have the following beneficial effects. Compared with the prior art, the multi-camera fused passenger identification method comprises: identifying passenger behaviors in passenger human-body images to obtain state information of each passenger; inputting the passenger human-body images into a face recognition model to obtain a recognition score for each passenger; identifying the corresponding passenger according to the state information and the recognition score to obtain an initial identification result; determining an identification weight and a history weight according to pedestrian images and pedestrian history information; and obtaining a fused passenger identification result according to the identification weight, the history weight and the initial identification result. By fusing the subway-region images acquired by the multiple cameras using each passenger's state information, recognition score, identification weight and history weight, the invention fuses passenger information captured from multiple angles and greatly improves passenger identification accuracy.
Drawings
To illustrate the technical solutions in the embodiments of the present invention more clearly, the drawings needed for the embodiments or for the description of the prior art are briefly introduced below. The drawings in the following description are obviously only some embodiments of the present invention; those skilled in the art can obtain other drawings from them without creative effort.
FIG. 1 is a flow chart of a multi-camera fused passenger identification method provided by the present invention;
FIG. 2 is a schematic diagram of a multi-camera fused passenger identification method according to the present invention;
FIG. 3 is a schematic diagram of passenger status recognition provided by the present invention;
fig. 4 is a schematic diagram of multi-camera fusion calculation provided by the present invention.
Detailed Description
To make the technical problems to be solved, the technical solutions and the advantageous effects of the present invention clearer, the invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described here merely illustrate the invention and do not limit it.
The invention aims to provide a multi-camera fused passenger identification method, system and electronic device, in order to solve the problem of the low efficiency of manual passenger screening.
Referring to FIGS. 1-4, a multi-camera fused passenger identification method comprises the following steps:
s1: acquiring passenger human body images acquired in a core identification area;
in the invention, before S1, a subway region needs to be divided into a non-core identification region and a core identification region, and a camera is installed in the non-core identification region and the core identification region. Aiming at the conditions that passengers look at mobile phones while leaning down and wear peaked caps in a subway scene, the cameras possibly cannot accurately capture the faces of the passengers, and in order to further improve the recall rate, the installation position of the monitoring camera needs to be expanded from the top (more than 2 meters above the ground) to the wall (1-2 meters above the ground), so that the effective monitoring range of the camera is enlarged. In addition, in the non-core identification area, the identified personnel are not necessarily in-station (out-station) or may be in a subway passage, so the identification result of the camera in the area is only recorded (prior information) and is not directly used for identification. In this embodiment, the non-core identification area may be a location such as a station entrance. The core identification area can be a subway waiting area and other positions.
S2: identifying passenger behaviors in the passenger human body image to obtain state information of each passenger;
in the invention, the passenger human body image is required to be input into the deep learning network to construct a passenger behavior recognition model. By using the passenger behavior recognition model, it is possible to determine whether or not the passenger wears a hat and whether or not the passenger is in a state of lowering his head in the pitch direction (heading direction).
S3: inputting the human body images of the passengers into a face recognition model to obtain the recognition score of each passenger;
Further, the method extracts facial features to convert the face image into a high-dimensional face feature vector, then matches the resulting vector against preset template feature vectors in a template library by distance, and computes the corresponding recognition score.
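As an illustrative sketch of this matching step (using cosine similarity as the assumed distance measure; the invention does not fix a particular metric, and the function names are hypothetical):

```python
import math

def cosine_similarity(a, b):
    # cosine similarity between two feature vectors
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def recognition_score(face_feature, template_library):
    """Match a face feature vector against every template in the library and
    return the best-matching passenger ID together with its recognition score."""
    best_id, best_score = None, -1.0
    for passenger_id, template in template_library.items():
        score = cosine_similarity(face_feature, template)
        if score > best_score:
            best_id, best_score = passenger_id, score
    return best_id, best_score
```

In practice the feature vectors would come from a trained face-embedding network; short two-dimensional vectors are used here only to keep the sketch self-contained.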
S4: identifying the corresponding passenger according to the state information and the identification score to obtain an initial identification result;
in an embodiment of the present invention, S4 includes:
s4.1: judging whether the passenger lowers the head or wears a hat according to the state information;
s4.2: if the state of the passenger is head-down or wearing a hat, setting the value range of the state weight as (0, 1);
s4.3: if the state of the passenger is not head-down or wearing a hat, setting the value of the state weight as 1;
s4.4: and carrying out weighted summation on the identification scores by using the state weights to obtain an initial identification result of the corresponding passenger.
According to the invention, the state of each passenger ID is obtained from the image acquired by each camera, whether the passenger is looking down or wearing a hat is judged from that state, and the passenger's recognition score is penalized when the passenger is looking down or wearing a hat, which improves the subsequent identification accuracy for the corresponding passenger ID. It should be noted that the invention can adjust the state weight according to the camera's mounting height; the values above are suitable when the camera is 1-2 meters above the ground.
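A minimal sketch of the state-weight rule described above; the concrete penalty value 0.8 is only an assumed example of a weight in (0, 1):

```python
def state_weight(is_head_down: bool, wears_hat: bool,
                 penalty: float = 0.8) -> float:
    """Return the state weight w_k for one camera view: a penalized value in
    (0, 1) when the face is likely occluded (head down or hat), else 1."""
    if is_head_down or wears_hat:
        return penalty  # assumed example value; tunable per camera height
    return 1.0
```

Multiplying each camera's recognition score by this weight implements the penalty described in S4.2-S4.4.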
S5: acquiring a pedestrian image and pedestrian history information acquired in a non-core identification area, and determining an identification weight and a history weight according to the pedestrian image and the pedestrian history information;
in the present invention, S5 includes:
s5.1: judging whether a corresponding passenger appears in the pedestrian image within a preset time range;
s5.2: if the corresponding passenger is found in the pedestrian image within the preset time range, setting the value of the identification weight as (1,W) max ),W max Is the maximum value of the identification weight;
s5.3: if no corresponding passenger is found in the pedestrian image within the preset time range, setting the value of the identification weight as 1;
s5.4: judging whether corresponding passengers have long-term arrival records in the pedestrian historical information within a preset time period;
s5.5: if the corresponding passenger is found to have a long-term arrival record in the historical pedestrian information within a preset time period, setting the historical weight value to be (1, H) max ),H max Is the maximum value of the historical weight;
s5.6: and if the historical pedestrian information shows that the corresponding passenger does not have a long-term arrival record within the preset time period, setting the historical weight value as 1.
In the invention, if a passenger ID appeared in a non-core camera area within a recent period of time, i.e., the prior-information condition is met, the corresponding weight is set to a value greater than 1, which increases the probability that the passenger is identified when entering the station. Likewise, if the pedestrian history information shows that the ID has a long-term arrival record at the station within a certain period, the corresponding weight is set to a value greater than 1 to increase the probability of identification on arrival. By further refining the preliminary identification result with prior and historical information, the invention makes full use of all information from the subway region and further improves the accuracy and recall of passenger identification.
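The prior-information and history weighting above can be sketched as follows; the boost values and the maxima W_max and H_max are illustrative assumptions, not values fixed by the invention:

```python
def identification_weight(seen_in_non_core: bool,
                          boost: float = 1.2, w_max: float = 1.5) -> float:
    """Prior-information weight w_pre: a value in (1, W_max) when the passenger
    appeared in a non-core camera image within the preset time range, else 1."""
    return min(boost, w_max) if seen_in_non_core else 1.0

def history_weight(has_long_term_record: bool,
                   boost: float = 1.3, h_max: float = 1.5) -> float:
    """History weight w_history: a value in (1, H_max) when the passenger has a
    long-term arrival record within the preset time period, else 1."""
    return min(boost, h_max) if has_long_term_record else 1.0
```

Both weights default to 1, so a passenger with no prior sighting and no arrival history is neither boosted nor penalized.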
S6: and obtaining a fused passenger identification result according to the identification weight, the historical weight and the initial identification result.
Further, S6 comprises:
adopting the fusion formula to obtain the fused passenger identification result; wherein w_history represents the history weight, w_pre represents the identification weight, w_k represents the state weight, and S_k represents the recognition score.
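The fusion formula itself appears only as a figure in the original; the sketch below is therefore an assumed reconstruction consistent with the surrounding description, in which the state-weighted sum of per-camera recognition scores is scaled multiplicatively by the identification weight and the history weight:

```python
def fused_score(scores, state_weights, w_pre, w_history):
    """Assumed multiplicative fusion: scale the state-weighted sum of each
    camera's recognition score S_k (weight w_k) by the identification weight
    w_pre and the history weight w_history. The exact published formula may
    differ; this is a reconstruction, not the patented equation."""
    weighted_sum = sum(w * s for w, s in zip(state_weights, scores))
    return w_history * w_pre * weighted_sum
```

With weights of 1 everywhere, the fused score reduces to the plain sum of recognition scores, matching the "no penalty, no boost" defaults of the earlier steps.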
The multi-camera fused passenger identification method disclosed by the invention comprises: inputting the video streams of multiple cameras; analyzing the passenger behavior seen by each camera and judging the passenger's state; and obtaining a multi-camera fused identification result according to the passenger state, the prior information and the history information. By fusing the subway-region images acquired by the multiple cameras using each passenger's state information, recognition score, identification weight and history weight, the invention fuses passenger information captured from multiple angles and greatly improves passenger identification accuracy.
The invention also provides a multi-camera fused passenger identification system, comprising:
the passenger image acquisition module is used for acquiring a passenger human body image acquired in the core identification area;
the passenger behavior identification module is used for identifying passenger behaviors in the passenger human body image to obtain state information of each passenger;
the identification score determining module is used for inputting the human body images of the passengers into the face identification model to obtain the identification score of each passenger;
the initial identification result determining module is used for identifying the corresponding passenger according to the state information and the identification score to obtain an initial identification result;
the weight determining module is used for acquiring a pedestrian image and pedestrian history information acquired in the non-core identification area and determining an identification weight and a history weight according to the pedestrian image and the pedestrian history information;
and the passenger identification fusion module is used for obtaining a fused passenger identification result according to the identification weight, the historical weight and the initial identification result.
Preferably, the initial recognition result determining module includes:
a state information judging unit for judging whether the passenger is looking down or wearing a hat according to the state information;
a first state weight determination unit for setting the state weight to a value in the range (0, 1) when the passenger's state is looking down or wearing a hat;
a second state weight determination unit for setting the state weight to 1 when the passenger's state is neither looking down nor wearing a hat;
and the weighted summation unit is used for carrying out weighted summation on the identification scores by using the state weights to obtain an initial identification result of the corresponding passenger.
Preferably, the weight determining module includes:
a pedestrian image judging unit for judging whether the corresponding passenger appears in the pedestrian images within a preset time range;
a first identification weight determination unit for setting the identification weight to a value in (1, W_max) when the corresponding passenger is found in the pedestrian images within the preset time range, where W_max is the maximum value of the identification weight;
a second identification weight determination unit for setting the identification weight to 1 when no corresponding passenger is found in the pedestrian images within the preset time range;
a history information judging unit for judging whether the corresponding passenger has a long-term arrival record in the pedestrian history information within a preset time period;
a first history weight determination unit for setting the history weight to a value in (1, H_max) when the corresponding passenger is found in the pedestrian history information to have a long-term arrival record within the preset time period, where H_max is the maximum value of the history weight;
a second history weight determination unit for setting the history weight to 1 when the corresponding passenger has no long-term arrival record within the preset time period in the pedestrian history information.
Preferably, the passenger identification fusion module includes:
a passenger identification fusion unit for adopting the fusion formula to obtain the fused passenger identification result; wherein w_history represents the history weight, w_pre represents the identification weight, w_k represents the state weight, and S_k represents the recognition score.
By fusing the subway-region images acquired by the multiple cameras using each passenger's state information, recognition score, identification weight and history weight to obtain the passenger identification result, the invention fuses passenger information captured from multiple angles and greatly improves passenger identification accuracy.
The invention also provides an electronic device comprising a bus, a transceiver, a memory, a processor, and a computer program stored in the memory and runnable on the processor, the transceiver, the memory and the processor being connected by the bus. When executed by the processor, the computer program realizes each process of the multi-camera fused passenger identification method embodiment described above and achieves the same technical effects; the description is not repeated here.
The invention also provides a computer-readable storage medium storing a computer program which, when executed by a processor, implements the steps and processes of the multi-camera fused passenger identification method embodiment described above and achieves the same technical effects; the description is not repeated here.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.
Claims (8)
1. A multi-camera fused passenger identification method, characterized by comprising the following steps:
Step 1: acquiring passenger human-body images collected in a core identification area;
Step 2: identifying passenger behaviors in the passenger human-body images to obtain state information of each passenger;
Step 3: inputting the passenger human-body images into a face recognition model to obtain a recognition score for each passenger;
Step 4: identifying the corresponding passenger according to the state information and the recognition score to obtain an initial identification result;
Step 5: acquiring pedestrian images and pedestrian history information collected in a non-core identification area, and determining an identification weight and a history weight according to the pedestrian images and the pedestrian history information;
the step 5: acquiring a pedestrian image and pedestrian history information acquired in a non-core identification area, and determining an identification weight and a history weight according to the pedestrian image and the pedestrian history information, wherein the identification weight comprises the following steps:
step 5.1: judging whether a corresponding passenger appears in the pedestrian image within a preset time range;
and step 5.2: if the corresponding passenger is found in the pedestrian image within the preset time range, setting the value of the identification weight as (1,W) max ),W max Is the maximum value of the identification weight;
step 5.3: if no corresponding passenger is found in the pedestrian image within a preset time range, setting the value of the identification weight as 1;
step 5.4: judging whether corresponding passengers have long-term arrival records in the pedestrian historical information within a preset time period;
step 5.5: if the corresponding passenger is found to have a long-term arrival record in the historical pedestrian information within a preset time period, setting the historical weight value to be (1, H) max ),H max Is the maximum value of the historical weight;
step 5.6: if the fact that the corresponding passenger does not have a long-term arrival record in the preset time period is found in the pedestrian history information, setting a history weight value as 1;
and 6: and obtaining a fused passenger identification result according to the identification weight, the historical weight and the initial identification result.
2. The multi-camera fused passenger identification method according to claim 1, wherein said step 4 of identifying the corresponding passenger according to the state information and the recognition score to obtain an initial identification result comprises:
Step 4.1: judging whether the passenger is looking down or wearing a hat according to the state information;
Step 4.2: if the passenger's state is looking down or wearing a hat, setting the state weight to a value in the range (0, 1);
Step 4.3: if the passenger's state is neither looking down nor wearing a hat, setting the state weight to 1;
Step 4.4: performing a weighted summation of the recognition scores using the state weights to obtain an initial identification result for the corresponding passenger.
3. The multi-camera fused passenger identification method according to claim 2, wherein said step 6 of obtaining a fused passenger identification result according to the identification weight, the history weight and the initial identification result comprises:
adopting the fusion formula to obtain the fused passenger identification result; wherein w_history represents the history weight, w_pre represents the identification weight, w_k represents the state weight, and S_k represents the recognition score.
4. A multi-camera fused passenger identification system, comprising:
the passenger image acquisition module is used for acquiring a passenger human body image acquired in the core identification area;
the passenger behavior identification module is used for identifying passenger behaviors in the passenger human body image to obtain state information of each passenger;
the identification score determining module is used for inputting the human body images of the passengers into a face identification model to obtain the identification score of each passenger;
the initial identification result determining module is used for identifying the corresponding passenger according to the state information and the identification score to obtain an initial identification result;
the weight determining module is used for acquiring a pedestrian image and pedestrian history information acquired in a non-core identification area and determining an identification weight and a history weight according to the pedestrian image and the pedestrian history information;
the weight determination module comprises:
the pedestrian image judging unit is used for judging whether a corresponding passenger appears in the pedestrian image within a preset time range;
a first recognition weight determination unit, configured to set the recognition weight to a value in (1, W_max) when the corresponding passenger is found within the preset time range in the pedestrian image, where W_max is the maximum value of the recognition weight;
a second recognition weight determination unit, configured to set the recognition weight to 1 when no corresponding passenger is found within the preset time range in the pedestrian image;
the history information judging unit is used for judging whether corresponding passengers in the pedestrian history information have long-term arrival records in a preset time period;
a first history weight determination unit, configured to set the history weight to a value in (1, H_max) when the pedestrian history information shows that the corresponding passenger has a long-term arrival record within the preset time period, where H_max is the maximum value of the history weight;
the second history weight determining unit is used for setting the value of the history weight to be 1 when the fact that no long-term arrival record of the corresponding passenger exists in the pedestrian history information within the preset time period is found;
and the passenger identification fusion module is used for obtaining a fused passenger identification result according to the identification weight, the historical weight and the initial identification result.
5. The multi-camera fused passenger identification system of claim 4, wherein the initial identification result determining module comprises:
the state information judging unit is used for judging whether the passenger lowers the head or wears a hat according to the state information;
a first state weight determination unit for setting a value range of the state weight to (0, 1) when the state of the passenger is a low head or wearing a hat;
a second state weight determination unit for setting the value of the state weight to 1 when the state of the passenger is not the head-down state or the hat-on state;
and the weighted summation unit is used for carrying out weighted summation on the identification scores by utilizing the state weights to obtain an initial identification result of the corresponding passenger.
6. The multi-camera fused passenger identification system of claim 5, wherein the passenger identification fusion module comprises:
a passenger identification fusion unit, configured to obtain a fused passenger identification result by the formula, wherein w_history represents the history weight, w_pre represents the recognition weight, w_k represents the state weight, and S_k represents the recognition score.
7. An electronic device comprising a bus, a transceiver, a memory, a processor and a computer program stored on the memory and executable on the processor, the transceiver, the memory and the processor being connected via the bus, characterized in that the computer program, when executed by the processor, implements the steps in a multi-camera fused passenger identification method according to any of claims 1 to 3.
8. A computer-readable storage medium, on which a computer program is stored which, when being executed by a processor, carries out the steps of a multi-camera fused passenger identification method according to any one of claims 1 to 3.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202210160362.1A CN114550088B (en) | 2022-02-22 | 2022-02-22 | Multi-camera fused passenger identification method and system and electronic equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN114550088A (en) | 2022-05-27
CN114550088B (en) | 2022-12-13
Family
ID=81677753
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202210160362.1A Active CN114550088B (en) | 2022-02-22 | 2022-02-22 | Multi-camera fused passenger identification method and system and electronic equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN114550088B (en) |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN103886284A (en) * | 2014-03-03 | 2014-06-25 | 小米科技有限责任公司 | Character attribute information identification method and device and electronic device |
CN112116811A (en) * | 2020-09-23 | 2020-12-22 | 佳都新太科技股份有限公司 | Method and device for identifying and determining riding path |
CN112562105A (en) * | 2019-09-06 | 2021-03-26 | 北京国双科技有限公司 | Security check method and device, storage medium and electronic equipment |
CN112990518A (en) * | 2019-12-12 | 2021-06-18 | 深圳先进技术研究院 | Real-time prediction method and device for destination station of individual subway passenger |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN105956518A (en) * | 2016-04-21 | 2016-09-21 | 腾讯科技(深圳)有限公司 | Face identification method, device and system |
CN107832730A (en) * | 2017-11-23 | 2018-03-23 | 高域(北京)智能科技研究院有限公司 | Improve the method and face identification system of face recognition accuracy rate |
CN110263658A (en) * | 2019-05-25 | 2019-09-20 | 周建萍 | A kind of subway charge platform and its method based on face recognition |
CN112001932B (en) * | 2020-09-01 | 2023-10-31 | 腾讯科技(深圳)有限公司 | Face recognition method, device, computer equipment and storage medium |
CN113887427A (en) * | 2021-09-30 | 2022-01-04 | 联想(北京)有限公司 | Face recognition method and device |
CN113788050B (en) * | 2021-10-12 | 2022-09-23 | 北京城建设计发展集团股份有限公司 | Rail transit driving command system and two-dimensional data presentation method |
CN113920568A (en) * | 2021-11-02 | 2022-01-11 | 中电万维信息技术有限责任公司 | Face and human body posture emotion recognition method based on video image |
- 2022-02-22: CN application CN202210160362.1A granted as patent CN114550088B (active)
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP6570731B2 (en) | Method and system for calculating passenger congestion | |
CN110235083B (en) | Unsupervised learning of object recognition methods and systems | |
CN106541968B (en) | Recognition method of a real-time subway carriage prompt system based on visual analysis | |
CN105844229B (en) | A kind of calculation method and its system of passenger's crowding | |
CN103366506A (en) | Device and method for automatically monitoring telephone call behavior of driver when driving | |
CN106203513A (en) | Multi-target detection and tracking statistical method based on pedestrian head and shoulders | |
CN114023062B (en) | Traffic flow information monitoring method based on deep learning and edge calculation | |
CN112287827A (en) | Complex environment pedestrian mask wearing detection method and system based on intelligent lamp pole | |
CN112084928A (en) | Road traffic accident detection method based on visual attention mechanism and ConvLSTM network | |
Chang et al. | Video analytics in smart transportation for the AIC'18 challenge | |
CN110633671A (en) | Bus passenger flow real-time statistical method based on depth image | |
CN108334831A (en) | A kind of monitoring image processing method, monitoring terminal and system | |
CN111339811B (en) | Image processing method, device, equipment and storage medium | |
US11557133B1 (en) | Automatic license plate recognition | |
CN112349016A (en) | Intelligent access control system based on big data analysis | |
CN115035564A (en) | Face recognition method, system and related components based on intelligent patrol car camera | |
CN114550088B (en) | Multi-camera fused passenger identification method and system and electronic equipment | |
CN113343960A (en) | Method for estimating and early warning passenger flow retained in subway station in real time | |
CN113591643A (en) | Underground vehicle station entering and exiting detection system and method based on computer vision | |
CN110517251B (en) | Scenic spot area overload detection and early warning system and method | |
CN112258707A (en) | Intelligent access control system based on face recognition | |
CN116681722A (en) | Traffic accident detection method based on isolated forest algorithm and target tracking | |
CN116311166A (en) | Traffic obstacle recognition method and device and electronic equipment | |
CN112989883B (en) | Method for identifying obstacle in front of train | |
CN210515650U (en) | Intelligent environment-friendly electronic snapshot system for black smoke vehicle on road |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||