WO2017156772A1 - Method and system for calculating passenger crowding degree - Google Patents
Method and system for calculating passenger crowding degree
- Publication number
- WO2017156772A1 (PCT/CN2016/076740)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- vehicle
- passengers
- passenger
- getting
- human head
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/44—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
- G06V10/443—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
- G06V10/446—Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering using Haar-like filters, e.g. using integral image techniques
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06F—ELECTRIC DIGITAL DATA PROCESSING
- G06F18/00—Pattern recognition
- G06F18/20—Analysing
- G06F18/21—Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
- G06F18/214—Generating training patterns; Bootstrap methods, e.g. bagging or boosting
- G06F18/2148—Generating training patterns; Bootstrap methods, e.g. bagging or boosting characterised by the process organisation or structure, e.g. boosting cascade
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/215—Motion-based segmentation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/70—Arrangements for image or video recognition or understanding using pattern recognition or machine learning
- G06V10/77—Processing image or video features in feature spaces; using data integration or data reduction, e.g. principal component analysis [PCA] or independent component analysis [ICA] or self-organising maps [SOM]; Blind source separation
- G06V10/774—Generating sets of training patterns; Bootstrap methods, e.g. bagging or boosting
- G06V10/7747—Organisation of the process, e.g. bagging or boosting
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/52—Surveillance or monitoring of activities, e.g. for recognising suspicious objects
- G06V20/53—Recognition of crowd images, e.g. recognition of crowd congestion
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/50—Context or environment of the image
- G06V20/59—Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30196—Human being; Person
- G06T2207/30201—Face
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30242—Counting objects in image
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30248—Vehicle exterior or interior
- G06T2207/30252—Vehicle exterior; Vicinity of vehicle
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/40—Extraction of image or video features
- G06V10/62—Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; Pattern tracking
Definitions
- the present invention relates to the field of public transportation, and in particular to a method and system for calculating passenger crowding degree.
- the bus intelligent dispatching counts the number of passengers getting on, the number getting off, and the total number of passengers on each bus through a terminal installed on the bus, so as to monitor the passenger load of each segment of the current bus routes at various times.
- the bus intelligent dispatching can also use the historical data of the flow of people getting on and off at each station to conduct deep data mining, and provide a basis for formulating reasonable and efficient bus routes. Therefore, as the most important part of the intelligent bus dispatching system, accurate statistics of the number of passengers on the bus is the key to the realization of the bus intelligent dispatching system.
- the traditional methods of passenger flow statistics are manual detection, or the use of contact devices such as turnstiles and card readers.
- although the accuracy of the manual statistics method of obtaining passenger flow data can meet the requirements, it consumes a great deal of manpower and financial resources; the cost is high and the real-time performance is poor.
- although the infrared detection system can count the number of people getting on and off at the same time, the infrared device is easily interfered with by external factors, such as people passing through continuously or lingering for a long time, which can cause miscounts that fail to meet the accuracy requirements of the bus intelligent dispatching system.
- because the infrared system can only count the number of people passing through the door, it cannot judge the direction of passenger movement; that is, it cannot count passenger flow through a single door in both directions, so it cannot be applied to a BRT (bus rapid transit) system without dedicated boarding and alighting doors.
- as bus rapid transit systems become more and more widely used, the applicability of infrared-based passenger counting is getting lower and lower.
- an object of the present invention is to provide a method for calculating passenger crowding degree and a system thereof, which aim to solve the problem that the accuracy of statistical passenger flow in the prior art is not high.
- the invention provides a method for calculating passenger crowding degree, the method comprising:
- establishing a video data collection environment and starting to collect video data of passengers getting on and off the vehicle;
- reading the collected video data of the passengers getting on and off the vehicle and performing preprocessing of the continuous multi-frame images on the video data;
- performing human head recognition according to the result of the preprocessing, and using the detected human head as the target object to be tracked by the mean shift;
- determining the passenger's getting-off and getting-on behaviors in the area where the target object is located, and determining the crowdedness of the passengers in the vehicle according to the number of passengers getting on and off.
- the step of reading the collected video data of the passengers getting on and off the vehicle and performing preprocessing of the continuous multi-frame image on the video data specifically includes:
- reading the frame format of the collected video data and determining the number of frames;
- establishing a single Gaussian model for each pixel in the initial frame;
- analyzing the change of pixel points across consecutive image frames and determining whether each pixel in the frame image is a static background or a dynamic foreground;
- recording the number of times each pixel in the frame image is continuously determined to be a static background, and if the count reaches a preset threshold, updating the RGB value of the pixel into the background.
- the step of performing the human head recognition according to the result of the pre-processing and using the detected human head as the target object to be tracked by the mean shift specifically includes:
- performing human head recognition according to a cascade classifier whose detection window size is limited;
- the detected human head is used as the target object to be tracked by the mean shift.
- the step of determining the passenger's getting-off and getting-on behaviors in the area where the target object is located, and determining the crowding degree of the passengers in the vehicle according to the number of passengers getting on and off, specifically includes:
- Two detection lines are arranged in the shooting area of the camera, and if the centroid of the target object to be tracked by the mean shift crosses the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle;
- the total number of passengers in actual operation is determined by calculating the number of passengers getting off and the number of passengers boarding, and the ratio of the total number of passengers to the maximum passenger capacity in the vehicle is used to measure the crowdedness of passengers in the vehicle.
- the present invention also provides a computing system for passenger congestion, the system comprising:
- a data acquisition module for establishing a video data collection environment and starting to collect video data of passengers getting on and off the vehicle
- a pre-processing module configured to read video data of the collected passengers on and off the vehicle, and perform pre-processing of the continuous multi-frame image on the video data;
- An object determining module configured to perform human head recognition according to the result of the preprocessing, and use the detected human head as a target object to be tracked by the mean shift;
- the congestion determination module is configured to determine a passenger's getting off behavior and a boarding behavior in an area where the target object is located, and determine a passenger crowding degree in the vehicle according to the number of passengers getting on and off.
- the preprocessing module comprises:
- the read frame sub-module is configured to read a frame format of the collected video data of the passengers getting on and off the vehicle and determine the number of frames;
- a modeling sub-module for establishing a single Gaussian model for each pixel in the initial frame
- a state sub-module configured to analyze a change of pixel points of consecutive image frames and determine that each pixel in the frame image is a static background or a dynamic foreground;
- the update submodule is configured to record the number of times each pixel in the frame image is continuously determined to be a static background; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not continuously determined to be static background, the count is restarted.
- the object determining module comprises:
- an identification submodule for performing human head recognition according to a cascade classifier whose detection window size is limited;
- the target sub-module is used to detect the detected human head as the target object to be tracked by the mean shift.
- the congestion determination module includes:
- a first determining sub-module configured to set two detection lines in a shooting area of the camera, and if the centroid of the target object to be tracked by the mean shift crosses the above two detection lines, determine that the passenger is getting off or getting on the vehicle;
- a second determining sub-module configured to determine the total number of passengers in actual operation by calculating the number of passengers getting off and the number of passengers boarding, and to measure the crowdedness of passengers in the vehicle by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.
- the technical solution provided by the invention adopts static-background removal as preprocessing, which effectively overcomes the interference of illumination-intensity changes and the like with head-top recognition in the frame image; the size limitation on the detection window effectively reduces false detections, missed detections, and erroneous detections of human head tops.
- FIG. 1 is a flow chart of a method for calculating a passenger congestion degree according to an embodiment of the present invention
- FIG. 2 is a detailed flowchart of step S12 shown in FIG. 1 according to an embodiment of the present invention;
- FIG. 3 is a detailed flowchart of step S13 shown in FIG. 1 according to an embodiment of the present invention;
- FIG. 4 is a detailed flowchart of step S14 shown in FIG. 1 according to an embodiment of the present invention.
- FIG. 5 is a schematic diagram showing the internal structure of a passenger congestion degree calculation system 10 according to an embodiment of the present invention.
- FIG. 6 is a schematic diagram showing the internal structure of the preprocessing module 12 shown in FIG. 5 according to an embodiment of the present invention
- FIG. 7 is a schematic diagram showing the internal structure of the object determining module 13 shown in FIG. 5 according to an embodiment of the present invention.
- FIG. 8 is a schematic diagram showing the internal structure of the congestion determination module 14 shown in FIG. 5 according to an embodiment of the present invention.
- a specific embodiment of the present invention provides a method for calculating a passenger congestion degree, and the method mainly includes the following steps:
- S11 Establish a video data collection environment and start collecting video data of passengers getting on and off the vehicle;
- S12 Read the collected video data of passengers getting on and off and perform preprocessing of the continuous multi-frame images on the video data;
- S13 Perform human head recognition according to the result of the preprocessing, and use the detected human head as a target object to be tracked by the mean shift;
- S14 Determine the passenger's getting-off and getting-on behaviors in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.
- the calculation method of the passenger crowding degree provided by the invention adopts static-background removal as preprocessing, which effectively overcomes the interference of illumination-intensity changes and the like with head-top recognition in the frame images; the size limitation on the detection window effectively reduces false detections, missed detections, and erroneous detections of human head tops.
- FIG. 1 is a flowchart of a method for calculating a passenger congestion degree according to an embodiment of the present invention.
- step S11 a video data collection environment is established, and collection of video data of passengers getting on and off is started.
- a video data collection environment is established by constructing an embedded vehicle system, and the embedded vehicle system includes hardware modules: two cameras, an embedded device main control module, a video storage module, and an onboard hardware power supply module.
- the two cameras are placed at the top of the front and rear doors respectively, and each camera faces the ground at 90 degrees (pointing vertically downward).
- the shooting range covers: the steps of getting on and off the car, part of the road outside the door and part of the car body space inside the door.
- the in-vehicle embedded device collects the video data of the passengers getting on and off by controlling the cameras installed at the top of the front and rear doors of the bus, and temporarily stores the video data recorded by the two cameras in the video storage module for later retrieval and processing.
- the video data temporary storage method can effectively reduce the requirements on the embedded hardware, thereby reducing the cost of the device.
- step S12 the collected video data of the passengers getting on and off is read, and the video data is subjected to preprocessing of consecutive multi-frame images.
- the step S12 of reading the collected video data of the passengers getting on and off the vehicle and performing the preprocessing of the continuous multi-frame image on the video data specifically includes S121-S125, as shown in FIG. 2 .
- FIG. 2 is a detailed flowchart of step S12 shown in FIG. 1 according to an embodiment of the present invention.
- step S121 the frame format of the collected video data of the passengers getting on and off is read and the number of frames is determined.
- the embedded device main control module reads the frame format of the video data and determines the number of frames, and determines the size of the frame image m ⁇ n, where m represents the number of rows of the frame image, and n represents the number of columns of the frame image.
- step S122 a single Gaussian model is established for each pixel in the initial frame.
- a single Gaussian model is established for each pixel in the initial frame: the mean of the initial Gaussian model is the RGB value of the pixel and its variance is initialized to a constant V; a mixed Gaussian model is then gradually constructed as the pixel changes.
- the mixture density function is $P(x_t) = \sum_{i=1}^{K} w_{i,t}\,\eta(x_t, \mu_{i,t}, \Sigma_{i,t})$ with $\Sigma_{i,t} = \sigma^2_{i,t} I$, where $\eta(x_t, \mu_{i,t}, \Sigma_{i,t})$ is the i-th Gaussian distribution at time t, $I$ is a three-dimensional identity matrix, $K$ is the total number of Gaussian distribution modes, and $\mu_{i,t}$, $w_{i,t}$, $\Sigma_{i,t}$, $\sigma^2_{i,t}$ denote the mean, weight, covariance matrix, and variance of each Gaussian function, respectively.
- step S123 the change of pixel points of consecutive image frames is analyzed and it is determined that each pixel in the frame image is a static background or a dynamic foreground.
- the single Gaussian model of the initial frame gradually evolves into a mixed Gaussian model through the successive changes of the image pixels over multiple frames, so that each pixel has at least K (K ≥ 1) Gaussian functions; the change from frame to frame determines whether each pixel in the frame image is static background or dynamic foreground.
- the RGB values of the m × n pixels of each subsequent frame image are matched in turn against the K (K ≥ 1) Gaussian functions of the corresponding pixel in the previous frame image.
- if the match succeeds, the weight w_{i,t} of the matched Gaussian function is increased, the weights of the remaining (K − 1) Gaussian functions are decreased, the mean and variance of the matched function are updated, and the RGB value of the current pixel is determined to belong to the static background;
- if the match fails, none of the K Gaussian functions can describe the RGB value of the current pixel; the Gaussian function with the lowest weight is deleted, a new Gaussian function is created with the RGB value of the current pixel as its mean and V as its variance, and the RGB value of the current pixel is determined to belong to the dynamic foreground.
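The per-pixel matching and update rule of steps S122–S123 can be sketched as follows. This is a simplified illustration, not the patented implementation: the learning rate, match threshold, and initial variance V are assumed values, and weight normalization is omitted.

```python
# Minimal per-pixel mixture-of-Gaussians sketch of the matching/update
# rule described above. V, ALPHA, MATCH_SIGMAS are illustrative
# assumptions, not values taken from the patent.
V = 400.0          # initial variance for a newly created Gaussian
ALPHA = 0.05       # learning rate for weight/mean/variance updates
MATCH_SIGMAS = 2.5 # match radius in standard deviations
MAX_K = 4          # at most four Gaussians per pixel (as in the text)

def make_model(rgb):
    """Single Gaussian model for a pixel of the initial frame."""
    return [{"mean": list(rgb), "var": V, "weight": 1.0}]

def update_pixel(model, rgb):
    """Match an incoming RGB value against the pixel's Gaussians.

    Returns "background" if a Gaussian matched (static background),
    "foreground" otherwise (dynamic foreground); updates the model
    in place as described in the text."""
    for g in model:
        # Match if within MATCH_SIGMAS std deviations (summed channels).
        dist2 = sum((c - m) ** 2 for c, m in zip(rgb, g["mean"]))
        if dist2 <= (MATCH_SIGMAS ** 2) * g["var"] * 3:
            g["weight"] += ALPHA  # grow the matched function's weight
            g["mean"] = [m + ALPHA * (c - m) for c, m in zip(rgb, g["mean"])]
            g["var"] = (1 - ALPHA) * g["var"] + ALPHA * dist2 / 3
            return "background"
    # No match: delete the lowest-weight Gaussian if at capacity, then
    # create a new one with the current RGB as mean and V as variance.
    if len(model) >= MAX_K:
        model.remove(min(model, key=lambda g: g["weight"]))
    model.append({"mean": list(rgb), "var": V, "weight": ALPHA})
    return "foreground"
```

A stable pixel thus stays "background" under small lighting jitter, while a sudden change (a head passing over it) is flagged as foreground.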
- step S124 the pixel value of the pixel determined as the static background is modified, and the pixel value of the pixel determined as the dynamic foreground is not modified.
- the pixel value of the pixel determined as the static background is modified to RGB s , where RGB s represents the color with the lowest occurrence rate in daily life, and the pixel point determined as the dynamic foreground does not modify its pixel value.
- the purpose of this color modification is to reduce static-background interference with the human head detection and trajectory tracking of the subsequent steps, which greatly improves their success rate.
- step S125 the number of times each pixel point in the frame image is continuously determined to be a static background is recorded.
- if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background; if the pixel is not continuously determined to be static background, the count is restarted.
- mixed Gaussian background modeling takes time, and in a real scene the time for a passenger to get on or off the vehicle may be shorter than the time needed for the mixed Gaussian modeling, so a weight growth mechanism is added to the Gaussian functions of the original mixed Gaussian modeling algorithm.
- that is, the number of times each pixel in the frame image is continuously judged to be static background is recorded; if the count is greater than or equal to the preset threshold, the RGB value of the pixel is updated immediately, rather than waiting for a Gaussian function's weight to reach the threshold as in the original algorithm. If the pixel is not continuously judged to be static background, the count is restarted.
- each frame image has a total of m ⁇ n pixels, and the number K of Gaussian density functions belonging to each pixel should not exceed four. If more than four, the Gaussian function with the lowest weight is deleted.
- the small and isolated points produced by image preprocessing are eliminated by the image opening and closing operations.
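The opening operation used above (erosion followed by dilation) can be sketched on a binary foreground mask; the 3×3 structuring element is an assumed choice for illustration.

```python
# Sketch of the morphological "opening" (erosion then dilation) used to
# eliminate small isolated foreground points. A 3x3 structuring element
# is assumed; mask entries are 0 (background) or 1 (foreground).
def erode(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            # Keep a pixel only if its whole 3x3 neighborhood is foreground.
            out[y][x] = int(all(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)))
    return out

def dilate(mask):
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # Set a pixel if any in-bounds 3x3 neighbor is foreground.
            out[y][x] = int(any(mask[y + dy][x + dx]
                                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                                if 0 <= y + dy < h and 0 <= x + dx < w))
    return out

def opening(mask):
    return dilate(erode(mask))
```

Isolated single pixels vanish in the erosion step and are never restored, while solid blobs survive the round trip.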
- step S13 human head recognition is performed according to the result of the pre-processing, and the detected human head is used as the target object to be tracked by the mean shift.
- the step S13 of performing the human head recognition based on the result of the preprocessing and using the detected human head as the target object to be tracked by the mean shift specifically includes S131-S134, as shown in FIG. 3.
- FIG. 3 is a detailed flowchart of step S13 shown in FIG. 1 according to an embodiment of the present invention.
- step S131 a cascade classifier for determining whether a region is a human head top is made using the collected positive samples containing human heads and negative samples not containing human heads.
- Adaboost iterative training based on the LBP feature is performed on the collected positive samples containing the human head and negative samples (size 20×20) not containing the human head, to create a cascade classifier that determines whether a region belongs to a human head top.
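The LBP feature underlying the cascade training can be illustrated with the basic 3×3 operator: each neighbor is thresholded against the center pixel and the resulting bits form an 8-bit code. The neighbor ordering below is a common convention, not one specified by the patent.

```python
# Basic 3x3 LBP (local binary pattern) code for one pixel of a
# grayscale image, the kind of feature the Adaboost cascade above is
# trained on. Neighbor order (clockwise from top-left) is illustrative.
def lbp_code(img, y, x):
    """8-bit LBP code for the pixel at (y, x); img is a 2-D list."""
    center = img[y][x]
    neighbors = [img[y - 1][x - 1], img[y - 1][x], img[y - 1][x + 1],
                 img[y][x + 1], img[y + 1][x + 1], img[y + 1][x],
                 img[y + 1][x - 1], img[y][x - 1]]
    code = 0
    for bit, value in enumerate(neighbors):
        if value >= center:  # threshold each neighbor against the center
            code |= 1 << bit
    return code
```

Histograms of these codes over sub-windows give illumination-robust texture features for the weak classifiers.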
- step S132 the size range of the detection window used by the cascade classifier for head-top detection is limited.
- the detection window size w of the cascade classifier is set to lie between W min and W max , where W min and W max are determined according to the size of the bus. The position of the human head is detected by moving the window w over the frame image, with the moving-window detection computed by the integral image method. If no human head is detected, the detection window is enlarged by a factor of 1.5, without exceeding the range of W min to W max .
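The integral-image window sums and the 1.5× window growth schedule can be sketched as follows; W_MIN and W_MAX here are illustrative placeholders for the bounds that would be derived from the bus geometry.

```python
# Sketch of the integral-image sum and the 1.5x window growth schedule
# described above. W_MIN / W_MAX are illustrative assumptions.
W_MIN, W_MAX = 24, 80

def integral_image(img):
    """(h+1) x (w+1) summed-area table for a 2-D list of numbers."""
    h, w = len(img), len(img[0])
    ii = [[0] * (w + 1) for _ in range(h + 1)]
    for y in range(h):
        row = 0
        for x in range(w):
            row += img[y][x]
            ii[y + 1][x + 1] = ii[y][x + 1] + row
    return ii

def window_sum(ii, y, x, size):
    """Sum over the size x size window with top-left corner (y, x),
    in O(1) via four table lookups."""
    return (ii[y + size][x + size] - ii[y][x + size]
            - ii[y + size][x] + ii[y][x])

def window_sizes():
    """Detection window sizes: start at W_MIN, grow by 1.5x per pass,
    never exceeding W_MAX."""
    sizes, w = [], W_MIN
    while w <= W_MAX:
        sizes.append(w)
        w = int(w * 1.5)
    return sizes
```

The constant-time `window_sum` is what makes sliding the detection window over every frame position affordable on embedded hardware.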
- step S133 human head recognition is performed according to the cascade classifier whose detection window size is limited.
- candidate head samples that were classified as head tops by the cascade classifier of step S131 but do not satisfy the size-range condition of step S132 are discarded; detection cannot rely on the step-S131 cascade classifier alone.
- only head-top samples that satisfy both step S131 and step S132 are recognized and passed to the subsequent steps.
- step S134 the detected human head is used as the target object to be tracked by the mean shift (Meanshift).
- step S14 the passenger's getting off behavior and the boarding behavior are determined in the area where the target object is located, and the in-vehicle passenger crowding degree is determined according to the number of passengers getting on and off.
- the step S14 of determining the passenger getting-off behavior and the boarding behavior in the area where the target object is located, and determining the in-vehicle passenger crowding degree according to the number of passengers getting on and off the vehicle specifically includes S141-S142. As shown in Figure 4.
- FIG. 4 is a detailed flowchart of step S14 shown in FIG. 1 according to an embodiment of the present invention.
- step S141 two detection lines are disposed in the imaging area of the camera, and if the centroid of the target object to be tracked by the mean shift crosses the above two detection lines, it is determined that the passenger is getting off or getting on the vehicle.
- two detection lines are disposed in the imaging area of the camera, placed on the road surface at a certain distance outside the vehicle door and on the vehicle floor at a certain distance inside the door, and the detected human head is used as the target object to be tracked by the mean shift.
- the region histogram of the current frame is calculated using the estimated center position y 0 of the target object in frame (n−1) as the center coordinate of the search window, and the similarity between the histograms of the target template and the candidate region template is measured by the BH (Bhattacharyya) coefficient.
- the larger the BH coefficient, the higher the similarity; the position of the maximum BH coefficient is the new target position. By calculating the centroid coordinates of the target object in each frame and checking when the centroid crosses the upper and lower detection lines, the passenger's getting-off and getting-on behaviors are determined.
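The similarity measure above can be sketched as the Bhattacharyya coefficient between normalized color histograms, with the candidate position of maximum coefficient taken as the new target position. Function names and the histogram layout are illustrative assumptions.

```python
import math

# Sketch of the BH (Bhattacharyya) coefficient used above to compare
# the target-template histogram with each candidate-region histogram;
# the candidate with the largest coefficient gives the new position.
def bhattacharyya(p, q):
    """BH coefficient of two normalized histograms; 1.0 = identical."""
    return sum(math.sqrt(pi * qi) for pi, qi in zip(p, q))

def best_candidate(target_hist, candidates):
    """candidates: list of (position, histogram) pairs.
    Returns the position whose histogram is most similar to the target."""
    return max(candidates,
               key=lambda pos_h: bhattacharyya(target_hist, pos_h[1]))[0]
```

In the full mean shift iteration this comparison is repeated around the previous center y0 until the position converges within the search window.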
- step S142 the total number of passengers in actual operation is determined by calculating the number of passengers getting off and the number of passengers boarding, and the in-vehicle passenger crowding degree is measured by the ratio between the total number of passengers and the maximum passenger capacity of the vehicle.
- the number of boarding passengers minus the number of alighting passengers gives the total number of passengers in the vehicle. Dividing the total number of passengers in actual operation by the maximum passenger capacity of the bus yields a congestion factor describing the crowding in the vehicle: the higher the factor, the more crowded, and vice versa.
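The congestion factor above reduces to a short computation; the capacity value and function names below are illustrative assumptions.

```python
# Sketch of the crowding computation: a running on-board count derived
# from boarding/alighting events, divided by the maximum capacity.
# MAX_CAPACITY is an illustrative assumption for one bus model.
MAX_CAPACITY = 90

def congestion(boarded, alighted):
    """Congestion factor = passengers currently on board / capacity.
    Clamped at zero so counting noise cannot produce a negative load."""
    onboard = max(0, boarded - alighted)
    return onboard / MAX_CAPACITY
```

A dispatching system would compare this factor against thresholds (e.g. comfortable / crowded / full) per route segment and time of day.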
- the calculation method of the passenger crowding degree provided by the invention adopts static-background removal as preprocessing, which effectively overcomes the interference of illumination-intensity changes and the like with head-top recognition in the frame images; the size limitation on the detection window effectively reduces false detections, missed detections, and erroneous detections of human head tops.
- the embodiment of the present invention further provides a calculation system 10 for passenger crowding degree, which mainly includes:
- the data collection module 11 is configured to establish a video data collection environment, and start collecting video data of passengers getting on and off the vehicle;
- the pre-processing module 12 is configured to read video data of the collected passengers on and off the vehicle, and perform pre-processing of the continuous multi-frame image on the video data;
- the object determining module 13 is configured to perform human head recognition according to the result of the preprocessing, and use the detected human head as a target object to be tracked by the mean shift;
- the congestion determination module 14 is configured to determine the passenger's getting off behavior and the boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.
- the calculation system 10 for passenger crowding provided by the present invention adopts static-background removal as preprocessing, which effectively overcomes interference with head-top recognition in the frame images; the size limitation on the detection window effectively reduces false detections, missed detections, and erroneous detections of human head tops.
- the calculation system 10 for passenger crowding degree mainly includes a data acquisition module 11, a preprocessing module 12, an object determination module 13, and a congestion determination module 14.
- the data collection module 11 is configured to establish a video data collection environment and start collecting video data of passengers getting on and off the vehicle.
- the specific collection method of the video data is described in detail in the foregoing step S11, and the description is not repeated here.
- the pre-processing module 12 is configured to read video data of the collected passengers on and off the vehicle, and perform pre-processing of the continuous multi-frame image on the video data.
- the pre-processing module 12 specifically includes a read frame sub-module 121, a modeling sub-module 122, a status sub-module 123, a modification sub-module 124, and an update sub-module 125, as shown in FIG. 6.
- FIG. 6 is a schematic diagram showing the internal structure of the pre-processing module 12 shown in FIG. 5 according to an embodiment of the present invention.
- the read frame sub-module 121 is configured to read the frame format of the collected video data of the passengers getting on and off and determine the number of frames.
- the modeling sub-module 122 is configured to establish a single Gaussian model for each pixel in the initial frame.
- the specific method for establishing the Gaussian model is described in detail in the foregoing step S122, and the description is not repeated here.
- the status sub-module 123 is configured to analyze the change of pixel points of consecutive image frames and determine that each pixel point in the frame image is a static background or a dynamic foreground. In the embodiment, the specific method of the determination is described in detail in the foregoing step S123, and the description is not repeated here.
- the modification sub-module 124 is configured to modify the pixel value of the pixel determined as the static background, and the pixel value of the pixel determined as the dynamic foreground is not modified.
- the update sub-module 125 is configured to record the number of times each pixel in the frame image is continuously determined to be a static background; if the count is greater than or equal to a preset threshold, the RGB value of the pixel is immediately updated into the background, and if the pixel is not continuously determined to be static background, the count is restarted. In this embodiment, the specific updating method is described in detail in the foregoing step S125 and is not repeated here.
- the object determining module 13 is configured to perform human head recognition according to the result of the pre-processing, and use the detected human head as a target object to be tracked by the mean shift.
- the object determining module 13 specifically includes a production sub-module 131, a defining sub-module 132, an identifying sub-module 133, and a target sub-module 134, as shown in FIG. 7.
- FIG. 7 is a schematic diagram showing the internal structure of the object determining module 13 shown in FIG. 5 according to an embodiment of the present invention.
- the production sub-module 131 is configured to make a cascade classifier for determining whether a region is a human head top, using the collected positive samples containing human heads and negative samples not containing human heads.
- Adaboost iterative training based on the LBP feature is performed on the collected positive samples containing the human head and negative samples (size 20×20) not containing the human head, to create a cascade classifier that determines whether a region is a human head top.
- the defining sub-module 132 is configured to define a size range of the detection window in the cascade classifier for detecting the top of the human head.
- the detection window size w of the cascade classifier is set to lie between W min and W max , where W min and W max are determined according to the size of the bus. The position of the human head is detected by moving the window w over the frame image, with the moving-window detection computed by the integral image method. If no human head is detected, the detection window is enlarged by a factor of 1.5, without exceeding the range of W min to W max .
- the identification sub-module 133 is configured to perform human head recognition according to a cascade classifier whose detection window size is limited.
- the target sub-module 134 is configured to use the detected human head as a target object to be tracked by the mean shift.
- the congestion determination module 14 is configured to determine the passenger's getting off behavior and the boarding behavior in the area where the target object is located, and determine the in-vehicle passenger crowding degree according to the number of passengers getting on and off.
- the congestion determination module 14 specifically includes a first determination sub-module 141 and a second determination sub-module 142, as shown in FIG. 8.
- FIG. 8 is a schematic diagram showing the internal structure of the congestion determination module 14 shown in FIG. 5 according to an embodiment of the present invention.
- the first determining sub-module 141 is configured to set two detection lines in the shooting area of the camera; if the centroid of the target object tracked by mean shift crosses both detection lines, it is determined that the passenger is getting off or getting on the vehicle.
- the two detection lines are disposed in the imaging area of the camera: one on the road surface at a certain distance outside the vehicle door, and one on the vehicle floor at a certain distance inside the door. The detected human head is used as the target object for mean shift tracking. Using the estimated center position y 0 of the target object in frame (n-1) as the center coordinate of the search window, the region histogram of the current frame is calculated, and the similarity between the histograms of the target template and the candidate region template is measured by the Bhattacharyya (BH) coefficient: the larger the BH coefficient, the higher the similarity, and the position that maximizes the BH coefficient is taken as the new target position. By calculating the centroid coordinates of the target object in each frame, the passenger's getting-off behavior and getting-on behavior are determined from the centroid crossing the two detection lines.
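The two ingredients of that decision — histogram similarity and line crossing — can be sketched as below. The coordinate convention (smaller y further inside the vehicle), the function names, and the start/end crossing test are assumptions of this sketch; the real system evaluates the BH coefficient inside the mean shift search every frame.

```python
import numpy as np

def bhattacharyya(p, q):
    """BH coefficient between two normalized histograms; values closer
    to 1.0 mean higher similarity between target and candidate."""
    return float(np.sum(np.sqrt(p * q)))

def classify_track(centroid_ys, inner_line, outer_line):
    """Decide get-on vs get-off for one tracked head trajectory.

    inner_line lies on the vehicle floor, outer_line on the road
    outside the door, with inner_line < outer_line in image
    coordinates (an assumed layout). A centroid that crosses both
    lines moving outward is a get-off; both lines moving inward is a
    get-on; anything else is not counted.
    """
    start, end = centroid_ys[0], centroid_ys[-1]
    if start < inner_line and end > outer_line:
        return "off"
    if start > outer_line and end < inner_line:
        return "on"
    return None
```

Requiring both lines to be crossed, rather than one, filters out passengers who hover in the doorway without actually boarding or alighting.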
- the second determining sub-module 142 is configured to determine the total number of passengers in actual operation by counting the number of passengers getting off and the number of passengers getting on, and to measure the in-vehicle passenger crowdedness by the ratio between that total and the maximum passenger capacity of the vehicle.
- the number of passengers getting on minus the number of passengers getting off gives the total number of passengers currently in the vehicle.
- a crowding factor describing the degree of congestion in the vehicle can thus be obtained: the higher the factor, the more crowded the vehicle, and vice versa.
- the calculation system 10 for passenger crowdedness provided by the present invention can, through pre-processing that removes the static background, effectively overcome background interference with the head-top images in the frame images, and, by limiting the size of the detection window, can effectively reduce false detections and missed detections of the top of the head.
- each unit included is divided according to functional logic only, but is not limited to the above division, as long as the corresponding function can be implemented; in addition, the specific names of the functional units are used only to distinguish them from one another and are not intended to limit the scope of the present invention.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- General Physics & Mathematics (AREA)
- Physics & Mathematics (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Artificial Intelligence (AREA)
- Evolutionary Computation (AREA)
- Databases & Information Systems (AREA)
- Software Systems (AREA)
- Medical Informatics (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Computing Systems (AREA)
- Data Mining & Analysis (AREA)
- Life Sciences & Earth Sciences (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- General Engineering & Computer Science (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Abstract
Description
Claims (8)
- A method for calculating passenger crowdedness, characterized in that the method comprises: establishing a video data collection environment, and starting to collect video data of passengers getting on and off a vehicle; reading the collected video data of passengers getting on and off, and pre-processing consecutive multi-frame images of the video data; performing human head recognition according to the result of the pre-processing, and using the detected human head as the target object to be tracked by mean shift; determining passengers' getting-off behavior and getting-on behavior in the region where the target object is located, and determining the in-vehicle passenger crowdedness according to the numbers of passengers getting on and off.
- The method for calculating passenger crowdedness according to claim 1, characterized in that the step of reading the collected video data of passengers getting on and off and pre-processing consecutive multi-frame images of the video data specifically comprises: reading the frame format of the collected video data of passengers getting on and off and determining the number of frames; establishing a single Gaussian model for each pixel of the initial frame; analyzing the changes of the pixels over several consecutive frames and determining whether each pixel of the frame image belongs to the static background or to the dynamic foreground; modifying the pixel values of the pixels determined to be static background, and leaving unmodified the pixel values of the pixels determined to be dynamic foreground; recording the number of times each pixel of the frame image is consecutively determined to be static background, and, if this count is greater than or equal to a preset threshold, immediately updating the RGB value of the pixel as background, or restarting the count if the pixel is not consecutively determined to be static background.
- The method for calculating passenger crowdedness according to claim 2, characterized in that the step of performing human head recognition according to the result of the pre-processing and using the detected human head as the target object to be tracked by mean shift specifically comprises: creating, from the collected positive samples containing human heads and negative samples not containing human heads, a cascade classifier for determining whether a region is the top of a human head; limiting the size range of the detection window for detecting the top of the human head in the cascade classifier; performing human head recognition according to the cascade classifier with the limited detection window size; and using the detected human head as the target object to be tracked by mean shift.
- The method for calculating passenger crowdedness according to claim 3, characterized in that the step of determining passengers' getting-off behavior and getting-on behavior in the region where the target object is located and determining the in-vehicle passenger crowdedness according to the numbers of passengers getting on and off specifically comprises: setting two detection lines in the area photographed by the camera, and, if the centroid of the target object tracked by mean shift crosses both detection lines, determining that the passenger is getting off or getting on; determining the total number of passengers in actual operation by counting the numbers of passengers getting off and getting on, and measuring the in-vehicle passenger crowdedness by the ratio between this total and the maximum passenger capacity of the vehicle.
- A system for calculating passenger crowdedness, characterized in that the system comprises: a data collection module, configured to establish a video data collection environment and start collecting video data of passengers getting on and off; a pre-processing module, configured to read the collected video data of passengers getting on and off and pre-process consecutive multi-frame images of the video data; an object determining module, configured to perform human head recognition according to the result of the pre-processing and use the detected human head as the target object to be tracked by mean shift; and a congestion determination module, configured to determine passengers' getting-off behavior and getting-on behavior in the region where the target object is located and determine the in-vehicle passenger crowdedness according to the numbers of passengers getting on and off.
- The system for calculating passenger crowdedness according to claim 5, characterized in that the pre-processing module comprises: a frame reading sub-module, configured to read the frame format of the collected video data of passengers getting on and off and determine the number of frames; a modeling sub-module, configured to establish a single Gaussian model for each pixel of the initial frame; a state sub-module, configured to analyze the changes of the pixels over several consecutive frames and determine whether each pixel of the frame image belongs to the static background or to the dynamic foreground; a modification sub-module, configured to modify the pixel values of the pixels determined to be static background and leave unmodified the pixel values of the pixels determined to be dynamic foreground; and an update sub-module, configured to record the number of times each pixel of the frame image is consecutively determined to be static background, and, if this count is greater than or equal to a preset threshold, immediately update the RGB value of the pixel as background, or restart the count if the pixel is not consecutively determined to be static background.
- The system for calculating passenger crowdedness according to claim 6, characterized in that the object determining module comprises: a production sub-module, configured to create, from the collected positive samples containing human heads and negative samples not containing human heads, a cascade classifier for determining whether a region is the top of a human head; a defining sub-module, configured to limit the size range of the detection window for detecting the top of the human head in the cascade classifier; an identifying sub-module, configured to perform human head recognition according to the cascade classifier with the limited detection window size; and a target sub-module, configured to use the detected human head as the target object to be tracked by mean shift.
- The system for calculating passenger crowdedness according to claim 7, characterized in that the congestion determination module comprises: a first determining sub-module, configured to set two detection lines in the area photographed by the camera, and, if the centroid of the target object tracked by mean shift crosses both detection lines, determine that the passenger is getting off or getting on; and a second determining sub-module, configured to determine the total number of passengers in actual operation by counting the numbers of passengers getting off and getting on, and to measure the in-vehicle passenger crowdedness by the ratio between this total and the maximum passenger capacity of the vehicle.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
- PCT/CN2016/076740 WO2017156772A1 (zh) | 2016-03-18 | 2016-03-18 | Method and system for calculating passenger crowdedness degree |
- JP2018502630A JP6570731B2 (ja) | 2016-03-18 | 2016-03-18 | Method and system for calculating passenger crowdedness degree |
US15/628,605 US10223597B2 (en) | 2016-03-18 | 2017-06-20 | Method and system for calculating passenger crowdedness degree |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
- PCT/CN2016/076740 WO2017156772A1 (zh) | 2016-03-18 | 2016-03-18 | Method and system for calculating passenger crowdedness degree |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US15/628,605 Continuation US10223597B2 (en) | 2016-03-18 | 2017-06-20 | Method and system for calculating passenger crowdedness degree |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2017156772A1 true WO2017156772A1 (zh) | 2017-09-21 |
Family
ID=59852232
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/CN2016/076740 WO2017156772A1 (zh) | 2016-03-18 | 2016-03-18 | Method and system for calculating passenger crowdedness degree |
Country Status (3)
Country | Link |
---|---|
US (1) | US10223597B2 (zh) |
JP (1) | JP6570731B2 (zh) |
WO (1) | WO2017156772A1 (zh) |
Cited By (9)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108345878A (zh) * | 2018-04-16 | 2018-07-31 | 泰华智慧产业集团股份有限公司 | 基于视频的公共交通工具客流量监测方法及系统 |
CN111046788A (zh) * | 2019-12-10 | 2020-04-21 | 北京文安智能技术股份有限公司 | 一种停留人员的检测方法、装置及系统 |
CN112183469A (zh) * | 2020-10-27 | 2021-01-05 | 华侨大学 | 一种公共交通的拥挤度识别及自适应调整方法、系统、设备及计算机可读存储介质 |
CN112434566A (zh) * | 2020-11-04 | 2021-03-02 | 深圳云天励飞技术股份有限公司 | 客流统计方法、装置、电子设备及存储介质 |
CN112580633A (zh) * | 2020-12-25 | 2021-03-30 | 博大视野(厦门)科技有限公司 | 一种公共交通客流统计装置及方法 |
CN112785462A (zh) * | 2020-02-27 | 2021-05-11 | 吴秋琴 | 基于大数据的景区客流量统计评估系统 |
CN113255480A (zh) * | 2021-05-11 | 2021-08-13 | 中国联合网络通信集团有限公司 | 公交车内拥挤程度识别方法、系统、计算机设备及介质 |
CN113420693A (zh) * | 2021-06-30 | 2021-09-21 | 成都新潮传媒集团有限公司 | 一种门状态检测方法、装置、轿厢乘客流量统计方法及设备 |
CN114596537A (zh) * | 2022-05-10 | 2022-06-07 | 深圳市海清视讯科技有限公司 | 区域人流数据确定方法、装置、设备及存储介质 |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10460198B2 (en) * | 2015-12-23 | 2019-10-29 | Fotonation Limited | Image processing system |
US10699572B2 (en) | 2018-04-20 | 2020-06-30 | Carrier Corporation | Passenger counting for a transportation system |
CN108960133B (zh) * | 2018-07-02 | 2022-01-11 | 京东方科技集团股份有限公司 | 乘客流量监控的方法、电子设备、系统以及存储介质 |
CN110895662A (zh) * | 2018-09-12 | 2020-03-20 | 杭州海康威视数字技术股份有限公司 | 车辆超载报警方法、装置、电子设备及存储介质 |
CN109684998A (zh) * | 2018-12-21 | 2019-04-26 | 佛山科学技术学院 | 一种基于图像处理的地铁人流管理的系统和方法 |
CN110490110A (zh) * | 2019-01-29 | 2019-11-22 | 王馨悦 | 基于人体工程学特征检测的客流计数装置及方法 |
JP7200715B2 (ja) | 2019-02-05 | 2023-01-10 | トヨタ自動車株式会社 | 情報処理システム、プログラム、及び車両 |
CN111079488B (zh) * | 2019-05-27 | 2023-09-26 | 广东快通信息科技有限公司 | 一种基于深度学习的公交客流检测系统及方法 |
CN112026686B (zh) * | 2019-06-04 | 2022-04-12 | 上海汽车集团股份有限公司 | 一种自动调节车辆座椅位置的方法及装置 |
CN112347814A (zh) * | 2019-08-07 | 2021-02-09 | 中兴通讯股份有限公司 | 客流估计与展示方法、系统及计算机可读存储介质 |
CN110633671A (zh) * | 2019-09-16 | 2019-12-31 | 天津通卡智能网络科技股份有限公司 | 基于深度图像的公交车客流实时统计方法 |
CN112541374B (zh) * | 2019-09-20 | 2024-04-30 | 南京行者易智能交通科技有限公司 | 一种基于深度学习的乘客属性的获取方法、装置及模型训练方法 |
CN110852155A (zh) * | 2019-09-29 | 2020-02-28 | 深圳市深网视界科技有限公司 | 公交乘客拥挤度检测方法、系统、装置及存储介质 |
CN110930432A (zh) * | 2019-11-19 | 2020-03-27 | 北京文安智能技术股份有限公司 | 一种视频分析方法、装置及系统 |
KR102470251B1 (ko) * | 2019-12-18 | 2022-11-23 | 한국교통대학교 산학협력단 | 플랫폼 대기승객 화상정보를 기반으로 한 기계학습을 이용한 스크린도어 개폐 시간 제어 방법 및 이를 위한 컴퓨팅장치 |
CN111079696B (zh) * | 2019-12-30 | 2023-03-24 | 深圳市昊岳电子有限公司 | 一种基于车辆监控人员拥挤度的检测方法 |
CN111611951A (zh) * | 2020-05-27 | 2020-09-01 | 中航信移动科技有限公司 | 一种基于机器视觉的安检人流量实时监控系统及方法 |
CN112241688A (zh) * | 2020-09-24 | 2021-01-19 | 厦门卫星定位应用股份有限公司 | 车厢拥挤度检测方法及系统 |
CN112565715A (zh) * | 2020-12-30 | 2021-03-26 | 浙江大华技术股份有限公司 | 一种景点客流量监控方法、装置、电子设备及存储介质 |
CN112766950A (zh) * | 2020-12-31 | 2021-05-07 | 广州广电运通智能科技有限公司 | 一种动态路径费用确定方法、装置、设备及介质 |
CN113470222A (zh) * | 2021-05-24 | 2021-10-01 | 支付宝(杭州)信息技术有限公司 | 公共交通车辆客流统计系统和方法 |
CN114299746B (zh) * | 2021-12-30 | 2023-04-14 | 武汉长飞智慧网络技术有限公司 | 基于图像识别的公交车辆调度方法、设备及介质 |
CN115072510B (zh) * | 2022-06-08 | 2023-04-18 | 上海交通大学 | 基于门开关的电梯轿厢乘客智能识别与分析方法及系统 |
CN115829210B (zh) * | 2023-02-20 | 2023-04-28 | 安徽阿瑞特汽车电子科技有限公司 | 一种汽车行驶过程智能监测管理方法、系统及存储介质 |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN102324121A (zh) * | 2011-04-29 | 2012-01-18 | 重庆市科学技术研究院 | 一种公交车内拥挤程度检测方法 |
CN102622578A (zh) * | 2012-02-06 | 2012-08-01 | 中山大学 | 一种乘客计数系统及方法 |
CN104504377A (zh) * | 2014-12-25 | 2015-04-08 | 中邮科通信技术股份有限公司 | 一种公交车乘客拥挤程度识别系统及方法 |
CN104724566A (zh) * | 2013-12-24 | 2015-06-24 | 株式会社日立制作所 | 具备图像识别功能的电梯 |
Family Cites Families (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP3121190B2 (ja) * | 1993-12-15 | 2000-12-25 | 松下電器産業株式会社 | 通過人数検知センサおよびその連動装置 |
JP2001258016A (ja) * | 2000-03-14 | 2001-09-21 | Japan Radio Co Ltd | 監視カメラシステム及びその制御方法 |
US20090041297A1 (en) * | 2005-05-31 | 2009-02-12 | Objectvideo, Inc. | Human detection and tracking for security applications |
CN101464946B (zh) * | 2009-01-08 | 2011-05-18 | 上海交通大学 | 基于头部识别和跟踪特征的检测方法 |
RU2484531C2 (ru) * | 2009-01-22 | 2013-06-10 | Государственное научное учреждение центральный научно-исследовательский и опытно-конструкторский институт робототехники и технической кибернетики (ЦНИИ РТК) | Устройство обработки видеоинформации системы охранной сигнализации |
US8659643B2 (en) * | 2011-01-18 | 2014-02-25 | Disney Enterprises, Inc. | Counting system for vehicle riders |
US20130128050A1 (en) * | 2011-11-22 | 2013-05-23 | Farzin Aghdasi | Geographic map based control |
CN103854273B (zh) * | 2012-11-28 | 2017-08-25 | 天佑科技股份有限公司 | 一种近正向俯视监控视频行人跟踪计数方法和装置 |
JP5674857B2 (ja) * | 2013-05-10 | 2015-02-25 | 技研トラステム株式会社 | 乗降者計数装置 |
JP6264181B2 (ja) * | 2014-05-07 | 2018-01-24 | 富士通株式会社 | 画像処理装置、画像処理方法及び画像処理プログラム |
EP2975561B1 (en) * | 2014-07-14 | 2023-12-13 | Gerrit Böhm | Capacity prediction for public transport vehicles |
CN104268506B (zh) * | 2014-09-15 | 2017-12-15 | 郑州天迈科技股份有限公司 | 基于深度图像的客流计数检测方法 |
-
2016
- 2016-03-18 JP JP2018502630A patent/JP6570731B2/ja not_active Expired - Fee Related
- 2016-03-18 WO PCT/CN2016/076740 patent/WO2017156772A1/zh active Application Filing
-
2017
- 2017-06-20 US US15/628,605 patent/US10223597B2/en not_active Expired - Fee Related
Cited By (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108345878B (zh) * | 2018-04-16 | 2020-03-24 | 泰华智慧产业集团股份有限公司 | 基于视频的公共交通工具客流量监测方法及系统 |
CN108345878A (zh) * | 2018-04-16 | 2018-07-31 | 泰华智慧产业集团股份有限公司 | 基于视频的公共交通工具客流量监测方法及系统 |
CN111046788A (zh) * | 2019-12-10 | 2020-04-21 | 北京文安智能技术股份有限公司 | 一种停留人员的检测方法、装置及系统 |
CN112785462A (zh) * | 2020-02-27 | 2021-05-11 | 吴秋琴 | 基于大数据的景区客流量统计评估系统 |
CN112183469A (zh) * | 2020-10-27 | 2021-01-05 | 华侨大学 | 一种公共交通的拥挤度识别及自适应调整方法、系统、设备及计算机可读存储介质 |
CN112183469B (zh) * | 2020-10-27 | 2023-07-28 | 华侨大学 | 一种公共交通的拥挤度识别及自适应调整方法 |
CN112434566A (zh) * | 2020-11-04 | 2021-03-02 | 深圳云天励飞技术股份有限公司 | 客流统计方法、装置、电子设备及存储介质 |
CN112434566B (zh) * | 2020-11-04 | 2024-05-07 | 深圳云天励飞技术股份有限公司 | 客流统计方法、装置、电子设备及存储介质 |
CN112580633A (zh) * | 2020-12-25 | 2021-03-30 | 博大视野(厦门)科技有限公司 | 一种公共交通客流统计装置及方法 |
CN112580633B (zh) * | 2020-12-25 | 2024-03-01 | 博大视野(厦门)科技有限公司 | 一种基于深度学习的公共交通客流统计装置及方法 |
CN113255480A (zh) * | 2021-05-11 | 2021-08-13 | 中国联合网络通信集团有限公司 | 公交车内拥挤程度识别方法、系统、计算机设备及介质 |
CN113420693A (zh) * | 2021-06-30 | 2021-09-21 | 成都新潮传媒集团有限公司 | 一种门状态检测方法、装置、轿厢乘客流量统计方法及设备 |
CN113420693B (zh) * | 2021-06-30 | 2022-04-15 | 成都新潮传媒集团有限公司 | 一种门状态检测方法、装置、轿厢乘客流量统计方法及设备 |
CN114596537A (zh) * | 2022-05-10 | 2022-06-07 | 深圳市海清视讯科技有限公司 | 区域人流数据确定方法、装置、设备及存储介质 |
Also Published As
Publication number | Publication date |
---|---|
JP6570731B2 (ja) | 2019-09-04 |
JP2018523234A (ja) | 2018-08-16 |
US10223597B2 (en) | 2019-03-05 |
US20170286780A1 (en) | 2017-10-05 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2017156772A1 (zh) | 一种乘客拥挤度的计算方法及其系统 | |
CN111553387B (zh) | 一种基于Yolov3的人员目标检测方法 | |
CN105844229B (zh) | 一种乘客拥挤度的计算方法及其系统 | |
CN111444821A (zh) | 一种城市道路标志自动识别方法 | |
Li et al. | Robust people counting in video surveillance: Dataset and system | |
CN104978567A (zh) | 基于场景分类的车辆检测方法 | |
CN104239867B (zh) | 车牌定位方法及系统 | |
CN103985182B (zh) | 一种公交客流自动计数方法及自动计数系统 | |
WO2023109099A1 (zh) | 基于非侵入式检测的充电负荷概率预测系统及方法 | |
CN102254428B (zh) | 一种基于视频处理的交通拥塞检测方法 | |
CN103268489A (zh) | 基于滑窗搜索的机动车号牌识别方法 | |
CN104239905A (zh) | 运动目标识别方法及具有该功能的电梯智能计费系统 | |
CN103886089B (zh) | 基于学习的行车记录视频浓缩方法 | |
CN106127812A (zh) | 一种基于视频监控的客运站非出入口区域的客流统计方法 | |
CN106710228B (zh) | 一种客货分道交通参数监测系统的实现方法 | |
CN114170580A (zh) | 一种面向高速公路的异常事件检测方法 | |
CN114049572A (zh) | 识别小目标的检测方法 | |
CN104123714A (zh) | 一种人流量统计中最优目标检测尺度的生成方法 | |
Orozco et al. | Vehicular detection and classification for intelligent transportation system: A deep learning approach using faster R-CNN model | |
CN111695545A (zh) | 一种基于多目标追踪的单车道逆行检测方法 | |
CN106384089A (zh) | 基于终生学习的人体可靠检测方法 | |
CN117292322A (zh) | 基于深度学习的人员流量检测方法及系统 | |
CN115880620B (zh) | 一种应用在推车预警系统中的人员计数方法 | |
CN116311166A (zh) | 交通障碍物识别方法、装置及电子设备 | |
Li et al. | An efficient self-learning people counting system |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2018502630 Country of ref document: JP Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 16893937 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205A DATED 22/01/2019) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 16893937 Country of ref document: EP Kind code of ref document: A1 |