CN109241858A - Passenger flow density detection method and device based on rail transit train - Google Patents

Passenger flow density detection method and device based on rail transit train Download PDF

Info

Publication number
CN109241858A
Authority
CN
China
Prior art keywords
image
human body
compartment
body face
training
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201810917522.6A
Other languages
Chinese (zh)
Inventor
宋旭军
杨智
陈明
李腾
喻坚华
文小勇
董卓
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Xindatong Information Technology Co ltd
Original Assignee
Hunan Xindatong Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Xindatong Information Technology Co ltd filed Critical Hunan Xindatong Information Technology Co ltd
Priority to CN201810917522.6A
Publication of CN109241858A


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/10Human or animal bodies, e.g. vehicle occupants or pedestrians; Body parts, e.g. hands
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/59Context or environment of the image inside of a vehicle, e.g. relating to seat occupancy, driver state or inner lighting conditions

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Biology (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Human Computer Interaction (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)

Abstract

The present invention relates to a passenger flow density detection method and device based on a rail transit train. The method comprises: S1: collecting compartment images from different compartments; S2: dividing the collected images into a training set and a test set, and annotating the human face/head information of every image in the training set and the test set with pixel points; S3: performing Gaussian filtering on the human face/head information in the annotated images by means of a Gaussian kernel function to obtain human face/head feature data; S4: inputting the human face/head feature data of the training set into a deep learning model composed of a convolutional neural network model to obtain a trained convolutional neural network model; S5: inputting the human face/head feature data of the test set into the trained convolutional neural network model to output a passenger flow density map of the compartment images.

Description

Passenger flow density detection method and device based on rail transit train
Technical field
The present invention relates to the technical field of density data detection, and more particularly to a passenger flow density detection method and device based on a rail transit train.
Background technique
The subway greatly facilitates people's travel, and ridership surges during commuting hours and on holidays. In order to make reasonable use of carriage capacity and improve the travel experience of passengers, the number of passengers in each carriage should be detected in real time, so that passengers waiting on the platform can be conveniently guided to the less crowded and more comfortable carriages. At present, pedestrian detection mainly combines traditional local features, such as histograms of oriented gradients (HOG), local binary patterns (LBP) and Haar wavelets, with traditional classifiers such as support vector machines (SVM). The combination of HOG and SVM has achieved great success in pedestrian detection, but under subway-carriage conditions the severe occlusion of passengers' bodies interferes with the extraction of local body features, so the classifier cannot be trained. In addition, the region-based convolutional neural network series (RCNN, Fast R-CNN, Faster R-CNN) detects and recognizes objects by extracting and learning regional features of an image; when the carriage is not crowded, counting people by detecting faces/heads with small bounding boxes achieves good precision. However, when the carriage is crowded and some faces/heads are occluded, the region proposals in RCNN easily miss the occluded faces/heads, so the precision of the algorithm drops sharply; moreover, the RCNN algorithms are inefficient and not easy to put into practical use. The YOLO algorithm has been used in production applications because of its detection speed, but its precision in face/head detection is too low.
Summary of the invention
In view of the above problems in the current technology, the present invention provides a passenger flow density detection method and device based on a rail transit train. The method solves the technical problems that the algorithms of the prior art have low accuracy and low efficiency and are not easy to apply in practice.
In a first aspect, the present invention provides a passenger flow density detection method based on a rail transit train, comprising:
S1: collecting compartment images from different compartments;
S2: dividing the collected images into a training set and a test set, and annotating the human face/head information of every image in the training set and the test set with pixel points;
S3: performing Gaussian filtering on the human face/head information in the annotated images by means of a Gaussian kernel function to obtain human face/head feature data;
S4: inputting the human face/head feature data of the training set into a deep learning model composed of a convolutional neural network model to obtain a trained convolutional neural network model;
S5: inputting the human face/head feature data of the test set into the trained convolutional neural network model to output a passenger flow density map of the compartment images.
Further, after step S1 the method further comprises:
preprocessing the collected compartment images by pixel adjustment.
Further, the preprocessed compartment images are compartment images of 1280*720 pixels.
Further, after step S5 the method further comprises:
obtaining the total number of persons in the compartment by regression calculation.
In a second aspect, the invention discloses a passenger flow density detection device based on a rail transit train, comprising:
a collection module, configured to collect compartment images from different compartments;
an annotation module, configured to divide the collected images into a training set and a test set, and to annotate the human face/head information of every image in the training set and the test set with pixel points;
a Gaussian filtering module, configured to perform Gaussian filtering on the human face/head information in the annotated images by means of a Gaussian kernel function to obtain human face/head feature data;
a training module, configured to input the human face/head feature data of the training set into a deep learning model composed of a convolutional neural network model to obtain a trained convolutional neural network model;
an output module, configured to input the human face/head feature data of the test set into the trained convolutional neural network model to output a passenger flow density map of the compartment images.
Further, the device further comprises:
a preprocessing module, configured to preprocess the collected compartment images by pixel adjustment.
Further, the compartment images processed by the preprocessing module are compartment images of 1280*720 pixels.
Further, the device further comprises:
a regression calculation module, configured to obtain the total number of persons in the compartment by regression calculation.
The beneficial effects of the present invention are as follows:
Compared with the prior art, which marks human faces/heads with bounding boxes, the present invention marks human faces/heads with red dots, which is convenient and efficient; in a dense crowd, red dots can accurately mark partially occluded faces/heads, an effect that bounding-box annotation cannot achieve.
In addition, the present invention uses a Gaussian kernel function to filter out the human face/head features, which reduces the interference of other noise in the image and improves the learning efficiency of the deep learning model. Compared with other network models, in a multi-column model many neuron nodes in unused branches are never activated during training yet still occupy a large amount of memory and parameters, which reduces the training speed; therefore the present invention uses a single-column network model and adjusts the Gaussian kernel and the convolution kernels through continuous training so that the network performance reaches its best.
Detailed description of the invention
Fig. 1 is a schematic flowchart of a passenger flow density detection method based on a rail transit train provided by the present invention;
Fig. 2 is a schematic structural diagram of a passenger flow density detection device based on a rail transit train provided by the present invention.
Specific embodiment
In the following description, for the purpose of illustration rather than limitation, specific details such as particular system structures, interfaces and techniques are set forth in order to provide a thorough understanding of the present invention. However, it will be clear to those skilled in the art that the present invention may also be implemented in other embodiments without these specific details. In other cases, detailed descriptions of well-known devices, circuits and methods are omitted so that unnecessary details do not obscure the description of the invention.
Fig. 1 is a schematic flowchart of a passenger flow density detection method based on a rail transit train provided by the present invention.
As shown in Fig. 1, the present invention provides a passenger flow density detection method based on a rail transit train, the method comprising:
S1: collecting compartment images from different compartments;
S2: dividing the collected images into a training set and a test set, and annotating the human face/head information of every image in the training set and the test set with pixel points;
S3: performing Gaussian filtering on the human face/head information in the annotated images by means of a Gaussian kernel function to obtain human face/head feature data;
S4: inputting the human face/head feature data of the training set into a deep learning model composed of a convolutional neural network model to obtain a trained convolutional neural network model;
S5: inputting the human face/head feature data of the test set into the trained convolutional neural network model to output a passenger flow density map of the compartment images.
Regarding the convolutional neural network structure of the present invention, a six-layer convolutional network structure is used, comprising six convolutional layers in total. The structure of the entire convolutional neural network is: input layer, first convolutional layer, first pooling layer, second convolutional layer, second pooling layer, third convolutional layer, fourth convolutional layer, fifth convolutional layer, sixth convolutional layer, output layer. The convolutional layers use, in order, convolution kernels of size 11×11, 11×11, 9×9, 1×1, 1×1 and 1×1. A convolution kernel that is too small cannot extract and abstract the face/head data well, while a convolution kernel that is too large increases the network parameters and reduces detection efficiency without improving the face/head detection precision. Two pooling layers with 2×2 pooling windows perform down-sampling in the network, reducing the length and width of the original image to a quarter of the original. At the end of the network structure, a 1×1 convolution kernel replaces the fully connected layer, which greatly reduces the parameters of the entire network structure; the 1×1 convolution kernel also yields slightly better precision than a fully connected layer.
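As an illustration of the structure just described, the following is a minimal sketch of the six-layer, single-column network, assuming a Keras implementation; the patent specifies only the kernel sizes, the two 2×2 pooling layers and the final 1×1 convolution, so the channel widths, activations and loss used here are assumptions for illustration.

```python
# A minimal Keras sketch of the six-layer, single-column density-map network.
# Kernel sizes (11x11, 11x11, 9x9, 1x1, 1x1, 1x1), two 2x2 pooling layers and a
# final 1x1 convolution follow the description; channel widths are assumptions.
from tensorflow.keras import layers, models

def build_density_net(input_shape=(None, None, 3)):
    inp = layers.Input(shape=input_shape)
    x = layers.Conv2D(16, (11, 11), padding="same", activation="relu")(inp)  # 1st conv
    x = layers.MaxPooling2D((2, 2))(x)                                       # 1st pooling
    x = layers.Conv2D(32, (11, 11), padding="same", activation="relu")(x)    # 2nd conv
    x = layers.MaxPooling2D((2, 2))(x)                                       # 2nd pooling
    x = layers.Conv2D(32, (9, 9), padding="same", activation="relu")(x)      # 3rd conv
    x = layers.Conv2D(16, (1, 1), activation="relu")(x)                      # 4th conv
    x = layers.Conv2D(8, (1, 1), activation="relu")(x)                       # 5th conv
    out = layers.Conv2D(1, (1, 1), activation="relu")(x)                     # 6th conv: density map
    return models.Model(inp, out)

model = build_density_net()
model.compile(optimizer="adam", loss="mse")  # pixel-wise regression against the density map
```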
Compared with the prior art, which marks human faces/heads with bounding boxes, the present invention marks human faces/heads with red dots, which is convenient and efficient; in a dense crowd, red dots can accurately mark partially occluded faces/heads, an effect that bounding-box annotation cannot achieve.
In addition, the present invention uses a Gaussian kernel function to filter out the human face/head features, which reduces the interference of other noise in the image and improves the learning efficiency of the deep learning model. Compared with other network models, in a multi-column model many neuron nodes in unused branches are never activated during training yet still occupy a large amount of memory and parameters, which reduces the training speed; therefore the present invention uses a single-column network model and adjusts the Gaussian kernel and the convolution kernels through continuous training so that the network performance reaches its best.
In some exemplary embodiments, after step S1 the method further comprises:
preprocessing the collected compartment images by pixel adjustment.
In some exemplary embodiments, the preprocessed compartment images are compartment images of 1280*720 pixels.
In some exemplary embodiments, after step S5 the method further comprises:
obtaining the total number of persons in the compartment by regression calculation.
The coordinate information of each human head is obtained from the red-dot annotation. Based on the coordinate position of the red dot, Gaussian processing generates a matrix of data around the head of each person; the closer a position is to the red-dot coordinate, the larger its value, and the pixel values of each such matrix sum to 1. After Gaussian processing, each 1080*720 image becomes a 1080*720 matrix, the head of each person becomes a matrix that sums to 1, and the data of the parts of the matrix where there is no face/head are 0; that is, the matrix consists only of zeros and very small values, and converting it back to an image gives a black-and-white density map. The regression calculation sums the pixel values of the density map and rounds the result; the value obtained is the total number of persons.
After the face/head positions annotated in the compartment are Gaussian filtered, each face/head corresponds to one matrix whose pixel values sum to 1, and the whole image is passed through the convolutional neural network to predict all persons and obtain the corresponding density map (the map contains multiple matrices; each detected face/head is one matrix, but the sum within a matrix is approximately, not exactly, 1, and the pixel values where no face/head is detected are 0). The statistical regression sums the pixel values of the density map output by the convolutional neural network for the whole image and rounds the result; the resulting integer is the number of faces/heads counted in the image.
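The density-map generation and regression counting described above can be sketched as follows, assuming NumPy/SciPy; the Gaussian sigma and the normalization via filtering are assumptions, as the patent does not fix the kernel parameters.

```python
# A sketch of density-map generation and regression counting: each annotated head
# coordinate is spread into a small Gaussian whose values sum to 1, and the count
# is recovered by summing the map and rounding. Sigma is an assumed value.
import numpy as np
from scipy.ndimage import gaussian_filter

def heads_to_density_map(head_coords, height, width, sigma=4.0):
    """head_coords: list of (row, col) red-dot positions for one image."""
    density = np.zeros((height, width), dtype=np.float32)
    for r, c in head_coords:
        density[int(r), int(c)] = 1.0          # one unit of mass per person
    # Gaussian filtering approximately preserves the total mass,
    # so the map still sums to roughly the number of heads
    return gaussian_filter(density, sigma=sigma)

def count_from_density_map(density):
    return int(round(float(density.sum())))    # regression: sum pixel values, then round

# Example: two annotated heads in a 720x1280 frame
dm = heads_to_density_map([(100, 200), (300, 640)], 720, 1280)
print(count_from_density_map(dm))  # -> 2
```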
Passenger flow density detection is described below, taking a Changsha subway carriage as an example:
1. First, a large number of Changsha subway carriage images are acquired. The video is split into frames to obtain 2,800 images, of which 2,000 form the training set and 800 the test set.
2. The human heads in all images are annotated, and a .mat (MATLAB) file of the red-dot positions of the heads/faces and the number of red dots is generated. The red-dot positions prepare for the Gaussian filtering in the next step, and the number of red dots is the number of persons in the carriage, each red dot representing one person; each image generates one corresponding .mat file.
3. A .csv file is generated from the original image and the .mat file by Gaussian filtering, as data preprocessing for training the convolutional neural network. In the Gaussian-processed .csv file, each red dot generates a matrix with values whose sum is 1, and the data elsewhere are 0.
4. The convolutional network model is designed.
Compared with other convolutional networks, multi-column networks or deeper network structures, the network proposed by the present invention is simple and efficient; while achieving good precision, it also has good timeliness and a lower memory footprint, and detecting a single 1080p image on an Nvidia 1080Ti graphics card takes only about 80 ms.
5. The network is trained.
The data preprocessing has prepared the data required for training, and the design of the entire convolutional network is complete. Next, the entire convolutional network needs to be trained, that is, optimal parameters are obtained for the whole network so that the error between the predicted number of persons and the true number of persons is minimized; the set of parameters of the whole network is called the weight model.
Training process: the data are first augmented. Three 640*352 patches are randomly cropped from each 1080*720 image to generate small images (the small images partially overlap each other), so the 2,000 training images are increased to 6,000 images, of which 5,700 are taken as the training image set and 300 as the validation image set used to select the best weight model during training. The benefit of this is that the training samples are increased to improve the robustness of the whole model, while the small images also increase the training speed. The resulting weight model is an .h5 file.
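The random-cropping augmentation described in the training process can be sketched as follows; the uniform sampling of crop positions is an assumption, as the patent states only the patch size and the number of crops per image.

```python
# A sketch of the random-crop augmentation: three 640x352 patches are cut at
# random (possibly overlapping) positions from each full-size frame. The crop
# size is taken as width 640 x height 352; uniform sampling is an assumption.
import numpy as np

def random_crops(image, crop_w=640, crop_h=352, n_crops=3, rng=None):
    """image: H x W x C array; returns n_crops patches of size crop_h x crop_w."""
    rng = rng or np.random.default_rng()
    h, w = image.shape[:2]
    patches = []
    for _ in range(n_crops):
        top = rng.integers(0, h - crop_h + 1)    # random crop origin (row)
        left = rng.integers(0, w - crop_w + 1)   # random crop origin (column)
        patches.append(image[top:top + crop_h, left:left + crop_w])
    return patches
```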
6. Testing
The 800 test images are used to test the precision of the algorithm, which can reach 91.6%. In the accuracy computation formula, M is the number of test images, and p_i and g_i are the detected number of persons and the ground-truth number of persons for the i-th image.
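The formula itself is not reproduced in the source text; a plausible reconstruction, assuming the standard relative-error accuracy used in crowd counting, is:

```latex
\text{Accuracy} = 1 - \frac{1}{M}\sum_{i=1}^{M}\frac{\lvert p_i - g_i\rvert}{g_i}
```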
Test process: the trained best weight model (.h5 file) is imported into the convolutional network model to activate the network, and then the test images are imported into the network for testing to obtain the detected number of persons p.
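A minimal sketch of this test step, assuming a Keras .h5 weight model; the file name and the use of tensorflow.keras are assumptions for illustration.

```python
# Load the trained .h5 weight model, predict the density map for a test image,
# and sum-and-round the map to obtain the detected count p.
import numpy as np
from tensorflow.keras.models import load_model

model = load_model("best_weights.h5", compile=False)  # hypothetical weight file name

def detect_count(image_bgr):
    """image_bgr: H x W x 3 uint8 carriage image."""
    x = image_bgr.astype(np.float32)[np.newaxis] / 255.0  # add batch dimension, normalize
    density = model.predict(x)[0, ..., 0]                 # predicted density map
    return int(round(float(density.sum())))               # p: detected number of persons
```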
In addition, as shown in Fig. 2, the invention discloses a passenger flow density detection device based on a rail transit train, the device comprising:
a collection module 100, configured to collect compartment images from different compartments;
an annotation module 200, configured to divide the collected images into a training set and a test set, and to annotate the human face/head information of every image in the training set and the test set with pixel points;
a Gaussian filtering module 300, configured to perform Gaussian filtering on the human face/head information in the annotated images by means of a Gaussian kernel function to obtain human face/head feature data;
a training module 400, configured to input the human face/head feature data of the training set into a deep learning model composed of a convolutional neural network model to obtain a trained convolutional neural network model;
an output module 500, configured to input the human face/head feature data of the test set into the trained convolutional neural network model to output a passenger flow density map of the compartment images.
In some exemplary embodiments, the device further comprises:
a preprocessing module, configured to preprocess the collected compartment images by pixel adjustment.
In some exemplary embodiments, the compartment images processed by the preprocessing module are compartment images of 1280*720 pixels.
In some exemplary embodiments, the device further comprises:
a regression calculation module, configured to obtain the total number of persons in the compartment by regression calculation.
Reader should be understood that in the description of this specification reference term " one embodiment ", " is shown " some embodiments " The description of example ", " specific example " or " some examples " etc. mean specific features described in conjunction with this embodiment or example, structure, Material or feature are included at least one embodiment or example of the invention.In the present specification, above-mentioned term is shown The statement of meaning property need not be directed to identical embodiment or example.Moreover, particular features, structures, materials, or characteristics described It may be combined in any suitable manner in any one or more of the embodiments or examples.In addition, without conflicting with each other, this The technical staff in field can be by the spy of different embodiments or examples described in this specification and different embodiments or examples Sign is combined.
Although the embodiments of the present invention have been shown and described above, it should be understood that the above embodiments are exemplary and should not be construed as limiting the present invention; those skilled in the art may make changes, modifications, replacements and variations to the above embodiments within the scope of the present invention.

Claims (8)

1. A passenger flow density detection method based on a rail transit train, characterized in that the method comprises:
S1: collecting compartment images from different compartments;
S2: dividing the collected images into a training set and a test set, and annotating the human face/head information of every image in the training set and the test set with pixel points;
S3: performing Gaussian filtering on the human face/head information in the annotated images by means of a Gaussian kernel function to obtain human face/head feature data;
S4: inputting the human face/head feature data of the training set into a deep learning model composed of a convolutional neural network model to obtain a trained convolutional neural network model;
S5: inputting the human face/head feature data of the test set into the trained convolutional neural network model to output a passenger flow density map of the compartment images.
2. The method according to claim 1, characterized in that after step S1 the method further comprises:
preprocessing the collected compartment images by pixel adjustment.
3. The method according to claim 2, characterized in that the preprocessed compartment images are compartment images of 1280*720 pixels.
4. The method according to claim 1, characterized in that after step S5 the method further comprises:
obtaining the total number of persons in the compartment by regression calculation.
5. A passenger flow density detection device based on a rail transit train, characterized by comprising:
a collection module, configured to collect compartment images from different compartments;
an annotation module, configured to divide the collected images into a training set and a test set, and to annotate the human face/head information of every image in the training set and the test set with pixel points;
a Gaussian filtering module, configured to perform Gaussian filtering on the human face/head information in the annotated images by means of a Gaussian kernel function to obtain human face/head feature data;
a training module, configured to input the human face/head feature data of the training set into a deep learning model composed of a convolutional neural network model to obtain a trained convolutional neural network model;
an output module, configured to input the human face/head feature data of the test set into the trained convolutional neural network model to output a passenger flow density map of the compartment images.
6. The device according to claim 5, characterized in that it further comprises:
a preprocessing module, configured to preprocess the collected compartment images by pixel adjustment.
7. The device according to claim 6, characterized in that the compartment images processed by the preprocessing module are compartment images of 1280*720 pixels.
8. The device according to claim 5, characterized in that it further comprises:
a regression calculation module, configured to obtain the total number of persons in the compartment by regression calculation.
CN201810917522.6A 2018-08-13 2018-08-13 A kind of passenger flow density detection method and device based on rail transit train Pending CN109241858A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201810917522.6A CN109241858A (en) 2018-08-13 2018-08-13 A kind of passenger flow density detection method and device based on rail transit train

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201810917522.6A CN109241858A (en) 2018-08-13 2018-08-13 A kind of passenger flow density detection method and device based on rail transit train

Publications (1)

Publication Number Publication Date
CN109241858A true CN109241858A (en) 2019-01-18

Family

ID=65070872

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810917522.6A Pending CN109241858A (en) 2018-08-13 2018-08-13 A kind of passenger flow density detection method and device based on rail transit train

Country Status (1)

Country Link
CN (1) CN109241858A (en)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110210603A (en) * 2019-06-10 2019-09-06 长沙理工大学 Counter model construction method, method of counting and the device of crowd
CN111079488A (en) * 2019-05-27 2020-04-28 陕西科技大学 Bus passenger flow detection system and method based on deep learning
CN111079540A (en) * 2019-11-19 2020-04-28 北航航空航天产业研究院丹阳有限公司 Target characteristic-based layered reconfigurable vehicle-mounted video target detection method
CN111144188A (en) * 2019-05-07 2020-05-12 王青雷 Wireless notification platform based on big data analysis
US20200186743A1 (en) * 2016-11-24 2020-06-11 Hanwha Techwin Co., Ltd. Apparatus and method for displaying images and passenger density
CN111582778A (en) * 2020-04-17 2020-08-25 上海中通吉网络技术有限公司 Operation site cargo accumulation measuring method, device, equipment and storage medium
CN111640101A (en) * 2020-05-29 2020-09-08 苏州大学 Ghost convolution characteristic fusion neural network-based real-time traffic flow detection system and method
CN112699741A (en) * 2020-12-10 2021-04-23 广州广电运通金融电子股份有限公司 Method, system and equipment for calculating internal congestion degree of bus

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120069151A1 (en) * 2010-09-21 2012-03-22 Chih-Hsiang Tsai Method for intensifying identification of three-dimensional objects
CN103295031A (en) * 2013-04-15 2013-09-11 浙江大学 Image object counting method based on regular risk minimization
US20140071242A1 (en) * 2012-09-07 2014-03-13 National Chiao Tung University Real-time people counting system using layer scanning method
CN106326937A (en) * 2016-08-31 2017-01-11 郑州金惠计算机系统工程有限公司 Convolutional neural network based crowd density distribution estimation method

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120069151A1 (en) * 2010-09-21 2012-03-22 Chih-Hsiang Tsai Method for intensifying identification of three-dimensional objects
US20140071242A1 (en) * 2012-09-07 2014-03-13 National Chiao Tung University Real-time people counting system using layer scanning method
CN103295031A (en) * 2013-04-15 2013-09-11 浙江大学 Image object counting method based on regular risk minimization
CN106326937A (en) * 2016-08-31 2017-01-11 郑州金惠计算机系统工程有限公司 Convolutional neural network based crowd density distribution estimation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Ma Haijun et al., "A crowd counting algorithm for surveillance video based on convolutional neural networks", Journal of Anhui University (Natural Science Edition) *

Cited By (12)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20200186743A1 (en) * 2016-11-24 2020-06-11 Hanwha Techwin Co., Ltd. Apparatus and method for displaying images and passenger density
US10841654B2 (en) * 2016-11-24 2020-11-17 Hanwha Techwin Co., Ltd. Apparatus and method for displaying images and passenger density
CN111144188A (en) * 2019-05-07 2020-05-12 王青雷 Wireless notification platform based on big data analysis
CN111079488A (en) * 2019-05-27 2020-04-28 陕西科技大学 Bus passenger flow detection system and method based on deep learning
CN111079488B (en) * 2019-05-27 2023-09-26 广东快通信息科技有限公司 Deep learning-based bus passenger flow detection system and method
CN110210603A (en) * 2019-06-10 2019-09-06 长沙理工大学 Counter model construction method, method of counting and the device of crowd
CN111079540A (en) * 2019-11-19 2020-04-28 北航航空航天产业研究院丹阳有限公司 Target characteristic-based layered reconfigurable vehicle-mounted video target detection method
CN111079540B (en) * 2019-11-19 2024-03-19 北航航空航天产业研究院丹阳有限公司 Hierarchical reconfigurable vehicle-mounted video target detection method based on target characteristics
CN111582778A (en) * 2020-04-17 2020-08-25 上海中通吉网络技术有限公司 Operation site cargo accumulation measuring method, device, equipment and storage medium
CN111582778B (en) * 2020-04-17 2024-04-12 上海中通吉网络技术有限公司 Method, device, equipment and storage medium for measuring accumulation of cargos in operation site
CN111640101A (en) * 2020-05-29 2020-09-08 苏州大学 Ghost convolution characteristic fusion neural network-based real-time traffic flow detection system and method
CN112699741A (en) * 2020-12-10 2021-04-23 广州广电运通金融电子股份有限公司 Method, system and equipment for calculating internal congestion degree of bus

Similar Documents

Publication Publication Date Title
CN109241858A (en) A kind of passenger flow density detection method and device based on rail transit train
AU2020100200A4 (en) Content-guide Residual Network for Image Super-Resolution
Zhang et al. MCnet: Multiple context information segmentation network of no-service rail surface defects
CN107316007B (en) Monitoring image multi-class object detection and identification method based on deep learning
CN104978580B (en) A kind of insulator recognition methods for unmanned plane inspection transmission line of electricity
Lu et al. Adaptive object detection using adjacency and zoom prediction
CN107610123A (en) A kind of image aesthetic quality evaluation method based on depth convolutional neural networks
CN107506722A (en) One kind is based on depth sparse convolution neutral net face emotion identification method
CN109190507A (en) A kind of passenger flow crowding calculation method and device based on rail transit train
CN111369563A (en) Semantic segmentation method based on pyramid void convolutional network
CN103605972B (en) Non-restricted environment face verification method based on block depth neural network
CN109522966A (en) A kind of object detection method based on intensive connection convolutional neural networks
CN108090403A (en) A kind of face dynamic identifying method and system based on 3D convolutional neural networks
CN107871101A (en) A kind of method for detecting human face and device
CN107134144A (en) A kind of vehicle checking method for traffic monitoring
CN103413142B (en) Remote sensing image land utilization scene classification method based on two-dimension wavelet decomposition and visual sense bag-of-word model
CN108537117A (en) A kind of occupant detection method and system based on deep learning
CN107609638A (en) A kind of method based on line decoder and interpolation sampling optimization convolutional neural networks
CN103714326B (en) One-sample face identification method
CN106529605A (en) Image identification method of convolutional neural network model based on immunity theory
CN109993269A (en) Single image people counting method based on attention mechanism
CN103699904A (en) Image computer-aided diagnosis method for multi-sequence nuclear magnetic resonance images
CN108039044A (en) The system and method that Vehicular intelligent based on multiple dimensioned convolutional neural networks is lined up
CN106991666A (en) A kind of disease geo-radar image recognition methods suitable for many size pictorial informations
CN112766283B (en) Two-phase flow pattern identification method based on multi-scale convolution network

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190118

RJ01 Rejection of invention patent application after publication