CN111797799A - Subway passenger waiting area planning method based on artificial intelligence - Google Patents

Subway passenger waiting area planning method based on artificial intelligence

Info

Publication number
CN111797799A
Authority
CN
China
Prior art keywords
hand lever
density
curve
network
image
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Withdrawn
Application number
CN202010671689.6A
Other languages
Chinese (zh)
Inventor
张恋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Zhengzhou Angda Information Technology Co ltd
Original Assignee
Zhengzhou Angda Information Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Zhengzhou Angda Information Technology Co ltd filed Critical Zhengzhou Angda Information Technology Co ltd
Priority to CN202010671689.6A
Publication of CN111797799A
Current legal status: Withdrawn


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/21 Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F 18/214 Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformations in the plane of the image
    • G06T 3/40 Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T 3/4038 Image mosaicing, e.g. composing plane images from plane sub-images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00 Arrangements for image or video recognition or understanding
    • G06V 10/40 Extraction of image or video features
    • G06V 10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V 10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • General Engineering & Computer Science (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)
  • Image Processing (AREA)

Abstract

The invention provides a subway passenger waiting area planning method based on artificial intelligence. The acquired images are preprocessed and sent into a semantic segmentation network to obtain a hand lever semantic segmentation map, and the segmentation map is sent into a classification network to obtain a hand lever curve graph. Line feature points of the hand lever curve graph are extracted and matched to obtain a transformation matrix, and the acquired images are spliced according to the transformation matrix to obtain a panoramic image of the subway carriage. The panoramic image is sent into a detection network, and a crowd density heat map is obtained after encoding and decoding operations; the crowd density heat maps are stacked and superimposed to obtain a superimposed heat map, and a density grading network obtains the density grade of the subway carriage from the superimposed heat map. The density grade information of the subway carriage is transmitted to the electronic display screen of the next platform through a wireless network. The method solves the problem that target occlusion and missed detection easily occur in the prior art when density estimation is performed.

Description

Subway passenger waiting area planning method based on artificial intelligence
Technical Field
The invention belongs to the field of artificial intelligence and image processing, and particularly relates to a subway passenger waiting area planning method based on artificial intelligence.
Background
A large number of passengers take the subway and the crowd density in the carriages is high. Under such conditions, when crowd density is estimated directly from the images collected by the cameras, problems such as target occlusion and missed detection easily occur.
Passengers also move slightly within the carriage. The prior art does not consider this factor when judging the crowd density grade and operates directly on the obtained density map, so the judgment result is inaccurate and inaccurate guidance suggestions are given.
Disclosure of Invention
In order to solve the problems in the prior art, the invention provides a subway passenger waiting area planning method based on artificial intelligence, which comprises the following steps:
step one, acquiring images by utilizing a plurality of cameras in a subway carriage;
step two, preprocessing the acquired images and sending them into a semantic segmentation network, obtaining a feature map through a first encoder, and processing the feature map through a first decoder to obtain a hand lever semantic segmentation map; sending the hand lever semantic segmentation map into a classification network, acquiring the pixel points judged as hand lever in each row of pixels to obtain a hand lever scatter diagram, and fitting the scatter diagram to obtain a hand lever curve graph; the classification network comprises a third encoder and classification full-connection layers, the input of the third encoder is the hand lever semantic segmentation map, and after feature extraction the features are sent into the classification full-connection layers to classify pixels, specifically into two classes, hand lever and other; the number of classification full-connection layers is the same as the number of pixel rows of the hand lever semantic segmentation map, and each row of pixels in the image corresponds to one classification full-connection layer;
step three, extracting and matching line feature points of the hand lever curve graph to obtain a transformation matrix, and splicing the images acquired by the cameras according to the transformation matrix to obtain a panoramic image of the subway carriage;
step four, sending the panoramic image into a detection network, obtaining a crowd density heat map after encoding and decoding operations, performing a stacking and superposition operation on the crowd density heat maps to obtain a superimposed heat map, and obtaining the density grade of the subway carriage from the superimposed heat map through a density grading network;
step five, transmitting the density grade information of the subway carriages to the electronic display screen of the next platform through a wireless network, so that passengers can select the carriage gate with a smaller crowding degree to wait for boarding according to the density grade information of each carriage on the electronic display screen.
The training method of the semantic segmentation network comprises the following steps: selecting images collected by a camera to construct a training data set; labeling the data set, wherein the labeled seat is 1, the hand lever is 2, and the other classes are 3; the network is trained using a cross entropy loss function.
The training method of the classification network comprises the following steps: selecting semantic segmentation graphs containing seats, hand levers and other classes to construct a training data set; marking a hand lever in the semantic segmentation graph as a curve with the width of 1 pixel; the network is trained using a cross entropy loss function.
The fitting of the scatter diagram comprises the following specific steps: randomly selecting N points in the scatter diagram to fit into a curve, calculating the distance from all the points in the scatter diagram to the curve, setting a threshold value, judging the points with the distance less than the threshold value as belonging to the curve, and recording the number of the points belonging to the curve; then randomly selecting N points, fitting a new curve again, and calculating the number of the points belonging to the curve according to the steps; finally, the curve with the largest number of points belonging to the curve is selected as the curve of the hand lever.
The detection network comprises a second encoder and a second decoder, the second encoder extracts features of the panoramic image to obtain a feature image, and the second decoder performs up-sampling and feature extraction on the feature image to obtain a crowd density heat image.
The stacking and superposition operation is specifically as follows: let the heat map at the current time be H_D and the heat map at the previous time be H_0; according to the formula

H_1 = ElementWiseMax(H_0, H_D * B_start)

a stacked heat map H_1 is obtained, where B_start is the baseline value used to initialize the heat value; then, according to the formula

H'_1 = H_1 * α + H_D * (1 - α)

an attenuation treatment of the accumulated heat is applied to H_1, where α is the attenuation coefficient and H'_1 is the superimposed heat map.
The density grading network comprises a fourth encoder and a density grading full-connection layer, and the superimposed heat map is subjected to feature extraction through the fourth encoder and then is classified through the density grading full-connection layer.
The invention has the beneficial effects that:
1. The invention performs an image splicing operation before processing the images, which effectively prevents missed detection of targets; performing the subsequent operations on the spliced panoramic image of the carriage makes the obtained grading result more accurate.
2. In the prior art, the hand lever curve graph can be obtained only by performing complicated post-processing on the semantic segmentation map; the classification network of the invention outputs the hand lever scatter points row by row, so that the curve graph is obtained without such post-processing.
3. The method first extracts and matches line feature points of the hand lever curve graph to obtain a more accurate transformation matrix, and then splices the images collected by the cameras according to the transformation matrix to obtain the panoramic image of the subway carriage. Compared with point features, line features obviously improve the splicing effect, effectively reduce the amount of calculation in the image splicing process, and make the splicing result more natural.
4. The method performs a stacking and superposition operation on the obtained crowd density heat maps, so that key points at the edges of the image are more prominent. This avoids errors in target statistics caused by slight movement of people, allows the positions of people to be obtained accurately, and effectively prevents missed detection of targets.
Drawings
FIG. 1 is an overall framework diagram of the method.
Detailed Description
In order that those skilled in the art will better understand the present invention, a detailed description of the present invention is provided below with reference to the accompanying drawings, and reference is made to fig. 1.
Example:
an artificial intelligence-based subway passenger waiting area planning method is shown in fig. 1, and comprises the following steps:
a plurality of cameras in the subway carriage are used to collect images. The number of people taking the subway is large and the crowd density in the carriage is high; under such conditions, problems such as target occlusion and missed detection easily occur when the crowd density is estimated with an ordinary camera. A fisheye camera is a panoramic camera that can independently achieve wide-range monitoring without blind spots, with a wide shooting angle and a wide field of view. Therefore, in order to subsequently obtain the passenger positions in each carriage more accurately, three fisheye cameras are installed in each carriage in this embodiment, which effectively avoids occlusion and captures the crowd in the carriage over a wider range. The images shot by the fisheye cameras are then preprocessed, and the images shot by the multiple cameras in the same carriage are spliced to obtain a panoramic image of the subway carriage.
The preprocessing is a distortion correction operation. Preferably, this embodiment adopts a longitude-latitude mapping method, with the following specific steps:
acquiring an optical imaging center of an initial fisheye image to be corrected, and converting image coordinates of the initial fisheye image into image physical coordinates; performing line correction on the initial fisheye image through a preset mapping relation to obtain a longitudinal repair image of the initial fisheye image; performing first rotation operation on the longitudinal restored image to obtain a rotated image, wherein the rotation angle of the first rotation operation is an odd multiple of 90 degrees; performing line correction on the rotation image through a mapping relation to obtain a primary restoration image of an initial fisheye image; and performing second rotation operation on the primary restored image to obtain a target correction image of the initial fisheye image, wherein the angle of the second rotation operation is the same as that of the first rotation operation, and the direction of the second rotation operation is opposite to that of the first rotation operation.
The distortion correction method also comprises a fixed inner diameter method, a fixed outer diameter method, a radial expansion method, a modified version double longitude method and the like, and an implementer can freely select which method is used for the distortion correction of the image.
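For orientation only, a minimal sketch of the distortion-correction step is given below. It uses OpenCV's fisheye camera model rather than the longitude-latitude mapping described above (a swapped-in, simpler alternative), and the camera intrinsics K and distortion coefficients D are assumed to come from a prior calibration.

```python
import cv2
import numpy as np

def correct_fisheye(image, K, D):
    """Undistort a fisheye frame with OpenCV's fisheye model.

    K: assumed 3x3 intrinsic matrix from calibration.
    D: assumed 4x1 fisheye distortion coefficients.
    """
    h, w = image.shape[:2]
    # Build the remap tables once for the given image size.
    map1, map2 = cv2.fisheye.initUndistortRectifyMap(
        K, D, np.eye(3), K, (w, h), cv2.CV_16SC2)
    # Resample the distorted frame onto the corrected grid.
    return cv2.remap(image, map1, map2,
                     interpolation=cv2.INTER_LINEAR,
                     borderMode=cv2.BORDER_CONSTANT)
```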
At this point, the preprocessing of the image is completed.
The method then obtains the pixel points judged as hand lever in each row of pixels with a classification network, and fits the scatter points with the RANSAC method to obtain the hand lever curve. The specific steps are as follows:
performing semantic segmentation operation on the preprocessed image to realize sensing and extraction of the hand lever and obtain a hand lever semantic segmentation graph, specifically:
the semantic segmentation network comprises a first encoder and a first decoder, and the training method comprises the following steps: selecting images acquired by a camera to construct a training data set, randomly selecting 80% of the data set as the training set, and using the remaining 20% as a verification set; labeling the data set, wherein the labeled seat is 1, the hand lever is 2, and the other classes are 3; the network is trained using a cross entropy loss function.
And sending the preprocessed image into a semantic segmentation network, obtaining a feature map through a first encoder, and decoding the feature map through a first decoder to obtain a hand lever semantic segmentation map.
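For orientation, a minimal PyTorch sketch of such an encoder-decoder segmentation network trained with cross entropy is given below; the layer sizes, depths, and the 0-based class indices are illustrative assumptions, not taken from the patent.

```python
import torch
import torch.nn as nn

class SegNet(nn.Module):
    """Toy first-encoder / first-decoder pair for 3-class segmentation."""
    def __init__(self, num_classes=3):
        super().__init__()
        # First encoder: downsample and extract a feature map.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
        )
        # First decoder: upsample back to input resolution, per-pixel logits.
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 2, stride=2), nn.ReLU(),
            nn.ConvTranspose2d(32, num_classes, 2, stride=2),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

# Training with cross entropy, as stated in the patent.
model = SegNet()
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
images = torch.randn(2, 3, 128, 128)         # placeholder batch
labels = torch.randint(0, 3, (2, 128, 128))  # class indices 0..2 (the patent labels them 1, 2, 3)
loss = criterion(model(images), labels)
loss.backward()
optimizer.step()
```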
The hand lever is sensed with a semantic segmentation model, so the hand lever in the resulting hand lever semantic segmentation map is an irregular strip with poor continuity, which is inconvenient for accurate identification. The invention therefore adds a classification network to post-process the image: pixels belonging to the hand lever are classified row by row, and a scatter diagram is obtained from the outputs of N full-connection layers, where N is the number of rows of the result image.
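To make the row-wise idea concrete, a sketch of such a classification network is given below; the encoder layout, feature size, image dimensions, and the per-row head returning column scores are illustrative assumptions, and the actual training details follow in the next paragraphs.

```python
import torch
import torch.nn as nn

class RowClassifier(nn.Module):
    """Third encoder + one classification head per pixel row (sketch)."""
    def __init__(self, rows=64, cols=64, feat=128):
        super().__init__()
        # Third encoder: compress the segmentation map into a feature vector.
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
            nn.Linear(32 * 4 * 4, feat), nn.ReLU(),
        )
        # One full-connection layer per row; each scores every column of that row.
        self.heads = nn.ModuleList([nn.Linear(feat, cols) for _ in range(rows)])

    def forward(self, seg_map):                 # seg_map: (B, 1, rows, cols)
        f = self.encoder(seg_map)
        # Per-row column logits; argmax over columns gives the scatter point.
        return torch.stack([head(f) for head in self.heads], dim=1)  # (B, rows, cols)

logits = RowClassifier()(torch.randn(2, 1, 64, 64))
scatter_cols = logits.argmax(dim=-1)            # column index of the hand lever in each row
```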
Sending the semantic segmentation graph of the hand lever into a classification network, obtaining scattered points required by a fitting hand lever curve, namely obtaining pixel points which are judged as the hand lever in each row of pixels, obtaining a hand lever scattered point graph, fitting the scattered point graph to obtain a hand lever curve, and specifically:
the classification network comprises a third encoder and classification full-connection layers, wherein the number of classification full-connection layers is the same as the number of pixel rows of the hand lever semantic segmentation map, and each row of pixels in the image corresponds to one classification full-connection layer; the training method is as follows: semantic segmentation maps containing seats, hand levers and other classes are selected to construct a training data set, 80% of which is randomly selected as the training set and the remaining 20% used as a verification set; the hand lever in the semantic segmentation map is marked as a curve with a width of 1 pixel, specifically, in each row the central pixel belonging to the hand lever is labeled 1 and all other pixels in that row are labeled 0; the network is trained using a cross entropy loss function.
The input of the third encoder is a hand lever semantic segmentation graph, the hand lever semantic segmentation graph is sent to a classification full-connection layer after characteristics are extracted, whether pixel points in a row of pixels are the hand levers or not is judged, a hand lever scatter diagram is obtained, and a RANSAC algorithm is used for fitting the scatter diagram, specifically:
randomly selecting N points in the scatter diagram to fit into a curve, calculating the distance from all the points in the scatter diagram to the curve, setting a threshold value, judging the points with the distance less than the threshold value as belonging to the curve, and recording the number of the points belonging to the curve; then randomly selecting N points, fitting a new curve again, and calculating the number of the points belonging to the curve according to the steps; finally, the curve with the largest number of points belonging to the curve is selected as the curve of the hand lever.
Thus, a hand lever curve graph is obtained.
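For illustration, a minimal RANSAC sketch of the fitting just described, with a second-degree polynomial as the assumed curve model and illustrative values for the sample size, distance threshold, and iteration count (none of which are fixed by the patent):

```python
import numpy as np

def ransac_curve(points, n_samples=5, threshold=2.0, iterations=200, degree=2):
    """Fit a polynomial y = f(x) to hand-lever scatter points with RANSAC."""
    best_coeffs, best_inliers = None, -1
    xs, ys = points[:, 0], points[:, 1]
    rng = np.random.default_rng(0)
    for _ in range(iterations):
        idx = rng.choice(len(points), size=n_samples, replace=False)
        coeffs = np.polyfit(xs[idx], ys[idx], degree)   # candidate curve
        # Distance of every scatter point to the candidate curve.
        dist = np.abs(np.polyval(coeffs, xs) - ys)
        inliers = int(np.sum(dist < threshold))          # points "belonging" to the curve
        if inliers > best_inliers:
            best_coeffs, best_inliers = coeffs, inliers
    return best_coeffs                                   # curve with the most inliers

# Usage: points is an (N, 2) array of (row, column) scatter points from the classifier.
curve = ransac_curve(np.random.rand(100, 2) * 64)
```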
Feature points are extracted from the hand lever curve graph, that is, feature points on the hand lever curve are extracted and matched to obtain a transformation matrix, and the images collected by the cameras are spliced according to the transformation matrix to obtain the panoramic image of the subway carriage. Image splicing is performed based on the fitted hand lever curve graph; compared with point features, line features obviously improve the splicing effect, effectively reduce the amount of calculation, and yield a more accurate transformation matrix, so the splicing result is more accurate and natural. The specific method and process for extracting and matching the line feature points are not within the scope of the present invention.
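Since the line-feature extraction and matching themselves are left outside the scope of the invention, the sketch below only illustrates the final step: estimating a transformation (homography) matrix from already-matched point pairs with OpenCV and warping one image onto the other. The matched point arrays are assumed inputs.

```python
import cv2
import numpy as np

def stitch_pair(img_left, img_right, pts_right, pts_left):
    """Estimate a homography from matched feature points and warp img_right.

    pts_right, pts_left: assumed (N, 2) float arrays of corresponding points
    taken from the hand lever curves of the two images.
    """
    # Homography mapping right-image coordinates into the left image's frame.
    H, _ = cv2.findHomography(pts_right, pts_left, cv2.RANSAC, 5.0)
    h, w = img_left.shape[:2]
    # Double the canvas width so both views fit, then paste the left image.
    panorama = cv2.warpPerspective(img_right, H, (w * 2, h))
    panorama[:, :w] = img_left
    return panorama
```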
And performing image fusion operation on the spliced image, wherein the embodiment adopts a median filtering method, namely, the median filtering method is utilized to remove points higher than a certain threshold value and eliminate sudden change of pixel values. The implementer may also select other image fusion algorithms such as an average method, a hat function method, a weighted average method, etc.
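Below is a small illustrative sketch of the median-filter fusion step, smoothing a band around the stitching seam; the seam position, band width, and kernel size are assumed parameters not given in the patent.

```python
import cv2
import numpy as np

def fuse_seam(panorama, seam_x, half_width=10, ksize=5):
    """Median-filter a vertical band around the stitching seam to
    suppress abrupt pixel-value changes (sketch; parameters assumed)."""
    x0, x1 = max(seam_x - half_width, 0), seam_x + half_width
    band = np.ascontiguousarray(panorama[:, x0:x1])
    panorama[:, x0:x1] = cv2.medianBlur(band, ksize)
    return panorama
```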
Sending the panoramic picture into a detection network, obtaining a crowd density heat picture after encoding and decoding operations, stacking and overlapping a plurality of crowd density heat pictures to obtain overlapped heat pictures, and obtaining the density grade of the subway carriage by a density grading network according to the overlapped heat pictures; the method comprises the following specific steps:
the detection network comprises a second encoder and a second decoder, and its training method is as follows: the obtained panoramic images of the subway carriage are used to construct a training data set. The label data are made in two steps: first, a key point (its X and Y coordinates) is marked at the head position of each person; second, the marked head key points are convolved with a Gaussian kernel to obtain the corresponding hot spots, where specific details such as the size of the Gaussian kernel are outside the scope of this invention. So that the detection network model converges better, the training data and label data are normalized, that is, the picture matrices are converted to floating point values in [0, 1], and then sent into the second encoder and second decoder for training with a mean squared error loss function. The second encoder extracts features from the input image data to obtain a feature map, and the second decoder performs upsampling and feature extraction on the feature map to obtain a crowd density heat map with pixel values in the range [0, 1].
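A sketch of building such a label heat map from the annotated head key points by placing impulses and convolving them with a Gaussian kernel, then normalising to [0, 1]; the Gaussian sigma is an assumed value, since the patent leaves the kernel details open.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def make_density_label(head_points, height, width, sigma=4.0):
    """head_points: list of (x, y) head coordinates in the panorama."""
    label = np.zeros((height, width), dtype=np.float32)
    for x, y in head_points:
        label[int(y), int(x)] = 1.0              # impulse at each annotated head
    label = gaussian_filter(label, sigma=sigma)  # spread impulses into hot spots
    if label.max() > 0:
        label /= label.max()                     # normalise to [0, 1]
    return label
```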
The crowd density heat map is post-processed to obtain the positions of the head key points of the crowd in the carriage. Methods for post-processing the heat map are well known and will not be described in detail herein.
Because the people can move in the carriage, a plurality of crowd density heat maps are stacked and superposed to obtain the moving track of the people, so that key points at the edges in the images are more prominent, the positions of the people can be obtained more accurately, and the problem of missing detection of the targets can be effectively prevented.
The obtained crowd density heat maps are stacked and superimposed, specifically: let the heat map at the current time be H_D and the heat map at the previous time be H_0; according to the formula

H_1 = ElementWiseMax(H_0, H_D * B_start)

a stacked heat map H_1 is obtained, where the ElementWiseMax operator takes the maximum of the pixel values at corresponding positions in the two heat maps, and B_start is the baseline value at which the heat value of H_D is initialized, set to 0.75 in this embodiment;

then, according to the formula

H'_1 = H_1 * α + H_D * (1 - α)

the accumulated heat in H_1 is attenuated to eliminate residual features, where the attenuation coefficient α is 0.975 and H'_1 is the superimposed heat map.
In this way, a person-flow trace is obtained that responds quickly enough while retaining residual key points.
The positions of people can be obtained accurately from the superimposed heat map, and errors in people counting caused by slight movement of people are avoided. Superimposing multiple heat maps effectively prevents missed detection of targets and allows the key-point positions of people to be acquired more accurately.
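A direct NumPy transcription of the stacking and decay formulas above, using the embodiment's values B_start = 0.75 and α = 0.975 (array shapes are illustrative):

```python
import numpy as np

def stack_and_decay(H_prev, H_curr, b_start=0.75, alpha=0.975):
    """H_prev corresponds to H_0 (previous superimposed map), H_curr to H_D."""
    # H_1 = ElementWiseMax(H_0, H_D * B_start): keep the hotter of old and new.
    H1 = np.maximum(H_prev, H_curr * b_start)
    # H'_1 = H_1 * alpha + H_D * (1 - alpha): slowly fade accumulated heat.
    return H1 * alpha + H_curr * (1.0 - alpha)

# Applied once per new frame: H_prev starts as zeros and is replaced by the output.
H_prev = np.zeros((480, 1920), dtype=np.float32)
H_curr = np.random.rand(480, 1920).astype(np.float32)
H_prev = stack_and_decay(H_prev, H_curr)
```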
The key point data of the superimposed heat map are sent into a density grading network and classified into several density grades according to the degree of crowding in the carriage. This embodiment uses five grades, namely grades 1 to 5, where a higher grade indicates a more crowded carriage and a lower grade a less crowded one; the division of density grades may be chosen by the implementer.
The density grading network comprises a fourth encoder and a density grading full-connection layer. The key point data of the superimposed heat map are passed through the fourth encoder for feature extraction, and the classification of the carriage density grade is then realized by the density grading full-connection layer. The training process is as follows: the key point data of the superimposed heat map are sent into the fourth encoder for feature extraction, the extracted features are flattened into a one-dimensional feature vector and sent into the density grading full-connection layer to classify the carriage crowding degree, and the probability of each density grade is output; training uses a cross entropy loss function. The density grade of each carriage is obtained by an argmax operation over the obtained probabilities of the density grades. The implementer may also choose loss functions such as weighted cross entropy or Focal Loss to deal with sample imbalance.
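A compact PyTorch sketch of the fourth encoder plus the density grading full-connection layer, with five output grades and an argmax at inference; the channel sizes are illustrative assumptions, and a small CNN stands in for the encoder.

```python
import torch
import torch.nn as nn

class DensityGrader(nn.Module):
    """Fourth encoder + density grading full-connection layer (sketch)."""
    def __init__(self, num_grades=5):
        super().__init__()
        self.encoder = nn.Sequential(          # stand-in for the fourth encoder
            nn.Conv2d(1, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4),
        )
        self.flatten = nn.Flatten()            # one-dimensional feature vector
        self.fc = nn.Linear(32 * 4 * 4, num_grades)

    def forward(self, heatmap):                # heatmap: (B, 1, H, W) superimposed map
        return self.fc(self.flatten(self.encoder(heatmap)))

model = DensityGrader()
criterion = nn.CrossEntropyLoss()              # or a weighted / focal loss variant
logits = model(torch.rand(4, 1, 128, 512))
grade = logits.argmax(dim=1) + 1               # grades 1..5 as in the embodiment
```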
In order to balance the speed and accuracy of the networks, the semantic segmentation network preferably adopts a skip-connection structure, and its blocks follow the block design of lightweight networks such as ShuffleNet and MobileNet. The fourth encoder preferably uses an EfficientNet image classification network to extract features.
Thus, the density grade reflecting the degree of congestion of each carriage is obtained.
The density grade information of the subway carriages is transmitted to the electronic display screen of the next platform through the wireless network, and passengers can select the carriage gate with a smaller crowding degree to wait for boarding according to the density grade information of each carriage on the electronic display screen.
The above description is intended to provide the skilled person with a better understanding of the present invention and is not intended to limit the present invention.

Claims (6)

1. A subway passenger waiting area planning method based on artificial intelligence is characterized by comprising the following steps:
step one, acquiring images by utilizing a plurality of cameras in a subway carriage;
step two, preprocessing the acquired images and sending them into a semantic segmentation network, obtaining a feature map through a first encoder, and processing the feature map through a first decoder to obtain a hand lever semantic segmentation map; sending the hand lever semantic segmentation map into a classification network, acquiring the pixel points judged as hand lever in each row of pixels to obtain a hand lever scatter diagram, and fitting the scatter diagram to obtain a hand lever curve graph; the classification network comprises a third encoder and classification full-connection layers, the input of the third encoder is the hand lever semantic segmentation map, and after feature extraction the features are sent into the classification full-connection layers to classify pixels, specifically into two classes, hand lever and other; the number of classification full-connection layers is the same as the number of pixel rows of the hand lever semantic segmentation map, and each row of pixels in the image corresponds to one classification full-connection layer;
step three, extracting and matching line feature points of the hand lever curve graph to obtain a transformation matrix, and splicing the images acquired by the cameras according to the transformation matrix to obtain a panoramic image of the subway carriage;
step four, sending the panoramic image into a detection network, obtaining a crowd density heat map after encoding and decoding operations, performing a stacking and superposition operation on the crowd density heat maps to obtain a superimposed heat map, and obtaining the density grade of the subway carriage from the superimposed heat map through a density grading network;
step five, transmitting the density grade information of the subway carriages to the electronic display screen of the next platform through a wireless network, so that passengers can select the carriage gate with a smaller crowding degree to wait for boarding according to the density grade information of each carriage on the electronic display screen.
2. The method of claim 1, wherein the training method of the semantic segmentation network is: selecting images collected by a camera to construct a training data set; labeling the data set, wherein the labeled seat is 1, the hand lever is 2, and the other classes are 3; training the network using a cross entropy loss function;
the training method of the classification network comprises the following steps: selecting semantic segmentation graphs containing seats, hand levers and other classes to construct a training data set; marking a hand lever in the semantic segmentation graph as a curve with the width of 1 pixel; the network is trained using a cross entropy loss function.
3. The method of claim 1, wherein fitting the scatter plot comprises: randomly selecting N points in the scatter diagram to fit into a curve, calculating the distance from all the points in the scatter diagram to the curve, setting a threshold value, judging the points with the distance less than the threshold value as belonging to the curve, and recording the number of the points belonging to the curve; then randomly selecting N points, fitting a new curve again, and calculating the number of the points belonging to the curve according to the steps; finally, the curve with the largest number of points belonging to the curve is selected as the curve of the hand lever.
4. The method of claim 1, wherein the detection network comprises a second encoder and a second decoder, the second encoder performs feature extraction on the panorama to obtain a feature map, and the second decoder performs upsampling and feature extraction on the feature map to obtain a crowd density heat map.
5. The method according to claim 1, characterized in that the stacking and superposition operation is specifically as follows: let the heat map at the current time be H_D and the heat map at the previous time be H_0; according to the formula

H_1 = ElementWiseMax(H_0, H_D * B_start)

a stacked heat map H_1 is obtained, where B_start is the baseline value used to initialize the heat value; then, according to the formula

H'_1 = H_1 * α + H_D * (1 - α)

an attenuation treatment of the accumulated heat is applied to H_1, where α is the attenuation coefficient and H'_1 is the superimposed heat map.
6. The method of claim 1, wherein the density-graded network comprises a fourth encoder and a density-graded fully-connected layer, and the superimposed heat map is subjected to feature extraction by the fourth encoder and classification of the density grade of the car is achieved by the density-graded fully-connected layer.
CN202010671689.6A 2020-07-13 2020-07-13 Subway passenger waiting area planning method based on artificial intelligence Withdrawn CN111797799A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010671689.6A CN111797799A (en) 2020-07-13 2020-07-13 Subway passenger waiting area planning method based on artificial intelligence

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010671689.6A CN111797799A (en) 2020-07-13 2020-07-13 Subway passenger waiting area planning method based on artificial intelligence

Publications (1)

Publication Number Publication Date
CN111797799A 2020-10-20

Family

ID=72808514

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010671689.6A Withdrawn CN111797799A (en) 2020-07-13 2020-07-13 Subway passenger waiting area planning method based on artificial intelligence

Country Status (1)

Country Link
CN (1) CN111797799A (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114550102A (en) * 2022-03-01 2022-05-27 上海中通吉网络技术有限公司 Cargo accumulation detection method, device, equipment and system
CN115546652A (en) * 2022-11-29 2022-12-30 城云科技(中国)有限公司 Multi-time-state target detection model and construction method, device and application thereof


Similar Documents

Publication Publication Date Title
CN110956094B (en) RGB-D multi-mode fusion personnel detection method based on asymmetric double-flow network
CN109670429B (en) Method and system for detecting multiple targets of human faces of surveillance videos based on instance segmentation
CN108460764B (en) Ultrasonic image intelligent segmentation method based on automatic context and data enhancement
CN109543695B (en) Population-density population counting method based on multi-scale deep learning
CN110599537A (en) Mask R-CNN-based unmanned aerial vehicle image building area calculation method and system
CN112257609B (en) Vehicle detection method and device based on self-adaptive key point heat map
CN112288008B (en) Mosaic multispectral image disguised target detection method based on deep learning
CN104077577A (en) Trademark detection method based on convolutional neural network
Liu et al. A night pavement crack detection method based on image‐to‐image translation
CN111640116B (en) Aerial photography graph building segmentation method and device based on deep convolutional residual error network
CN106897681A (en) A kind of remote sensing images comparative analysis method and system
CN113313031B (en) Deep learning-based lane line detection and vehicle transverse positioning method
CN111797803A (en) Road guardrail abnormity detection method based on artificial intelligence and image processing
CN114022408A (en) Remote sensing image cloud detection method based on multi-scale convolution neural network
CN111797799A (en) Subway passenger waiting area planning method based on artificial intelligence
CN112308087B (en) Integrated imaging identification method based on dynamic vision sensor
CN115063786A (en) High-order distant view fuzzy license plate detection method
CN113139489A (en) Crowd counting method and system based on background extraction and multi-scale fusion network
Zhu et al. Towards automatic wild animal detection in low quality camera-trap images using two-channeled perceiving residual pyramid networks
CN116434088A (en) Lane line detection and lane auxiliary keeping method based on unmanned aerial vehicle aerial image
WO2019228450A1 (en) Image processing method, device, and equipment, and readable medium
WO2022205329A1 (en) Object detection method, object detection apparatus, and object detection system
CN112016518B (en) Crowd distribution form detection method based on unmanned aerial vehicle and artificial intelligence
CN113743300A (en) Semantic segmentation based high-resolution remote sensing image cloud detection method and device
CN115546667A (en) Real-time lane line detection method for unmanned aerial vehicle scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
WW01 Invention patent application withdrawn after publication

Application publication date: 20201020