CN111368749B - Automatic identification method and system for stair area

Automatic identification method and system for stair area

Info

Publication number
CN111368749B
CN111368749B (application CN202010152534.1A)
Authority
CN
China
Prior art keywords
stair
picture
semantic segmentation
segmentation model
pictures
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202010152534.1A
Other languages
Chinese (zh)
Other versions
CN111368749A (en)
Inventor
黄泽
胡太祥
滕安琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alnnovation Guangzhou Technology Co ltd
Original Assignee
Alnnovation Guangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alnnovation Guangzhou Technology Co ltd filed Critical Alnnovation Guangzhou Technology Co ltd
Priority to CN202010152534.1A priority Critical patent/CN111368749B/en
Publication of CN111368749A publication Critical patent/CN111368749A/en
Application granted granted Critical
Publication of CN111368749B publication Critical patent/CN111368749B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • Molecular Biology (AREA)
  • Computational Linguistics (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computing Systems (AREA)
  • General Health & Medical Sciences (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic identification method and system for stair areas, relating to the technical field of computer vision, and comprising the following steps: acquiring a stair video in an application scene; obtaining the stair picture of each frame in the stair video and performing human body identification on each stair picture to obtain the number of human bodies in it; comparing the number of human bodies with a preset number threshold and, when the number of human bodies is smaller than the threshold, performing semantic segmentation on the stair picture with a pre-generated stair semantic segmentation model to obtain the stair region in the picture for subsequent judgment of the safety condition on the stairs. By replacing conventional manual calibration with automatic identification of the stair region, the invention improves the identification efficiency of stair regions; by extracting stair image features with a convolution that carries learned offsets, it improves the mIoU (mean intersection over union) by 2%, yielding more accurate prediction results.

Description

Automatic identification method and system for stair area
Technical Field
The invention relates to the technical field of computer vision, in particular to an automatic identification method and system for a stair area.
Background
At present, safety protection methods for stair video scenes require staff to mark the stair area manually before the situation of people on the stairs can be judged. This procedure is cumbersome: every time a camera is installed or repositioned, staff must learn to use a calibration tool and draw the stair area on the video by hand. Because stairs are irregular regions, manual marking is coarse; determining, for example, whether a passenger is climbing on a stair handrail requires a finely delineated handrail region.
Existing stair calibration methods generally require the area to be delineated without interference, which places high demands on the scene and makes real-time application difficult in high-traffic locations such as subways and shopping malls. In addition, existing semantic segmentation algorithms for stairs train with a very small batch size, usually 1, so batch normalization cannot be used to accelerate training; moreover, a staircase is a large irregular shape whose appearance in the image is easily affected by the camera's installation position.
Disclosure of Invention
The invention aims to provide an automatic identification method and system for a stair area.
To achieve this purpose, the invention adopts the following technical scheme:
the automatic identification method for the stair area comprises the following steps:
step S1, acquiring a stair video in an application scene;
step S2, obtaining stair pictures of each frame in the stair video, and respectively carrying out human body identification on each stair picture to obtain the number of human bodies in each stair picture;
step S3, comparing the number of human bodies with a preset number threshold:
if the number of human bodies is smaller than the number threshold, turning to step S4;
if the number of the human bodies is not smaller than the number threshold, exiting;
and step S4, performing semantic segmentation on the stair picture according to a pre-generated stair semantic segmentation model to obtain the stair region in the stair picture for subsequent judgment of the safety condition on the stairs.
As a preferred embodiment of the present invention, the number threshold is 2.
As a preferred scheme of the invention, the method further comprises a process of generating the stair semantic segmentation model in advance, which specifically comprises the following steps:
step A1, collecting stair scene pictures;
step A2, marking the area where the stairs are located in each stair scene picture to obtain corresponding stair marking pictures;
step A3, grouping the stair marking pictures to obtain a training set and a testing set;
step A4, training according to each stair marking picture in the training set to obtain a stair semantic segmentation model;
and step A5, verifying the stair semantic segmentation model according to each stair marking picture in the test set, and storing the semantic segmentation model meeting a preset verification standard.
As a preferred mode of the present invention, the stair scene pictures include pictures of the stair scene without pedestrians and pictures of the stair scene with pedestrians.
As a preferred mode of the invention, in the stair scene pictures with pedestrians, the number of pedestrians is less than 2.
In step S4, during semantic segmentation of the stair picture, the semantic segmentation model extracts stair region features using a convolution with offsets.
In step A4, a batch normalization scheme is adopted during the training of the stair semantic segmentation model to accelerate its convergence.
An automatic stair area identification system, applying the automatic stair area identification method according to any one of the above, specifically comprising:
the data acquisition module is used for acquiring stair videos in an application scene;
the human body identification module is connected with the data acquisition module and is used for acquiring stair pictures of each frame in the stair video, and respectively carrying out human body identification on each stair picture to obtain the number of human bodies in each stair picture;
the data comparison module is connected with the human body identification module and is used for comparing the number of human bodies with a preset number threshold value and outputting a corresponding comparison result when the number of human bodies is smaller than the number threshold value;
the stair identification module is connected with the data comparison module and is used for carrying out semantic segmentation on the stair picture according to the comparison result and a pre-generated stair semantic segmentation model to obtain the stair region in the stair picture for subsequent judgment of the safety condition on the stairs.
As a preferred solution of the present invention, the stair recognition system further includes a model generation module connected to the stair recognition module, where the model generation module specifically includes:
the data acquisition unit is used for acquiring stair scene pictures;
the image marking unit is connected with the data acquisition unit and used for marking the areas where the stairs in the stair scene images are positioned respectively to obtain corresponding stair marking images;
the picture grouping unit is connected with the picture marking unit and used for grouping the stair marking pictures to obtain a training set and a testing set;
the model training unit is connected with the picture grouping unit and is used for training according to each stair marking picture in the training set to obtain a stair semantic segmentation model;
the model verification unit is respectively connected with the picture grouping unit and the model training unit and is used for verifying the stair semantic segmentation model according to each stair marking picture in the test set and storing the semantic segmentation model meeting a preset verification standard.
The invention has the beneficial effects that:
1) Automatic identification of the stair area replaces conventional manual calibration, improving identification efficiency and making the method better suited to high-traffic scenes such as shopping malls and subways;
2) In the stair semantic segmentation network, stair region image features are extracted with a convolution that carries learned offsets, improving the mIoU (mean intersection over union between the predicted and actual regions) by 2% and yielding more accurate recognition results (a sketch of the mIoU computation is given after this list);
3) The features are normalized in groups during training, which noticeably improves the convergence rate of the model.
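As a point of reference for the 2% figure above, the following is a minimal sketch of how mIoU between a predicted stair mask and a ground-truth mask can be computed; the function name and the two-class setting (background/stair) are illustrative assumptions, not part of the patent.

```python
import numpy as np

def mean_iou(pred_mask: np.ndarray, gt_mask: np.ndarray, num_classes: int = 2) -> float:
    """Mean intersection-over-union between predicted and ground-truth label maps.

    Both arrays hold integer class labels, e.g. 0 = background, 1 = stair.
    """
    ious = []
    for cls in range(num_classes):
        pred_c = pred_mask == cls
        gt_c = gt_mask == cls
        union = np.logical_or(pred_c, gt_c).sum()
        if union == 0:          # class absent from both masks; skip it
            continue
        inter = np.logical_and(pred_c, gt_c).sum()
        ious.append(inter / union)
    return float(np.mean(ious))
```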
Drawings
To illustrate the technical solutions of the embodiments of the present invention more clearly, the drawings required for the embodiments are briefly described below. The drawings described below represent only some embodiments of the present invention; a person of ordinary skill in the art could obtain other drawings from them without inventive effort.
Fig. 1 is a flow chart of a method for automatically identifying a stair area according to an embodiment of the invention.
Fig. 2 is a flow chart of the overall automatic stair area identification process according to an embodiment of the invention.
Fig. 3 is a flow chart of the process of pre-generating a stair semantic segmentation model according to an embodiment of the invention.
Fig. 4 is a schematic structural diagram of an automatic stair area recognition system according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further described below by the specific embodiments with reference to the accompanying drawings.
The drawings are schematic and for illustration only; they do not depict physical products and are not intended to limit the present patent. To better illustrate the embodiments of the invention, certain elements of the drawings may be omitted, enlarged, or reduced, and they do not represent the size of the actual product. Those skilled in the art will appreciate that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numbers in the drawings of the embodiments of the invention correspond to the same or similar components. In the description of the present invention, terms such as "upper", "lower", "left", "right", "inner", and "outer" indicate orientations or positional relationships based on those shown in the drawings; they are used only for convenience and simplification of the description and do not indicate or imply that the apparatus or elements referred to must have a specific orientation or be constructed and operated in a specific orientation. Such terms are therefore merely illustrative and should not be construed as limiting the present patent; their specific meaning can be understood by those of ordinary skill in the art according to the specific circumstances.
In the description of the present invention, unless explicitly stated and limited otherwise, terms such as "coupled" should be interpreted broadly: the connection may be fixed, detachable, or integral; mechanical or electrical; direct or through an intermediate medium; and it may denote communication between the interiors of two components or an interaction between them. The specific meaning of these terms in the present invention will be understood by those of ordinary skill in the art according to the specific circumstances.
Based on the technical problems existing in the prior art, the invention provides an automatic identification method for a stair area, as shown in fig. 1, specifically comprising the following steps:
step S1, acquiring a stair video in an application scene;
step S2, stair pictures of each frame in the stair video are obtained, and human body identification is carried out on each stair picture respectively, so that the number of human bodies in each stair picture is obtained;
step S3, comparing the number of human bodies with a preset number threshold value:
if the number of the human bodies is smaller than the number threshold, turning to the step S4;
if the number of the human bodies is not less than the number threshold, exiting;
and step S4, performing semantic segmentation on the stair picture according to a pre-generated stair semantic segmentation model to obtain the stair region in the stair picture for subsequent judgment of the safety condition on the stairs.
Specifically, in this embodiment, the stair area is identified automatically by semantic segmentation, which classifies the image at the pixel level. The approach is suitable for segmenting scenes such as stairs and escalators; it avoids region errors when the camera shakes or changes position, adaptively adjusts the stair area, and is therefore robust. By training with a small number of people labelled as part of the stairs, the stair area can still be segmented well when the escalator or staircase is partially occluded by a few people, so the method suits high-traffic scenes and keeps the stair area updated in real time. Conventional stair detection typically uses a rectangular bounding box and extracts the in-box features by convolution; for a large, irregular target such as a staircase this mixes in a great deal of background and demands more data. The invention also provides a way to accelerate training convergence by normalizing the features in groups and normalizing the convolution weights.
As shown in fig. 2, the automatic stair area identification process of the invention is as follows. First, stair pictures are collected in scenes such as subways and shopping malls, mainly pictures in which the crowd is sparse; in this embodiment "sparse" means few people, preferably fewer than two pedestrians on the stairs. The collected stair pictures are annotated, and the annotated pictures are divided into a training set and a test set. A stair semantic segmentation model is then built and trained, verified on the test set, and saved. At run time, the stair video of each scene is accessed and head detection is performed on every frame; when few heads are detected (preferably fewer than two), semantic segmentation is applied to locate the stair area, in preparation for the subsequent judgment of safety conditions on the stairs. A minimal sketch of this pipeline is given below.
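The following sketch illustrates the frame-by-frame pipeline just described, assuming a head detector and a trained stair segmentation model are already available; `detect_heads` and `segment_stairs` are hypothetical callables, not the patent's actual implementation.

```python
import cv2  # OpenCV, used here only to decode the stair video

HEAD_COUNT_THRESHOLD = 2  # the preferred "fewer than two people" condition

def recognize_stair_regions(video_path, detect_heads, segment_stairs):
    """Frame-by-frame stair-region identification as described above.

    `detect_heads(frame)` is assumed to return the list of detected heads;
    `segment_stairs(frame)` is assumed to return a binary stair mask.
    """
    masks = []
    cap = cv2.VideoCapture(video_path)
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        # Only segment frames with a sparse crowd (fewer than two people).
        if len(detect_heads(frame)) < HEAD_COUNT_THRESHOLD:
            masks.append(segment_stairs(frame))
    cap.release()
    return masks
```

In practice the per-frame mask could also be used to update a running estimate of the stair region rather than being stored frame by frame.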
As a preferred embodiment of the present invention, the number threshold is 2.
As a preferred scheme of the invention, the method further comprises a process of generating the stair semantic segmentation model in advance, as shown in fig. 3, which specifically comprises the following steps:
step A1, collecting stair scene pictures;
step A2, marking the area where the stairs are located in each stair scene picture to obtain the corresponding stair marking pictures;
step A3, grouping the stair marking pictures to obtain a training set and a testing set;
step A4, training according to each stair marking picture in the training set to obtain a stair semantic segmentation model;
and step A5, verifying the stair semantic segmentation model against each stair marking picture in the test set and storing the model if it meets the preset verification standard. A sketch of this split-train-verify flow is given below.
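The sketch below illustrates steps A3 to A5 under stated assumptions: the split ratio, epoch count, and mIoU-based verification standard are placeholders chosen for illustration, since the patent does not fix concrete values.

```python
import random

def split_dataset(stair_marking_pictures, train_ratio=0.8, seed=0):
    """Step A3: group the stair marking pictures into a training set and a test set."""
    pictures = list(stair_marking_pictures)
    random.Random(seed).shuffle(pictures)
    cut = int(len(pictures) * train_ratio)
    return pictures[:cut], pictures[cut:]

def train_and_verify(model, train_set, test_set, train_one_epoch, evaluate_miou,
                     epochs=50, verification_standard=0.7):
    """Steps A4-A5: train the segmentation model, then keep it only if it meets
    the preset verification standard on the test set.

    `train_one_epoch` and `evaluate_miou` are hypothetical callables supplied
    by the surrounding training code.
    """
    for _ in range(epochs):
        train_one_epoch(model, train_set)
    score = evaluate_miou(model, test_set)
    return model if score >= verification_standard else None
```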
Specifically, in this embodiment, labelling a small number of people as part of the stairs or escalator in the segmentation task alleviates the problem of real-time updating in scenes with heavy foot traffic, and the approach adapts well to other scenes.
As a preferred mode of the present invention, the stair scene pictures include pictures of the stair scene without pedestrians and pictures of the stair scene with pedestrians.
As a preferred scheme of the invention, in the stair scene pictures with pedestrians, the number of pedestrians is less than 2.
In step S4, during semantic segmentation of the stair picture, the semantic segmentation model extracts stair region features using a convolution with offsets.
In particular, in this embodiment, the camera installation angle in a stair segmentation scene deforms the appearance of the stairs. Conventional convolution (and pooling) operations sample on a fixed rectangular grid, which is a severe limitation when modelling an irregular object such as a staircase: the fixed grid offers no strategy for handling deformation, is too rigid for flexible objects, and forces different positions of the same feature map, which may correspond to objects of different shapes, to be computed with the same convolution.
To solve this problem, in this embodiment the convolution is not restricted to the prescribed 3×3 rectangular grid; an offset is added to each sampling position.
Namely, the standard convolution is calculated as

    y(p_0) = \sum_{p_n \in R} w(p_n) \cdot x(p_0 + p_n)

where x denotes the input feature map, y the feature map output by the convolution operation, p_0 a position in the output feature map, p_n a position enumerated over the regular sampling grid R, and w(p_n) the weight corresponding to position p_n.

This is adjusted to

    y(p_0) = \sum_{p_n \in R} w(p_n) \cdot x(p_0 + p_n + \Delta p_n)

where x, y, p_0, p_n and w(p_n) are as above, and \Delta p_n denotes the learned offset added to sampling position p_n.
Because a staircase is a large irregular shape whose appearance is easily affected by the camera's installation position, adding the offset to the convolution operation enlarges the receptive field considerably and strengthens the ability to model the deformation and scale of the object. A sketch of such an offset convolution is given below.
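As an illustration of this idea, the sketch below uses torchvision's deformable convolution as a stand-in for the "convolution with offset": a small ordinary convolution predicts the offsets \Delta p_n, which then shift the sampling positions of a 3×3 convolution. This is a minimal sketch, not the patent's exact network.

```python
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class OffsetConvBlock(nn.Module):
    """3x3 convolution whose sampling grid is shifted by learned offsets."""

    def __init__(self, in_channels, out_channels, kernel_size=3, padding=1):
        super().__init__()
        # One (dy, dx) offset pair per kernel position and output location.
        self.offset_pred = nn.Conv2d(in_channels, 2 * kernel_size * kernel_size,
                                     kernel_size, padding=padding)
        self.deform_conv = DeformConv2d(in_channels, out_channels,
                                        kernel_size, padding=padding)

    def forward(self, x):
        offsets = self.offset_pred(x)        # Delta p_n for every sampling point
        return self.deform_conv(x, offsets)  # y(p0) = sum w(p_n) x(p0 + p_n + Delta p_n)

feature_map = torch.randn(1, 64, 128, 128)
out = OffsetConvBlock(64, 64)(feature_map)   # same spatial size, offset sampling
```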
In step A4, a batch normalization scheme is adopted during the training of the stair semantic segmentation model to accelerate its convergence.
Specifically, in this embodiment, the batch size during stair semantic segmentation training is often limited to 1 by the equipment and the algorithm, whereas BN (batch normalization) generally assumes a batch size of around 32; BN therefore contributes little in stair semantic segmentation. BN is a normalization method commonly used in deep learning and plays a significant role in speeding up training and convergence.
To solve this problem, in this embodiment the image channels are divided into several groups and each group is normalized separately: the feature tensor is reshaped from [N, C, H, W] to [N, G, C // G, H, W] and normalized over the [C // G, H, W] dimensions, which effectively improves the convergence speed of the stair semantic segmentation model. Here N is the number of pictures in the feature batch, G the number of channel groups, C the number of image channels, H the picture height, and W the picture width. A sketch of this grouped normalization is given below.
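A minimal sketch of this grouped normalization follows; it reproduces the reshape-and-normalize step described above and, under the stated shapes, matches PyTorch's built-in `nn.GroupNorm` (without affine parameters).

```python
import torch
import torch.nn as nn

def group_norm(x: torch.Tensor, num_groups: int, eps: float = 1e-5) -> torch.Tensor:
    """Normalize an [N, C, H, W] feature map over each group of C // G channels."""
    n, c, h, w = x.shape
    x = x.view(n, num_groups, c // num_groups, h, w)       # [N, G, C // G, H, W]
    mean = x.mean(dim=(2, 3, 4), keepdim=True)
    var = x.var(dim=(2, 3, 4), keepdim=True, unbiased=False)
    x = (x - mean) / torch.sqrt(var + eps)                 # normalize over [C // G, H, W]
    return x.view(n, c, h, w)

x = torch.randn(1, 64, 32, 32)   # works even with a batch size of 1
assert torch.allclose(group_norm(x, 8), nn.GroupNorm(8, 64, affine=False)(x), atol=1e-4)
```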
Further, whereas common normalization methods act on the input of the activation function, normalizing it in various ways, this invention processes the convolution weights directly, which has a more direct effect on training speed. Specifically, the convolution weights are processed as follows:
    \hat{W}_{i,j} = \frac{W_{i,j} - \mu_{W_{i,\cdot}}}{\sigma_{W_{i,\cdot}} + \epsilon}

where

    \mu_{W_{i,\cdot}} = \frac{1}{I}\sum_{j=1}^{I} W_{i,j}, \qquad
    \sigma_{W_{i,\cdot}} = \sqrt{\frac{1}{I}\sum_{j=1}^{I} \left(W_{i,j} - \mu_{W_{i,\cdot}}\right)^{2}}

and where \hat{W}_{i,j} denotes the processed convolution weight, W_{i,j} the convolution weight in row i and column j, \mu_{W_{i,\cdot}} the mean of the weights in row i, \sigma_{W_{i,\cdot}} their standard deviation, \epsilon a small deviation value, and I the number of weights in row i (the product of the number of channels and the convolution kernel size).
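The sketch below standardizes the convolution weights directly, per output channel, before each forward pass, in the spirit of the formula above (it mirrors the published "weight standardization" technique); the layer and variable names are illustrative.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class StdConv2d(nn.Conv2d):
    """Conv2d whose weight W_{i,.} is standardized per output channel i."""

    def forward(self, x, eps: float = 1e-5):
        w = self.weight                                           # [C_out, C_in, kH, kW]
        mean = w.mean(dim=(1, 2, 3), keepdim=True)                # mu_{W_i,.}
        std = w.std(dim=(1, 2, 3), keepdim=True, unbiased=False)  # sigma_{W_i,.}
        w_hat = (w - mean) / (std + eps)                          # standardized weight
        return F.conv2d(x, w_hat, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

conv = StdConv2d(64, 64, kernel_size=3, padding=1)
y = conv(torch.randn(1, 64, 32, 32))   # drop-in replacement for a normal 3x3 conv
```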
An automatic stair area identification system, which applies the stair area automatic identification method according to any one of the above, as shown in fig. 4, specifically includes:
the data acquisition module 1 is used for acquiring stair videos in an application scene;
the human body identification module 2 is connected with the data acquisition module 1 and is used for acquiring stair pictures of each frame in the stair video, and respectively carrying out human body identification on each stair picture to obtain the number of human bodies in each stair picture;
the data comparison module 3 is connected with the human body identification module 2 and is used for comparing the number of human bodies with a preset number threshold value and outputting a corresponding comparison result when the number of human bodies is smaller than the number threshold value;
the stair identification module 4 is connected with the data comparison module 3 and is used for carrying out semantic segmentation on the stair picture according to the comparison result and the pre-generated stair semantic segmentation model to obtain the stair region in the stair picture for subsequent judgment of the safety condition on the stairs.
As a preferred embodiment of the present invention, the system further includes a model generation module 5 connected to the stair identification module 4, where the model generation module 5 specifically includes:
a data acquisition unit 51, configured to acquire a stair scene picture;
the picture marking unit 52 is connected with the data acquisition unit 51 and is used for marking the area where the stairs are located in each stair scene picture to obtain the corresponding stair marking pictures;
the picture grouping unit 53 is connected with the picture marking unit 52 and is used for grouping the stair marking pictures to obtain a training set and a test set;
the model training unit 54 is connected with the picture grouping unit 53 and is used for training according to each stair marking picture in the training set to obtain a stair semantic segmentation model;
the model verification unit 55 is respectively connected with the picture grouping unit 53 and the model training unit 54, and is used for verifying the stair semantic segmentation model according to each stair marking picture in the test set, and storing the semantic segmentation model meeting the preset verification standard.
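For illustration only, the following sketch shows one way the modules described above could be wired together; the class and parameter names are placeholders rather than the patent's implementation.

```python
class StairRecognitionSystem:
    """Illustrative wiring of modules 1-4 described above."""

    def __init__(self, acquire_frames, count_people, segment_stairs, count_threshold=2):
        self.acquire_frames = acquire_frames    # module 1: yields video frames
        self.count_people = count_people        # module 2: people count per frame
        self.segment_stairs = segment_stairs    # module 4: stair semantic segmentation
        self.count_threshold = count_threshold  # module 3: comparison threshold

    def run(self, scene):
        stair_regions = []
        for frame in self.acquire_frames(scene):
            if self.count_people(frame) < self.count_threshold:   # module 3
                stair_regions.append(self.segment_stairs(frame))  # module 4
        return stair_regions
```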
It should be understood that the above description merely illustrates the preferred embodiments of the present invention and the technical principles employed. Those skilled in the art will appreciate that various modifications, equivalents, and variations can be made to the invention; such modifications fall within the scope of the invention as long as they do not depart from its spirit. In addition, some terms used in the specification and claims of the present application are not limiting but are merely used for convenience of description.

Claims (7)

1. An automatic identification method for a stair area is characterized by comprising the following steps:
step S1, acquiring a stair video in an application scene;
step S2, obtaining stair pictures of each frame in the stair video, and respectively carrying out human body identification on each stair picture to obtain the number of human bodies in each stair picture;
step S3, comparing the number of human bodies with a preset number threshold:
if the number of human bodies is smaller than the number threshold, turning to step S4;
if the number of the human bodies is not smaller than the number threshold, exiting;
and step S4, performing semantic segmentation on the stair picture according to a pre-generated stair semantic segmentation model to obtain the stair region in the stair picture for subsequent judgment of the safety condition on the stairs.
2. The method of automatic identification of stair areas according to claim 1, wherein the number threshold is 2.
3. The method for automatically identifying a stair region according to claim 1, further comprising a process of pre-generating the stair semantic segmentation model, specifically comprising:
step A1, collecting stair scene pictures;
a2, marking areas where stairs in the stair scene pictures are located respectively to obtain corresponding stair marking pictures;
step A3, grouping the stair marking pictures to obtain a training set and a testing set;
step A4, training according to each stair marking picture in the training set to obtain a stair semantic segmentation model;
and step A5, verifying the stair semantic segmentation model according to each stair marking picture in the test set, and storing the semantic segmentation model meeting a preset verification standard.
4. A method of automatically identifying a stair region according to claim 3, wherein the stair scene pictures include the stair scene picture without a pedestrian and the stair scene picture with a pedestrian.
5. The automatic stair area identification method according to claim 4, wherein the number of pedestrians in the stair scene picture in the case of pedestrians is less than 2.
6. The automatic stair region identification method according to claim 3, wherein in the step S4, in the process of performing semantic segmentation on the stair image, the semantic segmentation model performs stair region feature extraction by using convolution with offset.
7. The automatic stair region identification method according to claim 6, wherein in the step A4, a batch normalization mode is adopted to accelerate convergence speed of the stair semantic segmentation model in the training process of the stair semantic segmentation model.
CN202010152534.1A 2020-03-06 2020-03-06 Automatic identification method and system for stair area Active CN111368749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010152534.1A CN111368749B (en) 2020-03-06 2020-03-06 Automatic identification method and system for stair area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010152534.1A CN111368749B (en) 2020-03-06 2020-03-06 Automatic identification method and system for stair area

Publications (2)

Publication Number Publication Date
CN111368749A CN111368749A (en) 2020-07-03
CN111368749B true CN111368749B (en) 2023-06-13

Family

ID=71211779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010152534.1A Active CN111368749B (en) 2020-03-06 2020-03-06 Automatic identification method and system for stair area

Country Status (1)

Country Link
CN (1) CN111368749B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529903B (en) * 2021-02-03 2022-01-28 德鲁动力科技(成都)有限公司 Stair height and width visual detection method and device and robot dog
CN113469059A (en) * 2021-07-02 2021-10-01 智能移动机器人(中山)研究院 Stair identification method based on binocular vision
CN114494607A (en) * 2022-02-17 2022-05-13 佛山市速康座椅电梯科技有限公司 Method for three-dimensional forming by photographing measurement

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683770A (en) * 2015-03-12 2015-06-03 杨明 Campus teaching building security monitoring system and method based on image recognition
WO2018125914A1 (en) * 2016-12-31 2018-07-05 Vasuyantra Corp., A Delaware Corporation Method and device for visually impaired assistance
CN109766868A (en) * 2019-01-23 2019-05-17 哈尔滨工业大学 A kind of real scene based on body critical point detection blocks pedestrian detection network and its detection method
CN110207704A (en) * 2019-05-21 2019-09-06 南京航空航天大学 A kind of pedestrian navigation method based on the identification of architectural stair scene intelligent
CN110705366A (en) * 2019-09-07 2020-01-17 创新奇智(广州)科技有限公司 Real-time human head detection method based on stair scene
KR102080532B1 (en) * 2019-09-17 2020-02-24 영남대학교 산학협력단 Apparatus and method for floor identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Stair recognition based on convolutional neural networks; 顾昊 et al.; 《图形图像》 (Graphics and Image); pp. 1-4 *

Also Published As

Publication number Publication date
CN111368749A (en) 2020-07-03

Similar Documents

Publication Publication Date Title
CN111368749B (en) Automatic identification method and system for stair area
CN108805093B (en) Escalator passenger tumbling detection method based on deep learning
CN110765964B (en) Method for detecting abnormal behaviors in elevator car based on computer vision
CN109819208B (en) Intensive population security monitoring management method based on artificial intelligence dynamic monitoring
US20190180135A1 (en) Pixel-level based micro-feature extraction
CN105260705B (en) A kind of driver's making and receiving calls behavioral value method suitable under multi-pose
CN111241975B (en) Face recognition detection method and system based on mobile terminal edge calculation
CN111144247A (en) Escalator passenger reverse-running detection method based on deep learning
US20060115157A1 (en) Image processing device, image device, image processing method
CN111126399A (en) Image detection method, device and equipment and readable storage medium
CN108288047A (en) A kind of pedestrian/vehicle checking method
CN103679215B (en) The video frequency monitoring method of the groupment behavior analysiss that view-based access control model big data drives
CN112836667B (en) Method for judging falling and reverse running of passengers going upstairs escalator
KR101246120B1 (en) A system for recognizing license plate using both images taken from front and back faces of vehicle
CN107483894A (en) Judge to realize the high ferro station video monitoring system of passenger transportation management based on scene
CN105938551A (en) Video data-based face specific region extraction method
CN111583170A (en) Image generation device and image generation method
CN115303901B (en) Elevator traffic flow identification method based on computer vision
JP2599701B2 (en) Elevator Standby Passenger Number Detection Method
CN113033482A (en) Traffic sign detection method based on regional attention
CN109117723A (en) Blind way detection method based on color mode analysis and semantic segmentation
CN106056078A (en) Crowd density estimation method based on multi-feature regression ensemble learning
CN109919068B (en) Real-time monitoring method for adapting to crowd flow in dense scene based on video analysis
CN115661757A (en) Automatic detection method for pantograph arcing
KR102215565B1 (en) Apparatus and method for detecting human behavior in escalator area

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant