CN111368749A - Automatic identification method and system for stair area - Google Patents

Automatic identification method and system for stair area Download PDF

Info

Publication number
CN111368749A
Authority
CN
China
Prior art keywords
stair
picture
semantic segmentation
segmentation model
area
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010152534.1A
Other languages
Chinese (zh)
Other versions
CN111368749B (en)
Inventor
黄泽
胡太祥
滕安琪
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Alnnovation Guangzhou Technology Co ltd
Original Assignee
Alnnovation Guangzhou Technology Co ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Alnnovation Guangzhou Technology Co ltd filed Critical Alnnovation Guangzhou Technology Co ltd
Priority to CN202010152534.1A
Publication of CN111368749A
Application granted
Publication of CN111368749B
Legal status: Active (current)
Anticipated expiration

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00 Scenes; Scene-specific elements
    • G06V 20/50 Context or environment of the image
    • G06V 20/52 Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53 Recognition of crowd images, e.g. recognition of crowd congestion
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/24 Classification techniques
    • G06F 18/241 Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/04 Architecture, e.g. interconnection topology
    • G06N 3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00 Computing arrangements based on biological models
    • G06N 3/02 Neural networks
    • G06N 3/08 Learning methods

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Data Mining & Analysis (AREA)
  • General Physics & Mathematics (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Software Systems (AREA)
  • Mathematical Physics (AREA)
  • Computing Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Evolutionary Biology (AREA)
  • Multimedia (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an automatic identification method and system for a stair area, relating to the technical field of computer vision. The method comprises the following steps: acquiring a stair video in an application scene; acquiring the stair picture of each frame in the stair video and performing human body identification on each stair picture to obtain the number of human bodies in each picture; comparing the number of human bodies with a preset number threshold and, when the number of human bodies is smaller than the threshold, performing semantic segmentation on the stair picture with a pre-generated stair semantic segmentation model to obtain the stair area in the picture for subsequent judgment of the safety condition on the stairs. By replacing traditional manual calibration with automatic identification of the stair area, the invention improves the efficiency of stair-area identification; by extracting stair image features with convolution with offsets, it improves mIoU (mean intersection over union) by 2% and yields more accurate prediction results.

Description

Automatic identification method and system for stair area
Technical Field
The invention relates to the technical field of computer vision, in particular to an automatic identification method and system for a stair area.
Background
At present, safety protection for stair video scenes requires manually calibrating the stair area and judging the condition of people on the stairs. This is cumbersome: every time a camera is installed or repositioned, a worker must learn a calibration tool and manually outline the stair area on the video. Because stairs are irregular regions, manually calibrated stair areas are rough; for example, judging whether a passenger is climbing onto the stair handrail requires a fine delineation of the handrail area.
Existing stair calibration methods generally require region segmentation to be performed without people in the scene, which places strict requirements on the scene and makes real-time application difficult in high-traffic scenes such as subways and shopping malls. In addition, the batch size used when training current stair semantic segmentation algorithms is small, usually 1, so batch normalization cannot be used to accelerate training; moreover, stairs are large irregular shapes whose apparent form is easily affected by the camera's installation position.
Disclosure of Invention
The invention aims to provide an automatic identification method and system for a stair area.
In order to achieve the purpose, the invention adopts the following technical scheme:
the automatic identification method of the stair area specifically comprises the following steps:
step S1, obtaining a stair video in an application scene;
step S2, obtaining a stair picture of each frame in the stair video, and respectively carrying out human body identification on each stair picture to obtain the number of human bodies in each stair picture;
step S3, comparing the number of human bodies with a preset number threshold:
if the number of the human bodies is smaller than the number threshold, turning to step S4;
if the number of the human bodies is not less than the number threshold, exiting;
step S4, performing semantic segmentation on the stair picture according to a pre-generated stair semantic segmentation model to obtain the stair area in the stair picture, for subsequent judgment of the safety condition on the stairs.
As a preferable aspect of the present invention, the number threshold is 2.
As a preferred scheme of the present invention, the method further includes a process of generating the stair semantic segmentation model in advance, specifically including:
step A1, collecting stair scene pictures;
step A2, respectively labeling the stair areas in each stair scene picture to obtain corresponding stair labeled pictures;
step A3, grouping each stair marking picture to obtain a training set and a test set;
step A4, training according to each stair marking picture in the training set to obtain a stair semantic segmentation model;
step A5, verifying the stair semantic segmentation model according to each stair labeling picture in the test set, and storing the semantic segmentation model meeting the preset verification standard.
As a preferred aspect of the present invention, the stair scene pictures include stair scene pictures without pedestrians and stair scene pictures with pedestrians.
As a preferable scheme of the present invention, in the stair scene pictures with pedestrians, the number of pedestrians is less than 2.
As a preferable aspect of the present invention, in the step S4, in the process of performing semantic segmentation on the stair picture, the semantic segmentation model performs stair region feature extraction by using convolution with offset.
As a preferable scheme of the present invention, in step A4, in the training process of the stair semantic segmentation model, a batch normalization manner is adopted to accelerate the convergence rate of the stair semantic segmentation model.
An automatic identification system for a stair area, which applies any one of the above automatic identification methods for a stair area, the automatic identification system for a stair area specifically comprises:
the data acquisition module is used for acquiring a stair video in an application scene;
the human body identification module is connected with the data acquisition module and used for acquiring the stair pictures of each frame in the stair video and respectively identifying the human bodies of the stair pictures to obtain the number of the human bodies in each stair picture;
the data comparison module is connected with the human body identification module and used for comparing the number of the human bodies with a preset number threshold value and outputting a corresponding comparison result when the number of the human bodies is smaller than the number threshold value;
and the stair identification module is connected with the data comparison module and is used for performing semantic segmentation on the stair picture according to the comparison result and a pre-generated stair semantic segmentation model to obtain the stair area in the stair picture, for subsequent judgment of the safety condition on the stairs.
As a preferred embodiment of the present invention, the system further includes a model generation module, connected to the stair identification module, where the model generation module specifically includes:
the data acquisition unit is used for acquiring stair scene pictures;
the image marking unit is connected with the data acquisition unit and is used for marking the region where the stairs are located in each stair scene image respectively to obtain corresponding stair marking images;
the picture grouping unit is connected with the picture marking unit and is used for grouping each stair marking picture to obtain a training set and a test set;
the model training unit is connected with the picture grouping unit and used for training according to each stair labeling picture in the training set to obtain a stair semantic segmentation model;
and the model verification unit is respectively connected with the picture grouping unit and the model training unit and is used for verifying the stair semantic segmentation model according to each stair labeling picture in the test set and storing the semantic segmentation model meeting the preset verification standard.
The invention has the beneficial effects that:
1) the automatic identification of the stair area replaces traditional manual calibration, improving identification efficiency and making the method better suited to high-traffic scenes such as shopping malls and subways;
2) in the stair semantic segmentation network, image features of the stair area are extracted with convolution with offsets, which improves mIoU (mean intersection over union of the predicted and actual regions) by 2% and yields more accurate identification results;
3) the features are normalized in groups during training, which significantly improves the convergence speed of model training.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings required to be used in the embodiments of the present invention will be briefly described below. It is obvious that the drawings described below are only some embodiments of the invention, and that for a person skilled in the art, other drawings can be derived from them without inventive effort.
Fig. 1 is a schematic flow chart of an automatic identification method for a stair area according to an embodiment of the present invention.
Fig. 2 is a flow chart of an automatic identification method for a stair area according to an embodiment of the present invention.
Fig. 3 is a process of generating a stair semantic segmentation model in advance according to an embodiment of the present invention.
Fig. 4 is a schematic structural diagram of an automatic identification system for a stair area according to an embodiment of the present invention.
Detailed Description
The technical scheme of the invention is further explained by the specific implementation mode in combination with the attached drawings.
The drawings are for illustration only, are not drawn to actual scale, and are not to be construed as limiting the present patent; to better illustrate the embodiments of the present invention, some parts of the drawings may be omitted, enlarged or reduced and do not represent the size of an actual product; it will be understood by those skilled in the art that certain well-known structures and their descriptions may be omitted from the drawings.
The same or similar reference numerals in the drawings of the embodiments of the present invention correspond to the same or similar components; in the description of the present invention, it should be understood that if the terms "upper", "lower", "left", "right", "inner", "outer", etc. are used for indicating the orientation or positional relationship based on the orientation or positional relationship shown in the drawings, it is only for convenience of description and simplification of description, but it is not indicated or implied that the referred device or element must have a specific orientation, be constructed in a specific orientation and be operated, and therefore, the terms describing the positional relationship in the drawings are only used for illustrative purposes and are not to be construed as limitations of the present patent, and the specific meanings of the terms may be understood by those skilled in the art according to specific situations.
In the description of the present invention, unless otherwise explicitly specified or limited, the term "connected" or the like, if appearing to indicate a connection relationship between the components, is to be understood broadly, for example, as being fixed or detachable or integral; can be mechanically or electrically connected; they may be directly connected or indirectly connected through intervening media, or may be connected through one or more other components or may be in an interactive relationship with one another. The specific meanings of the above terms in the present invention can be understood in specific cases to those skilled in the art.
Based on the technical problems in the prior art, the invention provides an automatic identification method of a stair area, which specifically comprises the following steps as shown in fig. 1:
step S1, obtaining a stair video in an application scene;
step S2, obtaining a stair picture of each frame in the stair video, and respectively carrying out human body identification on each stair picture to obtain the number of human bodies in each stair picture;
step S3, comparing the number of human bodies with a preset number threshold:
if the number of the human bodies is less than the number threshold, turning to step S4;
if the number of the human bodies is not less than the number threshold, exiting;
step S4, performing semantic segmentation on the stair picture according to a pre-generated stair semantic segmentation model to obtain the stair area in the stair picture, for subsequent judgment of the safety condition on the stairs.
Specifically, in this embodiment, the stair area is automatically identified by semantic segmentation, achieving pixel-level segmentation. The method is suitable for segmenting scenes such as stairs and escalators, avoids region errors when the camera shakes or is repositioned, adaptively adjusts the stair region, and has good robustness. By treating images containing a small number of people as part of the stair training data, the model can segment the stair region well even when a few people partially occlude the stairs or escalator, making it applicable to high-traffic scenes and ensuring that the stair region is updated in real time. Stair detection typically uses a rectangular bounding box, with features inside the box extracted by convolution; for a large irregular target such as a staircase, this mixes in more background information and demands more data. In the present method, the convolution sampling grid is given an offset, so that more effective foreground features are extracted, which benefits the subsequent segmentation task. Meanwhile, the invention provides a method to accelerate training convergence, optimizing training speed from two angles: group normalization and convolution weight normalization.
As shown in fig. 2, the automatic identification process for the stair area comprises: first, stair pictures in scenes such as subways and shopping malls are collected, mainly pictures with few people; in this embodiment, "few people" preferably means fewer than two pedestrians on the stairs. The collected pictures of the stair area are labeled, and the labeled pictures are divided into a training set and a test set. A stair semantic segmentation model is built and trained, then verified on the test set and saved. The stair video of each scene is then accessed, head detection is performed on each frame, and when few heads are detected (preferably fewer than two), semantic segmentation is performed to locate the stair area, in preparation for the subsequent judgment of the safety condition on the stairs.
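For illustration, a minimal Python sketch of this per-frame flow is given below. The helper names `detect_heads` and `stair_seg_model`, the OpenCV video loop and the threshold constant are assumptions made for the sketch only and are not the patent's actual implementation.

```python
# Illustrative sketch of the per-frame pipeline described above (steps S1-S4).
# `detect_heads` and `stair_seg_model` are hypothetical, pre-trained components.
import cv2

HEAD_COUNT_THRESHOLD = 2  # segmentation runs only when fewer than 2 people are visible

def update_stair_region(video_path, detect_heads, stair_seg_model):
    """Return the latest stair-region mask found in frames with few pedestrians."""
    cap = cv2.VideoCapture(video_path)          # step S1: access the stair video
    stair_mask = None
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        heads = detect_heads(frame)             # step S2: head / human body detection
        if len(heads) < HEAD_COUNT_THRESHOLD:   # step S3: compare against the threshold
            # step S4: semantic segmentation yields a pixel-level stair mask
            stair_mask = stair_seg_model(frame)
    cap.release()
    return stair_mask
```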
As a preferred aspect of the present invention, the number threshold is 2.
As a preferred embodiment of the present invention, the method further includes a process of generating a stair semantic segmentation model in advance, as shown in fig. 3, specifically including:
step A1, collecting stair scene pictures;
step A2, respectively labeling the stair areas in each stair scene picture to obtain corresponding stair labeled pictures;
step A3, grouping the stair labeling pictures to obtain a training set and a test set;
step A4, training according to each stair label picture in the training set to obtain a stair semantic segmentation model;
step A5, verifying the stair semantic segmentation model according to each stair labeling picture in the test set, and storing the semantic segmentation model meeting the preset verification standard.
Specifically, in this embodiment, a small number of people are treated as part of the stair/escalator region and included in the segmentation task, which solves the problem of updating the stair region in real time in scenes with heavy pedestrian traffic and gives the method good adaptability to other scenes.
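As a rough illustration of steps A3 to A5, the following Python sketch splits labeled stair pictures into a training set and a test set, trains a segmentation model, verifies it by mIoU and saves it only if it meets a verification standard. The model interface, the 0.8/0.2 split, the mIoU threshold of 0.7 and the optimizer settings are illustrative assumptions, not values specified by the patent.

```python
# Illustrative sketch of steps A3-A5: group labeled pictures, train, verify, save.
import random
import torch

def evaluate_miou(model, test_set, num_classes=2):
    """Mean intersection-over-union over the test set (stair vs. background)."""
    ious = []
    with torch.no_grad():
        for image, mask in test_set:
            pred = model(image.unsqueeze(0)).argmax(dim=1).squeeze(0)
            for c in range(num_classes):
                inter = ((pred == c) & (mask == c)).sum().item()
                union = ((pred == c) | (mask == c)).sum().item()
                if union > 0:
                    ious.append(inter / union)
    return sum(ious) / max(len(ious), 1)

def train_stair_seg_model(samples, model, epochs=50, miou_threshold=0.7):
    """`samples`: list of (image_tensor, mask_tensor) pairs, masks as LongTensor class maps."""
    random.shuffle(samples)                      # step A3: split into train / test sets
    split = int(0.8 * len(samples))
    train_set, test_set = samples[:split], samples[split:]

    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = torch.nn.CrossEntropyLoss()
    for _ in range(epochs):                      # step A4: train on the training set
        for image, mask in train_set:
            optimizer.zero_grad()
            loss = loss_fn(model(image.unsqueeze(0)), mask.unsqueeze(0))
            loss.backward()
            optimizer.step()

    miou = evaluate_miou(model, test_set)        # step A5: verify on the test set
    if miou >= miou_threshold:                   # keep only models meeting the standard
        torch.save(model.state_dict(), "stair_seg_model.pt")
    return miou
```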
As a preferred scheme of the present invention, the stair scene pictures include stair scene pictures without pedestrians and stair scene pictures with pedestrians.
As a preferable scheme of the present invention, in the stair scene pictures with pedestrians, the number of pedestrians is less than 2.
As a preferred embodiment of the present invention, in step S4, in the process of performing semantic segmentation on the stair image, the semantic segmentation model performs stair region feature extraction by using convolution with offset.
In particular, in this embodiment, the stairs in a segmentation scene appear deformed to different degrees depending on the camera's installation angle. Conventional convolution, including pooling, samples over rectangular windows, which is a severe limitation when modeling an irregular target such as a staircase: there is no mechanism to account for deformation, a fixed box is too rigid for a deformable object, and different positions in the same feature-map layer may correspond to objects of different shapes, yet all are computed with the same convolution.
To solve this problem, in this embodiment the convolution operation is not confined to a rigid 3 × 3 rectangular frame; instead, an offset is added to each sampling position.
That is, the standard convolution is computed as:

y(p_0) = \sum_{p_n \in R} w(p_n) \cdot x(p_0 + p_n)

where x and y denote the input feature map and the output feature map of the convolution operation, p_0 and p_n denote positions in the feature map (p_0 the output position and p_n a position in the regular sampling grid R), and w(p_n) denotes the weight corresponding to position p_n.
The adjusted convolution with offset is:

y(p_0) = \sum_{p_n \in R} w(p_n) \cdot x(p_0 + p_n + \Delta p_n)

where x, y, p_0, p_n and w(p_n) are defined as above, and \Delta p_n denotes the offset added to sampling position p_n.
Stairs are large irregular shapes whose appearance is easily affected by the camera's installation position; adding an offset to the convolution enlarges the receptive field over the stairs and strengthens the model's ability to handle object deformation and scale.
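A minimal sketch of such a convolution with learned offsets is shown below, assuming a PyTorch implementation in which a small convolution predicts the offsets Δp_n and torchvision's DeformConv2d samples the input at the shifted positions. The channel counts and kernel size are illustrative assumptions; this is not the patent's exact network.

```python
# Sketch of "convolution with offset": a small layer predicts per-position
# offsets Δp_n, and DeformConv2d samples the input at the shifted positions.
import torch
import torch.nn as nn
from torchvision.ops import DeformConv2d

class OffsetConvBlock(nn.Module):
    def __init__(self, in_ch=64, out_ch=64, k=3):
        super().__init__()
        # 2 offsets (dx, dy) per kernel position, per output location
        self.offset_pred = nn.Conv2d(in_ch, 2 * k * k, kernel_size=k, padding=k // 2)
        self.deform_conv = DeformConv2d(in_ch, out_ch, kernel_size=k, padding=k // 2)

    def forward(self, x):
        offsets = self.offset_pred(x)            # Δp_n in the adjusted formula above
        return self.deform_conv(x, offsets)      # sampling at p_0 + p_n + Δp_n

features = torch.randn(1, 64, 128, 128)          # example feature map
out = OffsetConvBlock()(features)                # same spatial size, offset sampling
```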
As a preferred scheme of the present invention, in step A4, in the training process of the stair semantic segmentation model, a batch normalization manner is adopted to accelerate the convergence rate of the stair semantic segmentation model.
Specifically, in this embodiment, during training of the stair semantic segmentation model, hardware and algorithm constraints often force the batch size to be set to only 1, whereas BN (batch normalization) generally assumes a batch size around 32; as a result, BN is ineffective for stair semantic segmentation. BN is a normalization method commonly used in deep learning and plays an important role in accelerating training and convergence.
To solve this technical problem, in this embodiment the image channels are divided into several groups and each group is normalized separately: the dimension of the feature map is first changed from [N, C, H, W] to [N, G, C//G, H, W], and normalization is performed over the [C//G, H, W] dimensions, which effectively accelerates the convergence of the stair semantic segmentation model. Here N denotes the number of pictures in the feature map (the batch dimension), G the number of channel groups, C the number of image channels, H the picture height of the feature map, and W the picture width of the feature map.
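A minimal sketch of this group normalization, assuming a PyTorch feature map, is shown below; the group count and feature-map shape are illustrative assumptions. torch.nn.GroupNorm provides the same normalization with learnable affine parameters.

```python
# Sketch of the group normalization described above: channels are split into
# G groups and each group is normalized over its [C//G, H, W] elements.
import torch

def group_norm(feature, num_groups, eps=1e-5):
    n, c, h, w = feature.shape                              # [N, C, H, W]
    x = feature.view(n, num_groups, c // num_groups, h, w)  # [N, G, C//G, H, W]
    mean = x.mean(dim=(2, 3, 4), keepdim=True)              # stats over [C//G, H, W]
    var = x.var(dim=(2, 3, 4), keepdim=True, unbiased=False)
    x = (x - mean) / torch.sqrt(var + eps)
    return x.view(n, c, h, w)

feature = torch.randn(1, 64, 128, 128)   # works even with a batch size of 1
out = group_norm(feature, num_groups=8)
# torch.nn.GroupNorm(8, 64) gives the same normalization plus learnable affine terms.
```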
Further, in this embodiment, it is noted that common normalization methods act on the input of the activation function, normalizing that input in various ways; the present invention instead processes the convolution weights directly, which has a more direct influence on training speed. Specifically, the convolution weights are processed as:

\hat{W}_{i,j} = \frac{W_{i,j} - \mu_{W_{i,\cdot}}}{\sigma_{W_{i,\cdot}} + \epsilon}

where

\mu_{W_{i,\cdot}} = \frac{1}{I} \sum_{j=1}^{I} W_{i,j}

and

\sigma_{W_{i,\cdot}} = \sqrt{\frac{1}{I} \sum_{j=1}^{I} \left( W_{i,j} - \mu_{W_{i,\cdot}} \right)^2}

Here \hat{W}_{i,j} denotes the processed convolution weight, W_{i,j} denotes the convolution weight in the i-th row and j-th column, \mu_{W_{i,\cdot}} denotes the mean of the convolution weights in row i, \sigma_{W_{i,\cdot}} denotes their standard deviation, \epsilon denotes an offset value, and I denotes the product of the number of channels and the number of convolution kernels.
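A minimal sketch of this convolution-weight processing, assuming a PyTorch convolution layer whose weights are standardized per filter before each forward pass, is given below; the layer sizes and the ε value are illustrative assumptions, not the patent's exact configuration.

```python
# Sketch of the weight processing above: each filter (row i of the flattened
# weight) is standardized by its own mean and standard deviation before the
# convolution is applied.
import torch
import torch.nn as nn
import torch.nn.functional as F

class WSConv2d(nn.Conv2d):
    def forward(self, x, eps=1e-5):
        w = self.weight                                   # [out_ch, in_ch, kH, kW]
        flat = w.view(w.size(0), -1)                      # one row per output filter
        mean = flat.mean(dim=1, keepdim=True)             # μ_{W_i,·}
        std = flat.std(dim=1, keepdim=True)               # σ_{W_i,·}
        w_hat = ((flat - mean) / (std + eps)).view_as(w)  # standardized weights Ŵ
        return F.conv2d(x, w_hat, self.bias, self.stride,
                        self.padding, self.dilation, self.groups)

conv = WSConv2d(64, 64, kernel_size=3, padding=1)
out = conv(torch.randn(1, 64, 128, 128))
```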
An automatic identification system for a stair area is provided, which applies any one of the above automatic identification methods for a stair area, as shown in fig. 4, and specifically includes:
the data acquisition module 1 is used for acquiring a stair video in an application scene;
the human body identification module 2 is connected with the data acquisition module 1 and is used for acquiring the stair pictures of each frame in the stair video and respectively carrying out human body identification on each stair picture to obtain the number of human bodies in each stair picture;
the data comparison module 3 is connected with the human body identification module 2 and used for comparing the number of human bodies with a preset number threshold value and outputting a corresponding comparison result when the number of human bodies is smaller than the number threshold value;
and the stair identification module 4 is connected with the data comparison module 3 and is used for performing semantic segmentation on the stair picture according to the comparison result and the pre-generated stair semantic segmentation model to obtain the stair area in the stair picture, for subsequent judgment of the safety condition on the stairs.
As a preferred scheme of the present invention, the system further includes a model generation module 5 connected to the stair identification module 4, and the model generation module 5 specifically includes:
the data acquisition unit 51 is used for acquiring stair scene pictures;
the image labeling unit 52 is connected to the data acquisition unit 51, and is configured to label the areas where the stairs in each stair scene image are located, respectively, to obtain corresponding stair labeling images;
the picture grouping unit 53 is connected with the picture marking unit 52 and is used for grouping the stair marking pictures to obtain a training set and a test set;
the model training unit 54 is connected with the picture grouping unit 53 and used for training according to each stair labeling picture in the training set to obtain a stair semantic segmentation model;
and the model verification unit 55 is respectively connected with the picture grouping unit 53 and the model training unit 54, and is used for verifying the stair semantic segmentation model according to each stair labeling picture in the test set and storing the semantic segmentation model meeting the preset verification standard.
It should be understood that the above-described embodiments are merely preferred embodiments of the invention and the technical principles applied thereto. It will be understood by those skilled in the art that various modifications, equivalents, changes, and the like can be made to the present invention. However, such variations are within the scope of the invention as long as they do not depart from the spirit of the invention. In addition, certain terms used in the specification and claims of the present application are not limiting, but are used merely for convenience of description.

Claims (9)

1. An automatic identification method for a stair area is characterized by comprising the following steps:
step S1, obtaining a stair video in an application scene;
step S2, obtaining a stair picture of each frame in the stair video, and respectively carrying out human body identification on each stair picture to obtain the number of human bodies in each stair picture;
step S3, comparing the number of human bodies with a preset number threshold:
if the number of the human bodies is smaller than the number threshold, turning to step S4;
if the number of the human bodies is not less than the number threshold, exiting;
step S4, performing semantic segmentation on the stair picture according to a pre-generated stair semantic segmentation model to obtain the stair area in the stair picture, for subsequent judgment of the safety condition on the stairs.
2. The method of claim 1, wherein the quantity threshold is 2.
3. The method for automatically identifying a stair area according to claim 1, further comprising a process of pre-generating the stair semantic segmentation model, specifically comprising:
step A1, collecting stair scene pictures;
step A2, respectively labeling the stair areas in each stair scene picture to obtain corresponding stair labeled pictures;
step A3, grouping each stair marking picture to obtain a training set and a test set;
step A4, training according to each stair marking picture in the training set to obtain a stair semantic segmentation model;
step A5, verifying the stair semantic segmentation model according to each stair labeling picture in the test set, and storing the semantic segmentation model meeting the preset verification standard.
4. The method of claim 3, wherein the stair scene pictures comprise the stair scene picture without a pedestrian and the stair scene picture with a pedestrian.
5. The method according to claim 4, wherein the number of pedestrians in the stair scene picture with pedestrians is less than 2.
6. The method according to claim 3, wherein in the step S4, in the process of semantic segmentation of the stair picture, the semantic segmentation model performs stair region feature extraction by using convolution with offset.
7. The method for automatically identifying a stair area according to claim 6, wherein in the step A4, a batch normalization method is adopted to accelerate a convergence rate of the stair semantic segmentation model during training of the stair semantic segmentation model.
8. An automatic identification system for a stair area, which is characterized in that the automatic identification method for a stair area according to any one of claims 1 to 7 is applied, and the automatic identification system for a stair area specifically comprises:
the data acquisition module is used for acquiring a stair video in an application scene;
the human body identification module is connected with the data acquisition module and used for acquiring the stair pictures of each frame in the stair video and respectively identifying the human bodies of the stair pictures to obtain the number of the human bodies in each stair picture;
the data comparison module is connected with the human body identification module and used for comparing the number of the human bodies with a preset number threshold value and outputting a corresponding comparison result when the number of the human bodies is smaller than the number threshold value;
and the stair identification module is connected with the data comparison module and is used for performing semantic segmentation on the stair picture according to the comparison result and a pre-generated stair semantic segmentation model to obtain the stair area in the stair picture, for subsequent judgment of the safety condition on the stairs.
9. The system according to claim 8, further comprising a model generation module coupled to the stair identification module, wherein the model generation module comprises:
the data acquisition unit is used for acquiring stair scene pictures;
the image marking unit is connected with the data acquisition unit and is used for marking the region where the stairs are located in each stair scene image respectively to obtain corresponding stair marking images;
the picture grouping unit is connected with the picture marking unit and is used for grouping each stair marking picture to obtain a training set and a test set;
the model training unit is connected with the picture grouping unit and used for training according to each stair labeling picture in the training set to obtain a stair semantic segmentation model;
and the model verification unit is respectively connected with the picture grouping unit and the model training unit and is used for verifying the stair semantic segmentation model according to each stair labeling picture in the test set and storing the semantic segmentation model meeting the preset verification standard.
CN202010152534.1A 2020-03-06 2020-03-06 Automatic identification method and system for stair area Active CN111368749B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010152534.1A CN111368749B (en) 2020-03-06 2020-03-06 Automatic identification method and system for stair area

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202010152534.1A CN111368749B (en) 2020-03-06 2020-03-06 Automatic identification method and system for stair area

Publications (2)

Publication Number Publication Date
CN111368749A (en) 2020-07-03
CN111368749B CN111368749B (en) 2023-06-13

Family

ID=71211779

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010152534.1A Active CN111368749B (en) 2020-03-06 2020-03-06 Automatic identification method and system for stair area

Country Status (1)

Country Link
CN (1) CN111368749B (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529903A (en) * 2021-02-03 2021-03-19 德鲁动力科技(成都)有限公司 Stair height and width visual detection method and device and robot dog
CN113469059A (en) * 2021-07-02 2021-10-01 智能移动机器人(中山)研究院 Stair identification method based on binocular vision

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683770A (en) * 2015-03-12 2015-06-03 杨明 Campus teaching building security monitoring system and method based on image recognition
WO2018125914A1 (en) * 2016-12-31 2018-07-05 Vasuyantra Corp., A Delaware Corporation Method and device for visually impaired assistance
CN109766868A (en) * 2019-01-23 2019-05-17 哈尔滨工业大学 A kind of real scene based on body critical point detection blocks pedestrian detection network and its detection method
CN110207704A (en) * 2019-05-21 2019-09-06 南京航空航天大学 A kind of pedestrian navigation method based on the identification of architectural stair scene intelligent
CN110705366A (en) * 2019-09-07 2020-01-17 创新奇智(广州)科技有限公司 Real-time human head detection method based on stair scene
KR102080532B1 (en) * 2019-09-17 2020-02-24 영남대학교 산학협력단 Apparatus and method for floor identification

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN104683770A (en) * 2015-03-12 2015-06-03 杨明 Campus teaching building security monitoring system and method based on image recognition
WO2018125914A1 (en) * 2016-12-31 2018-07-05 Vasuyantra Corp., A Delaware Corporation Method and device for visually impaired assistance
CN109766868A (en) * 2019-01-23 2019-05-17 哈尔滨工业大学 A kind of real scene based on body critical point detection blocks pedestrian detection network and its detection method
CN110207704A (en) * 2019-05-21 2019-09-06 南京航空航天大学 A kind of pedestrian navigation method based on the identification of architectural stair scene intelligent
CN110705366A (en) * 2019-09-07 2020-01-17 创新奇智(广州)科技有限公司 Real-time human head detection method based on stair scene
KR102080532B1 (en) * 2019-09-17 2020-02-24 영남대학교 산학협력단 Apparatus and method for floor identification

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
顾昊 et al.: "Stair recognition based on convolutional neural networks" (基于卷积神经网络的楼梯识别), 《图形图像》 *

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112529903A (en) * 2021-02-03 2021-03-19 德鲁动力科技(成都)有限公司 Stair height and width visual detection method and device and robot dog
CN113469059A (en) * 2021-07-02 2021-10-01 智能移动机器人(中山)研究院 Stair identification method based on binocular vision

Also Published As

Publication number Publication date
CN111368749B (en) 2023-06-13

Similar Documents

Publication Publication Date Title
CN108446630B (en) Intelligent monitoring method for airport runway, application server and computer storage medium
CN111709310B (en) Gesture tracking and recognition method based on deep learning
CN110765964B (en) Method for detecting abnormal behaviors in elevator car based on computer vision
Sidla et al. Pedestrian detection and tracking for counting applications in crowded situations
CN105740945B (en) A kind of people counting method based on video analysis
CN105260705B (en) A kind of driver's making and receiving calls behavioral value method suitable under multi-pose
CN111241975B (en) Face recognition detection method and system based on mobile terminal edge calculation
CN111126399A (en) Image detection method, device and equipment and readable storage medium
CN108288047A (en) A kind of pedestrian/vehicle checking method
CN111368749B (en) Automatic identification method and system for stair area
CN105809716B (en) Foreground extraction method integrating superpixel and three-dimensional self-organizing background subtraction method
CN106815560A (en) It is a kind of to be applied to the face identification method that self adaptation drives seat
KR101246120B1 (en) A system for recognizing license plate using both images taken from front and back faces of vehicle
CN110633671A (en) Bus passenger flow real-time statistical method based on depth image
CN103065163B (en) A kind of fast target based on static images detects recognition system and method
CN111368682A (en) Method and system for detecting and identifying station caption based on faster RCNN
CN111553214A (en) Method and system for detecting smoking behavior of driver
CN115303901A (en) Elevator traffic flow identification method based on computer vision
CN112766273A (en) License plate recognition method
CN112836667B (en) Method for judging falling and reverse running of passengers going upstairs escalator
CN108537815B (en) Video image foreground segmentation method and device
CN111626107B (en) Humanoid contour analysis and extraction method oriented to smart home scene
CN112115737B (en) Vehicle orientation determining method and device and vehicle-mounted terminal
JP5080416B2 (en) Image processing apparatus for detecting an image of a detection object from an input image
CN109726750A (en) A kind of passenger falls down detection device and its detection method and passenger conveying appliance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant