CN116486324B - Subway seat trampling behavior detection method, device, equipment and storage medium


Info

Publication number
CN116486324B
Authority
CN
China
Prior art keywords
passenger
subway
subway seat
side line
seat side
Prior art date
Legal status
Active
Application number
CN202310217180.8A
Other languages
Chinese (zh)
Other versions
CN116486324A (en)
Inventor
陈磊
滕爱
Current Assignee
Shenzhen Qichang Technology Co., Ltd.
Original Assignee
Shenzhen Qichang Technology Co., Ltd.
Priority date
Filing date
Publication date
Application filed by Shenzhen Qichang Technology Co., Ltd.
Priority to CN202310217180.8A
Publication of CN116486324A
Application granted
Publication of CN116486324B
Legal status: Active


Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/50: Context or environment of the image
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 20/53: Recognition of crowd images, e.g. recognition of crowd congestion
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 10/00: Arrangements for image or video recognition or understanding
    • G06V 10/70: Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V 10/82: Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/41: Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 20/00: Scenes; Scene-specific elements
    • G06V 20/40: Scenes; Scene-specific elements in video content
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06V: IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V 40/00: Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V 40/20: Movements or behaviour, e.g. gesture recognition

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Human Computer Interaction (AREA)
  • Social Psychology (AREA)
  • Psychiatry (AREA)
  • Computational Linguistics (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • Databases & Information Systems (AREA)
  • Medical Informatics (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a subway seat trampling behavior detection method, device, equipment and storage medium, relating to the technical field of subway operation and maintenance management. The method first identifies in real time a subway seat side line identification result and a passenger identification result from live video data collected by a monitoring camera in a subway carriage; then, for each identified passenger, checks in real time against the subway seat side line identification result whether an upward line-crossing situation exists; next, for each line-crossing passenger, identifies in real time from the corresponding passenger image whether the passenger is in a held state; and finally, when a line-crossing passenger is found to be in a non-held state, determines that the corresponding passenger is trampling on a subway seat and triggers a corresponding dissuading action. Video image detection and dissuasion of subway seat trampling behavior can thus be carried out automatically, ensuring that the trampling behavior is discovered and dissuaded in time, improving the deterrent effect and facilitating the daily maintenance of subway environmental sanitation.

Description

Subway seat trampling behavior detection method, device, equipment and storage medium
Technical Field
The invention belongs to the technical field of subway operation and maintenance management, and particularly relates to a subway seat trampling behavior detection method, device, equipment and storage medium.
Background
The subway, as an important means of transportation in modern cities, carries an enormous passenger flow every day. Video monitoring equipment is widely deployed in subways and is mainly used for remote monitoring and data retention in subway carriages, station platforms and halls; identifying various abnormal events from the monitoring images is a common and low-cost management means.
Because the subway is used at high frequency during the day, its environmental sanitation mainly depends on the self-discipline of passengers; if an individual engages in uncivilized behavior while riding the subway, the sanitation of the subway and the experience of other passengers are affected. Trampling on a subway seat, for example, is considered a clearly uncivilized behavior because it soils the seat. At present, however, subway seat trampling behavior is curbed mainly by manual supervision and manual dissuasion, and corresponding video image detection means and automatic dissuasion means are lacking. As a result, the behavior is discovered late, is not dissuaded in time and is poorly deterred, which is not conducive to the daily maintenance of subway environmental sanitation.
Disclosure of Invention
The invention aims to provide a subway seat trampling behavior detection method, a subway seat trampling behavior detection device, computer equipment and a computer-readable storage medium, so as to solve the problems that existing handling of subway seat trampling behavior lacks corresponding video image detection means and automatic dissuasion means, causing the behavior to be discovered late, dissuaded too slowly and poorly deterred.
In order to achieve the above purpose, the present invention adopts the following technical scheme:
In a first aspect, a subway seat trampling behavior detection method is provided, including:
acquiring live video data collected in real time by a monitoring camera in a subway carriage;
performing subway seat side line identification real-time processing on the live video data by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result;
performing passenger identification real-time processing on the live video data by adopting a target detection algorithm to obtain a passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger in the at least one passenger;
judging, for each passenger in the at least one passenger, in real time according to the subway seat side line identification result, whether the bottom edge center of the corresponding human body mark frame passes upward over the subway seat side line; if so, taking the corresponding passenger as a line-crossing passenger, and intercepting a corresponding passenger image from the live video data according to the corresponding human body mark frame;
inputting, for each line-crossing passenger in the at least one passenger, the corresponding passenger image into a pre-trained passenger held-state recognition model based on a convolutional neural network, and outputting a corresponding passenger held-state recognition result;
and for each line-crossing passenger, if the corresponding passenger held-state recognition result indicates that the corresponding passenger is in a non-held state, determining that the corresponding passenger is trampling on the subway seat, and triggering a corresponding dissuading action.
Based on the above summary, a data processing scheme is provided for automatically identifying subway seat trampling behavior from monitoring images inside a subway carriage: first, a subway seat side line identification result and a passenger identification result are obtained by real-time identification from live video data collected by the monitoring camera in the subway carriage; then, for each identified passenger, whether an upward line-crossing situation exists is identified in real time according to the subway seat side line identification result; next, for each line-crossing passenger, the held state of the passenger is identified in real time from the corresponding passenger image; finally, when a line-crossing passenger is found to be in a non-held state, it is determined that the corresponding passenger is trampling on the subway seat, and a corresponding dissuading action is triggered. Video image detection and dissuasion of subway seat trampling behavior can thus be carried out automatically, ensuring that the trampling behavior is discovered and dissuaded in time, improving the deterrent effect, facilitating the daily maintenance of subway environmental sanitation, and facilitating practical application and popularization.
In one possible design, performing subway seat side line identification real-time processing on the live video data by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result includes:
converting live video images in the live video data into grayscale images in real time;
performing Gaussian filtering on the grayscale image in real time to obtain a denoised image;
performing edge detection on the denoised image in real time to obtain an edge detection result image containing edge pixel points, wherein the edge pixel points serve as subway seat side line pixel points;
performing mask processing on the edge detection result image in real time according to a preset region of interest to obtain a new edge detection result image containing only the edge pixel points within the region of interest;
performing Hough transform processing on the new edge detection result image in real time to obtain at least one straight line segment forming the subway seat side line;
fitting in real time according to the average slope and average intercept of the at least one straight line segment to obtain a continuous subway seat side line;
loading the subway seat side line into the live video image in real time to obtain the subway seat side line identification result.
In one possible design, performing subway seat side line identification real-time processing on the live video data by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result includes:
performing passenger statistics processing on the live video images in the live video data to obtain a passenger statistical result;
judging whether the passenger statistical result is less than or equal to a preset passenger number threshold;
if so, identifying a subway seat side line from the live video image and loading it into the live video image in real time to obtain the subway seat side line identification result; otherwise, loading a subway seat side line obtained from a previous video image into the live video image in real time to obtain the subway seat side line identification result, wherein the previous video image is also located in the live video data and refers to the most recent video image that precedes the live video image on the time axis and whose corresponding passenger statistical result is less than or equal to the passenger number threshold.
In one possible design, performing passenger identification real-time processing on the live video data by using a target detection algorithm to obtain a passenger identification result includes:
importing the live video image in the live video data into a passenger identification model which is based on the YOLOv4 target detection algorithm and has been trained in advance to obtain the passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger in the at least one passenger.
In one possible design, for each passenger in the at least one passenger, judging in real time according to the subway seat side line identification result whether the bottom edge center of the corresponding human body mark frame passes upward over the subway seat side line includes:
determining, for each passenger in the at least one passenger, the corresponding bottom edge center coordinates in real time according to the corresponding human body mark frame;
judging, for each passenger, in real time according to the subway seat side line identification result, whether the corresponding bottom edge center coordinates are located on the upper side of the identified subway seat side line; if so, judging that the bottom edge center of the corresponding human body mark frame has passed upward over the subway seat side line; otherwise, judging that it has not.
In one possible design, inputting, for each line-crossing passenger in the at least one passenger, the corresponding passenger image into a pre-trained passenger held-state recognition model based on a convolutional neural network and outputting a corresponding passenger held-state recognition result includes:
inputting, for each line-crossing passenger in the at least one passenger, the corresponding passenger image into a pre-trained passenger held-state recognition model based on the deep residual network ResNet_34, and outputting a corresponding passenger held-state recognition result.
In one possible design, triggering a corresponding dissuading action for a certain line-crossing passenger who has been determined to be trampling on a subway seat includes:
determining the position of the certain line-crossing passenger in the subway carriage according to the position of the human body mark frame of the certain line-crossing passenger in the live video image of the live video data;
finding, according to pre-stored subway voice broadcasting device arrangement data, the subway voice broadcasting device closest to the position of the certain line-crossing passenger in the subway carriage, and, when the subway voice broadcasting device is idle, controlling it to play an audio file dissuading the subway seat trampling behavior.
In a second aspect, a subway seat trampling behavior detection device is provided, comprising a data acquisition module, a subway seat side line identification module, a passenger identification module, a line-crossing judgment module, a passenger state recognition module and a behavior confirmation module;
the data acquisition module is used for acquiring live video data collected in real time by a monitoring camera in a subway carriage;
the subway seat side line identification module is communicatively connected with the data acquisition module and is used for performing subway seat side line identification real-time processing on the live video data by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result;
the passenger identification module is communicatively connected with the data acquisition module and is used for performing passenger identification real-time processing on the live video data by adopting a target detection algorithm to obtain a passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger in the at least one passenger;
the line-crossing judgment module is communicatively connected with the subway seat side line identification module and the passenger identification module respectively, and is used for judging, for each passenger in the at least one passenger, in real time according to the subway seat side line identification result, whether the bottom edge center of the corresponding human body mark frame passes upward over the subway seat side line; if so, taking the corresponding passenger as a line-crossing passenger and intercepting a corresponding passenger image from the live video data according to the corresponding human body mark frame;
the passenger state recognition module is communicatively connected with the line-crossing judgment module and is used for inputting, for each line-crossing passenger in the at least one passenger, the corresponding passenger image into a pre-trained passenger held-state recognition model based on a convolutional neural network, and outputting a corresponding passenger held-state recognition result;
the behavior confirmation module is communicatively connected with the passenger state recognition module and is used for determining, for each line-crossing passenger, that the corresponding passenger is trampling on the subway seat and triggering a corresponding dissuading action if the corresponding passenger held-state recognition result indicates that the corresponding passenger is in a non-held state.
In a third aspect, the present invention provides a computer device comprising a memory, a processor and a transceiver which are communicatively connected in sequence, wherein the memory is configured to store a computer program, the transceiver is configured to send and receive messages, and the processor is configured to read the computer program and execute the subway seat trampling behavior detection method according to the first aspect or any possible design of the first aspect.
In a fourth aspect, the present invention provides a computer-readable storage medium having instructions stored thereon which, when run on a computer, cause the computer to perform the subway seat trampling behavior detection method according to the first aspect or any possible design of the first aspect.
In a fifth aspect, the present invention provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the subway seat trampling behavior detection method according to the first aspect or any possible design of the first aspect.
Beneficial effects of the above scheme:
(1) The invention creatively provides a data processing scheme for automatically identifying subway seat trampling behavior from monitoring images inside a subway carriage: first, a subway seat side line identification result and a passenger identification result are obtained by real-time identification from live video data collected by the monitoring camera in the subway carriage; then, for each identified passenger, whether an upward line-crossing situation exists is identified in real time according to the subway seat side line identification result; next, for each line-crossing passenger, the held state of the passenger is identified in real time from the corresponding passenger image; finally, when a line-crossing passenger is found to be in a non-held state, it is determined that the corresponding passenger is trampling on the subway seat and a corresponding dissuading action is triggered, so that video image detection and dissuasion of subway seat trampling behavior can be carried out automatically, the trampling behavior can be discovered and dissuaded in time, the deterrent effect is improved, and the daily maintenance of subway environmental sanitation is facilitated;
(2) Different subway seat side line acquisition modes are selected according to the current passenger congestion condition in the carriage, so that a real-time subway seat side line identification result can still be obtained when the subway seat side line is blocked by crowded passengers, which is convenient for practical application and popularization.
Drawings
In order to more clearly illustrate the embodiments of the invention or the technical solutions in the prior art, the drawings that are required in the embodiments or the description of the prior art will be briefly described, it being obvious that the drawings in the following description are only some embodiments of the invention, and that other drawings may be obtained according to these drawings without inventive effort for a person skilled in the art.
Fig. 1 is a flow chart of the subway seat trampling behavior detection method according to an embodiment of the present application.
Fig. 2 is an exemplary diagram of the region of interest, a human body mark frame and a subway seat side line in a live video image according to an embodiment of the present application.
Fig. 3 is a schematic structural diagram of the subway seat trampling behavior detection device according to an embodiment of the present application.
Fig. 4 is a schematic structural diagram of a computer device according to an embodiment of the present application.
Detailed Description
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the present invention will be briefly described below with reference to the accompanying drawings and the embodiments. It is obvious that the described embodiments are only some embodiments of the present invention, and other embodiments can be obtained from them by a person skilled in the art without inventive effort. It should be noted that the description of these embodiments is intended to aid understanding of the present invention, but is not intended to limit the present invention.
It should be understood that although the terms first and second, etc. may be used herein to describe various objects, these objects should not be limited by these terms. These terms are only used to distinguish one object from another. For example, a first object may be referred to as a second object, and similarly a second object may be referred to as a first object, without departing from the scope of example embodiments of the invention.
It should be understood that the term "and/or" herein merely describes an association relationship between associated objects, meaning that three relationships may exist; for example, "A and/or B" may represent three cases: A alone, B alone, or both A and B. As another example, "A, B and/or C" can represent any one of A, B and C, or any combination thereof. The term "/and" that may appear herein describes another association relationship, meaning that two relationships may exist; for example, "A /and B" may represent the two cases of A alone, or A and B together. In addition, the character "/" herein generally indicates that the associated objects before and after it are in an "or" relationship.
Examples:
As shown in fig. 1, the subway seat trampling behavior detection method provided in the first aspect of this embodiment may be performed by, but not limited to, a computer device that has certain computing resources and is communicatively connected to the monitoring camera in the subway carriage, for example an electronic device such as a vehicle-mounted control center device, a platform server, a personal computer (PC, a multipurpose computer whose size, price and performance make it suitable for personal use; desktop computers, notebook computers, small notebook computers, tablet computers, ultrabooks and the like all belong to personal computers), a smart phone, a personal digital assistant (PDA) or a wearable device. As shown in fig. 1, the subway seat trampling behavior detection method may include, but is not limited to, the following steps S1 to S6.
S1, acquiring live video data collected in real time by a monitoring camera in a subway carriage.
In step S1, the monitoring camera in the subway carriage is an existing, commonly used subway video monitoring camera, mainly used for remote monitoring and data retention inside the subway carriage. The lens field of view of the monitoring camera covers the region inside the subway carriage and is used to collect video frame images of that region in real time, obtaining live video data comprising a number of consecutive video frame images. In addition, the monitoring camera can transmit the acquired data to the local device in a conventional manner.
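For illustration, the following minimal Python sketch shows how step S1 could read such a live stream with OpenCV; the RTSP URL is a placeholder and the generator-style framing is an assumption, not part of the patent.

```python
import cv2

def read_live_frames(stream_url="rtsp://192.168.1.10/car-cam"):  # placeholder URL
    """Yield consecutive BGR frames from the in-carriage monitoring camera."""
    cap = cv2.VideoCapture(stream_url)
    try:
        while True:
            ok, frame = cap.read()  # one video frame of the carriage interior
            if not ok:              # stream ended or dropped
                break
            yield frame
    finally:
        cap.release()
```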
S2, performing subway seat side line identification real-time processing on the live video data by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result.
In step S2, since subway seat trampling is an unhygienic behavior specifically involving the subway seat, it is necessary to determine, from the identification result of the subway seat side line, whether a passenger's body position indicates possible trampling of the seat. Specifically, performing subway seat side line identification real-time processing on the live video data by adopting the subway seat side line identification algorithm to obtain the subway seat side line identification result includes the following steps S21 to S27.
S21, converting the live video image in the live video data into a grayscale image in real time.
In step S21, the live video image in RGB format may be converted into the grayscale image in real time using, but not limited to, the cvtColor function in the cross-platform computer vision library OpenCV. The cvtColor function has three interface parameters: the input image, the output image and the format conversion category.
S22, performing Gaussian filtering on the grayscale image in real time to obtain a denoised image.
In step S22, Gaussian filtering, also called Gaussian blur, may specifically, but not exclusively, be performed in real time on the grayscale image using the GaussianBlur function in OpenCV, so as to remove some noise points from the original image (without Gaussian filtering, insignificant features cannot be avoided when processing the original image directly; after Gaussian blurring, the less distinct noise points are removed). The five interface parameters of the GaussianBlur function are: the input image, the output image, the Gaussian kernel, the standard deviation of the Gaussian kernel in the X direction and the standard deviation of the Gaussian kernel in the Y direction. The Gaussian kernel consists of a width and a height, which may take different values but must each be a positive odd number or 0, while the standard deviations of the Gaussian kernel in the X and Y directions are each typically set to 0.
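A minimal sketch of steps S21 and S22 using the two OpenCV functions named above; the 5×5 kernel size is an illustrative choice, since the patent only requires positive odd (or 0) kernel sides and zero standard deviations.

```python
import cv2

def to_denoised_gray(frame_bgr):
    # S21: colour frame -> single-channel grayscale image
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # S22: Gaussian blur; sigma = 0 lets OpenCV derive both standard
    # deviations from the kernel size, as described above
    return cv2.GaussianBlur(gray, (5, 5), 0)
```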
S23, performing edge detection on the denoised image in real time to obtain an edge detection result image containing edge pixel points, wherein the edge pixel points serve as subway seat side line pixel points.
In step S23, since there is a clear boundary feature between the subway seat surface and the dark area below the seat, the detected edge pixel points can be regarded as subway seat side line pixel points. The edge detection may specifically, but not exclusively, be performed in real time on the denoised image using the Canny function in OpenCV. The Canny function has five interface parameters: the input image, the output image, threshold 1, threshold 2 and the aperture parameter of the Sobel operator, wherein threshold 1 and threshold 2 serve as the basis for judging whether each pixel point is an edge pixel point: a pixel below threshold 1 is considered not to be an edge pixel, a pixel above threshold 2 is considered to be an edge pixel, and a pixel between threshold 1 and threshold 2 is considered an edge pixel only if it is adjacent to a pixel above threshold 2. In addition, the aperture parameter of the Sobel operator generally defaults to 3, i.e. a 3×3 matrix.
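A corresponding sketch of step S23; the threshold pair 50/150 is an assumed working value, as the patent only fixes the roles of threshold 1 and threshold 2.

```python
import cv2

def detect_edges(denoised_gray):
    # pixels between the two thresholds survive only if connected to a
    # strong edge; apertureSize=3 is the default 3x3 Sobel kernel
    return cv2.Canny(denoised_gray, 50, 150, apertureSize=3)
```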
S24, performing mask processing on the edge detection result image in real time according to a preset region of interest to obtain a new edge detection result image containing only the edge pixel points within the region of interest.
In step S24, the edge detection result image obtained by edge detection contains much environmental information that is not of interest, so the desired information needs to be extracted with a mask. As shown in fig. 2, considering that the subway seat side line is generally located in a trapezoidal area in the lower part of the image, 4 points may be set manually in advance as the four corner points of the trapezoidal area, so as to define the region of interest by these corner points; the trapezoidal region may specifically, but not exclusively, be drawn using the fillConvexPoly function in OpenCV. The fillConvexPoly function has four interface parameters: a blank image (the same size as the original), the corner point information, the number of sides of the polygon and the line color. After the region of interest is obtained, it can be used as a trapezoidal mask region in a bitwise_and operation with the edge detection result image, so as to obtain the edge detection result within the region of interest only; as shown in fig. 2, only the subway seat side line information then remains visible. The bitwise_and operation is an existing operation that performs a bitwise AND on two images; its corresponding function has three interface parameters: the mask image (containing the region of interest), the original image (i.e. the edge detection result image) and the output image. It should be noted that the sizes and the numbers of color channels of the three images must be consistent.
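A sketch of step S24, assuming four hand-picked trapezoid corners for a 1280×720 frame (the actual corner coordinates depend on the camera installation and are illustrative here):

```python
import cv2
import numpy as np

# four manually chosen corner points of the trapezoidal region of interest
ROI_CORNERS = np.array([[80, 700], [1200, 700], [900, 420], [380, 420]], np.int32)

def mask_roi(edge_img):
    mask = np.zeros_like(edge_img)              # blank image, same size as input
    cv2.fillConvexPoly(mask, ROI_CORNERS, 255)  # white trapezoid = region of interest
    # keep only the edge pixel points lying inside the region of interest
    return cv2.bitwise_and(edge_img, mask)
```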
S25, performing Hough transform processing on the new edge detection result image in real time to obtain at least one straight line segment forming the subway seat side line.
In step S25, since the edge pixel points in the new edge detection result image are still independent pixel points not connected into lines, straight line segments need to be found in the image from the edge pixel points by Hough transform. There are three existing modes of Hough transform: the standard Hough transform, the multi-scale Hough transform and the cumulative probability Hough transform; the first two use the HoughLines function and the last uses the HoughLinesP function. Since the cumulative probability Hough transform executes more efficiently, it is generally preferred, and this embodiment also employs it. The Hough transform maps a line in the Cartesian coordinate system into the polar coordinate system: the set of all straight lines passing through one point in the Cartesian coordinate system forms a sinusoidal curve in the polar coordinate system, and where such curves intersect, the points they represent lie on the same straight line; the Hough transform therefore finds these intersections to determine which pixel points are on the same straight line. In addition, if the lens of the monitoring camera is arranged at the center of the carriage ceiling and faces along (or against) the direction of travel, the obtained straight line segments can be classified, according to their inclination relative to the longitudinal center line of the image, into left straight line segments forming the left subway seat side line and right straight line segments forming the right subway seat side line.
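A sketch of step S25 with the cumulative probability Hough transform; all numeric parameters are illustrative working values, not values fixed by the patent.

```python
import cv2
import numpy as np

def find_segments(roi_edges):
    # returns segments as [[x1, y1, x2, y2], ...], or None if nothing is found
    return cv2.HoughLinesP(roi_edges, rho=1, theta=np.pi / 180, threshold=30,
                           minLineLength=40, maxLineGap=25)
```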
S26, fitting in real time according to the average slope and average intercept of the at least one straight line segment to obtain a continuous subway seat side line.
In step S26, the subway seat side line may be blocked by passengers' legs and imaged as a broken line, so fitting is required on the straight line segments obtained by the Hough transform to obtain a continuous subway seat side line. Slope and intercept are common mathematical terms; based on the average slope and average intercept of the several straight line segments, a complete continuous straight line can be drawn through the standard linear function.
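A sketch of the fitting in step S26, consuming the segment array from the previous sketch; the two row positions are illustrative, and the code assumes the fitted line is not horizontal (which holds for this camera view).

```python
import numpy as np

def fit_side_line(segments, y_near=700, y_far=420):
    ks, bs = [], []
    for x1, y1, x2, y2 in segments.reshape(-1, 4):
        if x1 == x2:                     # skip vertical segments
            continue
        k = (y2 - y1) / (x2 - x1)        # slope of this segment
        ks.append(k)
        bs.append(y1 - k * x1)           # intercept from y = k*x + b
    k, b = float(np.mean(ks)), float(np.mean(bs))
    # endpoints of one continuous line drawn between two image rows
    p1 = (int((y_near - b) / k), y_near)
    p2 = (int((y_far - b) / k), y_far)
    return k, b, p1, p2
```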
S27, loading the subway seat side line into the live video image in real time to obtain the subway seat side line identification result.
In step S27, the subway seat side line may specifically, but not exclusively, be loaded into the live video image in real time using the addWeighted function in OpenCV. The addWeighted function has six interface parameters: original image 1 (i.e. the image containing the subway seat side line), the weight of image 1, original image 2 (i.e. the live video image), the weight of image 2, a scalar offset (typically set to 0) and the output image.
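A sketch of step S27; the line colour and the two weights are illustrative, with the scalar offset set to 0 as noted above.

```python
import cv2
import numpy as np

def overlay_side_line(frame_bgr, p1, p2):
    line_img = np.zeros_like(frame_bgr)
    cv2.line(line_img, p1, p2, (0, 0, 255), 3)   # seat side line drawn in red
    # addWeighted(img1, weight1, img2, weight2, scalar_offset)
    return cv2.addWeighted(frame_bgr, 1.0, line_img, 0.8, 0)
```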
Thus, the subway seat side line recognition/detection task for each single live video image can be completed through the foregoing steps S21 to S27.
S3, performing passenger identification real-time processing on the live video data by adopting a target detection algorithm to obtain a passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger in the at least one passenger.
In step S3, the target detection algorithm is an existing artificial intelligence recognition algorithm for recognizing the objects in a picture and marking their positions. It may specifically be, but is not limited to, Faster R-CNN (Faster Regions with Convolutional Neural Network features, proposed by He Kaiming et al. in 2015, which took several first places in the ILSVRC and COCO competitions that year), SSD (Single Shot MultiBox Detector, a target detection algorithm proposed by Wei Liu at ECCV 2016 and one of the currently popular main detection frameworks) or YOLO (You Only Look Once, which has by now evolved to the V4 version and is widely applied in industry; its basic principle is to divide the input image into 7×7 grids, predict 2 bounding boxes for each grid, remove target windows whose probability is below a threshold, and finally remove redundant windows by non-maximum suppression), so that passenger identification real-time processing can be performed on the live video data based on the target detection algorithm to obtain the passenger identification result.
Specifically, performing passenger identification real-time processing on the live video data by adopting the target detection algorithm to obtain the passenger identification result includes, but is not limited to: importing the live video image in the live video data into a passenger identification model which is based on the YOLOv4 target detection algorithm and has been trained in advance to obtain the passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger in the at least one passenger. The model structure of the YOLOv4 target detection algorithm consists of three parts: a backbone network (Backbone), a neck network (Neck) and a head network (Head). The backbone network may employ a CSPDarknet53 (CSP stands for Cross Stage Partial) network for extracting features. The neck network consists of an SPP (Spatial Pyramid Pooling) block, used to enlarge the receptive field and separate out the most important features, and a PANet (Path Aggregation Network), used to ensure that semantic features from higher layers and fine-grained features from lower layers of the backbone network are received at the same time. The head network is based on anchor boxes and detects on three feature maps of different sizes, 13×13, 26×26 and 52×52, for detecting large to small objects respectively (a larger feature map carries more spatial detail, so the 52×52 feature map is used to detect small objects, and vice versa). The passenger identification model can be trained in a conventional sample-training manner, so that after a test image is input, information such as the passenger identification result and its confidence prediction value can be output.
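As one possible concrete form of this step (an assumption, since the patent does not fix the inference framework), a Darknet-format YOLOv4 model can be run through OpenCV's DNN module; the file names and thresholds below are placeholders, and only the COCO "person" class (index 0) is kept as a passenger:

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")  # placeholder files
model = cv2.dnn_DetectionModel(net)
model.setInputParams(size=(416, 416), scale=1 / 255.0, swapRB=True)

def detect_passengers(frame_bgr, conf_thr=0.5, nms_thr=0.4):
    class_ids, _scores, boxes = model.detect(frame_bgr, conf_thr, nms_thr)
    if len(class_ids) == 0:
        return []
    # each human body mark frame is (x, y, w, h); keep "person" detections only
    return [tuple(box) for cid, box in zip(np.array(class_ids).flatten(), boxes)
            if cid == 0]
```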
S4, judging, for each passenger in the at least one passenger, in real time according to the subway seat side line identification result, whether the bottom edge center of the corresponding human body mark frame passes upward over the subway seat side line; if so, taking the corresponding passenger as a line-crossing passenger, and intercepting a corresponding passenger image from the live video data according to the corresponding human body mark frame.
In step S4, since the subway seat side line identification result includes the identified subway seat side line, and the feet of a passenger trampling on a subway seat are necessarily located above the subway seat side line, whether the bottom of a passenger's mark frame crosses the line can serve as a precondition for judging whether the trampling behavior exists. Specifically, for each passenger in the at least one passenger, judging in real time according to the subway seat side line identification result whether the bottom edge center of the corresponding human body mark frame passes upward over the subway seat side line includes, but is not limited to, the following steps S41 to S42.
S41, determining corresponding bottom edge center coordinates according to corresponding human body mark frames in real time for each passenger in the at least one passenger.
S42, judging whether the corresponding bottom edge center coordinates are located on the upper side of the identified subway seat side line according to the subway seat side line identification result in real time for each passenger, if so, judging that the bottom edge center of the corresponding human body mark frame upwards crosses the subway seat side line, otherwise, judging that the bottom edge center of the corresponding human body mark frame upwards does not cross the subway seat side line.
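Steps S41 and S42 reduce to a point-versus-line test; a minimal sketch, assuming the side line is the fitted y = k*x + b in image coordinates (where "on the upper side of the line" means a smaller y value):

```python
def crosses_seat_line(box, k, b):
    """box: human body mark frame (x, y, w, h)."""
    cx = box[0] + box[2] / 2.0        # bottom edge center, x coordinate
    cy = box[1] + box[3]              # bottom edge center, y coordinate
    return cy < k * cx + b            # above the seat side line in the image
```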
S5, inputting, for each line-crossing passenger in the at least one passenger, the corresponding passenger image into a pre-trained passenger held-state recognition model based on a convolutional neural network, and outputting a corresponding passenger held-state recognition result.
In step S5, the convolutional neural network (CNN) is a feedforward neural network with a deep structure based on convolutional computation, composed of an input layer, convolutional layers, activation layers, pooling layers, fully connected layers and an output layer, which can perform image recognition and classification through the output layer using the normalized exponential (Softmax) function. After conventional training based on positive/negative sample images corresponding to the held state of passengers, a passenger held-state recognition model is obtained; the passenger image can then be imported into the passenger held-state recognition model to obtain the corresponding classification label recognition result, i.e. an indication that the corresponding passenger is in the held state (for example, a child carried in an adult's arms) or the non-held state, and the accuracy and misjudgment rate can be obtained experimentally on a test sample set. The convolutional neural network may specifically, but not exclusively, use the deep residual network ResNet_34; that is, inputting, for each line-crossing passenger in the at least one passenger, the corresponding passenger image into the pre-trained passenger held-state recognition model based on a convolutional neural network and outputting the corresponding passenger held-state recognition result includes, but is not limited to: inputting, for each line-crossing passenger in the at least one passenger, the corresponding passenger image into a pre-trained passenger held-state recognition model based on the deep residual network ResNet_34, and outputting the corresponding passenger held-state recognition result. The deep residual network ResNet_34 is a network structure with shortcut connections, proposed to address the degradation problem that occurs when conventional convolutional neural networks are deepened (i.e. the prediction error increases as the number of layers increases); it only needs to learn the residual term, and since residual learning is easier than learning the original features, if the learned residual value F(x) is 0 the block is equivalent to an identity mapping. Applying the deep residual network ResNet_34 in this embodiment therefore allows the passengers' held-state recognition results to be obtained quickly and accurately.
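A sketch of the held-state classifier using torchvision's ResNet-34 (assumed here to correspond to the ResNet_34 named in the patent); the weight file, input size and label convention (1 = held) are all illustrative:

```python
import torch
import torch.nn as nn
from torchvision import models, transforms

model = models.resnet34(weights=None)            # plain ResNet-34 backbone
model.fc = nn.Linear(model.fc.in_features, 2)    # two classes: held / not held
model.load_state_dict(torch.load("held_state_resnet34.pt"))  # placeholder weights
model.eval()

preprocess = transforms.Compose([
    transforms.ToPILImage(),
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])

def is_held(passenger_crop_bgr):
    rgb = passenger_crop_bgr[:, :, ::-1].copy()  # OpenCV BGR -> RGB
    x = preprocess(rgb).unsqueeze(0)             # add batch dimension
    with torch.no_grad():
        logits = model(x)
    return logits.argmax(dim=1).item() == 1      # assumed label: 1 = held
```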
S6, for each line-crossing passenger, if the corresponding passenger held-state recognition result indicates that the corresponding passenger is in the non-held state, determining that the corresponding passenger is trampling on the subway seat, and triggering a corresponding dissuading action.
In step S6, as shown in fig. 2, since the bottom edge center of the human body mark frame of passenger A passes over the subway seat side line and passenger A is in the non-held state, it can be determined that passenger A is trampling on a subway seat, and a corresponding dissuading action needs to be triggered, for example notifying a subway inspector to go to the position of passenger A for manual dissuasion. In order to achieve automatic dissuasion, it is preferable that, for a certain line-crossing passenger who has been determined to be trampling on a subway seat, a corresponding dissuading action is triggered, including, but not limited to, the following steps S61 to S62: S61, determining the position of the certain line-crossing passenger in the subway carriage according to the position of the human body mark frame of the certain line-crossing passenger in the live video image of the live video data (since the lens orientation of the monitoring camera in the subway carriage is fixed/determined, the position in the subway carriage can be determined based on a conventional position mapping); S62, according to pre-stored subway voice broadcasting device arrangement data, finding the subway voice broadcasting device closest to the position of the certain line-crossing passenger in the subway carriage, and, when the subway voice broadcasting device is idle (for example, when no station announcement is being made), controlling it to play an audio file dissuading the subway seat trampling behavior.
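A sketch of steps S61 and S62; the speaker layout table, the calibrated homography H, and the device-control helpers (is_idle, play_audio) are all hypothetical names standing in for the pre-stored arrangement data and the broadcast control interface:

```python
import cv2
import numpy as np

SPEAKERS = {"front": (1.0, 0.4), "middle": (5.0, 0.4), "rear": (9.0, 0.4)}  # metres

def trigger_dissuasion(box, H):
    # S61: map the mark frame's bottom edge center from image coordinates to
    # carriage-floor coordinates via a pre-calibrated homography H (valid
    # because the camera lens orientation is fixed)
    cx, cy = box[0] + box[2] / 2.0, box[1] + box[3]
    pos = cv2.perspectiveTransform(np.array([[[cx, cy]]], np.float32), H)[0, 0]
    # S62: pick the closest voice broadcasting device and play the audio file
    nearest = min(SPEAKERS, key=lambda s: np.hypot(SPEAKERS[s][0] - pos[0],
                                                   SPEAKERS[s][1] - pos[1]))
    if is_idle(nearest):                           # hypothetical status check
        play_audio(nearest, "stop_trampling.mp3")  # hypothetical playback call
```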
The subway seat trampling behavior detection method based on the foregoing steps S1 to S6 thus provides a data processing scheme for automatically identifying subway seat trampling behavior from monitoring images inside a subway carriage: first, a subway seat side line identification result and a passenger identification result are obtained by real-time identification from live video data collected by the monitoring camera in the subway carriage; then, for each identified passenger, whether an upward line-crossing situation exists is identified in real time according to the subway seat side line identification result; next, for each line-crossing passenger, the held state of the passenger is identified in real time from the corresponding passenger image; finally, when a line-crossing passenger is found to be in the non-held state, it is determined that the corresponding passenger is trampling on the subway seat and a corresponding dissuading action is triggered. Video image detection and dissuasion of subway seat trampling behavior can therefore be carried out automatically, the trampling behavior can be discovered and dissuaded in time, the deterrent effect is improved, the daily maintenance of subway environmental sanitation is facilitated, and practical application and popularization are facilitated.
This embodiment further provides, on the basis of the technical solution of the first aspect, a possible design one of how to effectively perform the real-time subway seat side line identification processing; that is, performing subway seat side line identification real-time processing on the live video data by adopting the subway seat side line identification algorithm to obtain the subway seat side line identification result includes, but is not limited to, the following steps S201 to S203.
S201, performing passenger statistics processing on the live video images in the live video data to obtain a passenger statistical result.
In step S201, the specific manner of the passenger statistics processing is an existing conventional manner, for example performing real-time passenger identification processing on the live video image using the target detection algorithm and then counting the passengers according to the passenger identification result to obtain the number of passengers.
S202, judging whether the passenger statistical result is less than or equal to a preset passenger number threshold.
In step S202, the purpose of the judgment is to confirm the passenger congestion in the current carriage: if the statistical result is less than or equal to the passenger number threshold, the current carriage is not crowded, the subway seat side line will not be blocked by crowded passengers, and the subway seat side line can subsequently be identified reliably; otherwise, the carriage is crowded and the side line may be blocked. In addition, the passenger number threshold may be set according to the size of the field of view of the monitoring camera in the subway carriage, for example 10 persons.
S203, if yes, identifying a subway seat side line from the live video image and loading it into the live video image in real time to obtain the subway seat side line identification result; otherwise, loading the subway seat side line obtained from a previous video image into the live video image in real time to obtain the subway seat side line identification result, wherein the previous video image is also located in the live video data and refers to the most recent video image that precedes the live video image on the time axis and whose corresponding passenger statistical result is less than or equal to the passenger number threshold.
In step S203, the specific manner of identifying the subway seat side line from the live video image can be found in steps S21 to S26 of the first aspect and is not repeated here. Since the lens orientation of the monitoring camera in the subway carriage is fixed/determined, when the subway seat side line is blocked by crowded passengers, the subway seat side line obtained from the previous video image can be used directly as the current subway seat side line, thereby ensuring a real-time subway seat side line identification result. The specific loading manner can be found in step S27 of the first aspect and is not repeated here.
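A sketch of the whole of possible design one (S201 to S203); the 10-person limit and the helper names reuse the illustrative examples above, with detect_side_line standing in for the hypothetical S21-S26 pipeline:

```python
PASSENGER_LIMIT = 10      # example threshold from the description
_last_line = None         # (k, b) cached from the most recent uncrowded frame

def current_seat_line(frame_bgr):
    """Return the seat side line (k, b) for this frame, per steps S201-S203."""
    global _last_line
    boxes = detect_passengers(frame_bgr)          # S201: reuse the S3 detector
    if len(boxes) <= PASSENGER_LIMIT:             # S202: carriage not crowded
        _last_line = detect_side_line(frame_bgr)  # S203: hypothetical S21-S26 pipeline
    return _last_line                             # otherwise fall back to cached line
```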
Based on the foregoing possible design one, by selecting different subway seat side line acquisition modes according to the passenger congestion in the current carriage, a real-time subway seat side line identification result can be obtained even when the subway seat side line is blocked by crowded passengers.
As shown in fig. 3, a second aspect of this embodiment provides a virtual device implementing the subway seat trampling behavior detection method described in the first aspect or possible design one, comprising a data acquisition module, a subway seat side line identification module, a passenger identification module, a line-crossing judgment module, a passenger state recognition module and a behavior confirmation module;
the data acquisition module is used for acquiring live video data collected in real time by a monitoring camera in a subway carriage;
the subway seat side line identification module is communicatively connected with the data acquisition module and is used for performing subway seat side line identification real-time processing on the live video data by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result;
the passenger identification module is communicatively connected with the data acquisition module and is used for performing passenger identification real-time processing on the live video data by adopting a target detection algorithm to obtain a passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger in the at least one passenger;
the line-crossing judgment module is communicatively connected with the subway seat side line identification module and the passenger identification module respectively, and is used for judging, for each passenger in the at least one passenger, in real time according to the subway seat side line identification result, whether the bottom edge center of the corresponding human body mark frame passes upward over the subway seat side line; if so, taking the corresponding passenger as a line-crossing passenger and intercepting a corresponding passenger image from the live video data according to the corresponding human body mark frame;
the passenger state recognition module is communicatively connected with the line-crossing judgment module and is used for inputting, for each line-crossing passenger in the at least one passenger, the corresponding passenger image into a pre-trained passenger held-state recognition model based on a convolutional neural network, and outputting a corresponding passenger held-state recognition result;
the behavior confirmation module is communicatively connected with the passenger state recognition module and is used for determining, for each line-crossing passenger, that the corresponding passenger is trampling on the subway seat and triggering a corresponding dissuading action if the corresponding passenger held-state recognition result indicates that the corresponding passenger is in a non-held state.
The working process, working details and technical effects of the foregoing device provided in the second aspect of this embodiment can be found in the subway seat trampling behavior detection method described in the first aspect or possible design one, and are not repeated here.
As shown in fig. 4, a third aspect of this embodiment provides a computer device for performing the subway seat trampling behavior detection method described in the first aspect or possible design one, comprising a memory, a processor and a transceiver which are communicatively connected in sequence, wherein the memory is configured to store a computer program, the transceiver is configured to send and receive messages, and the processor is configured to read the computer program and perform the subway seat trampling behavior detection method described in the first aspect or possible design one. By way of specific example, the memory may include, but is not limited to, random access memory (RAM), read-only memory (ROM), flash memory, first-in first-out memory (FIFO) and/or first-in last-out memory (FILO); the processor may be, but is not limited to, a microprocessor of the STM32F105 family. In addition, the computer device may include, but is not limited to, a power module, a display screen and other necessary components.
The working process, working details and technical effects of the foregoing computer device provided in the third aspect of this embodiment can be found in the subway seat trampling behavior detection method described in the first aspect or possible design one, and are not repeated here.
A fourth aspect of this embodiment provides a computer-readable storage medium storing instructions for the subway seat trampling behavior detection method described in the first aspect or possible design one; that is, the computer-readable storage medium has instructions stored thereon which, when run on a computer, perform the subway seat trampling behavior detection method described in the first aspect or possible design one. The computer-readable storage medium refers to a carrier for storing data and may include, but is not limited to, a floppy disk, an optical disk, a hard disk, a flash memory and/or a memory stick, and the computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable device.
The working process, working details and technical effects of the foregoing computer-readable storage medium provided in the fourth aspect of this embodiment may refer to the subway seat trampling behavior detection method described in the first aspect or any of its possible designs, and will not be repeated here.
A fifth aspect of the present embodiment provides a computer program product comprising instructions which, when run on a computer, cause the computer to perform the subway seat trampling behavior detection method described in the first aspect or any of its possible designs. The computer may be a general-purpose computer, a special-purpose computer, a computer network or another programmable apparatus.
Finally, it should be noted that the foregoing description covers only preferred embodiments of the invention and is not intended to limit its protection scope. Any modification, equivalent replacement or improvement made within the spirit and principles of the present invention shall fall within the protection scope of the present invention.

Claims (10)

1. A subway seat trampling behavior detection method, characterized by comprising the following steps:
acquiring live video data collected in real time by a monitoring camera in a subway carriage;
performing subway seat side line identification on the live video data in real time by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result;
performing passenger identification on the live video data in real time by adopting a target detection algorithm to obtain a passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger of the at least one passenger;
judging, in real time for each passenger of the at least one passenger, whether the bottom edge center of the corresponding human body mark frame crosses above the subway seat side line according to the subway seat side line identification result; if so, taking the corresponding passenger as a line-crossing passenger and intercepting a corresponding passenger image from the live video data according to the corresponding human body mark frame;
inputting, for each line-crossing passenger of the at least one passenger, the corresponding passenger image into a passenger embraced-state identification model which is based on a convolutional neural network and has been trained in advance, and outputting a corresponding passenger embraced-state identification result;
and for each line-crossing passenger, if the corresponding passenger embraced-state identification result indicates that the corresponding passenger is in a non-embraced state, determining that the subway seat trampling behavior exists for the corresponding passenger and triggering a corresponding dissuading action.
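By way of non-limiting illustration, the following Python sketch shows how the claimed per-frame steps could be orchestrated. The helpers detect_seat_line, detect_passengers, classify_embraced and trigger_dissuasion are hypothetical placeholders for the steps elaborated in claims 2 to 7 (possible renderings of each are sketched under those claims below), and the assumption that image y-coordinates grow downward follows the usual raster convention.

    def process_frame(frame, roi_mask):
        # Hypothetical orchestration of the claimed per-frame pipeline.
        line = detect_seat_line(frame, roi_mask)       # claim 2: seat side line y = k*x + b
        if line is None:
            return
        k, b = line
        for (x, y, w, h) in detect_passengers(frame):  # claim 4: human body mark frames
            cx, cy = x + w / 2.0, y + h                # bottom edge center of the mark frame
            if cy < k * cx + b:                        # claim 5: center above the line (smaller y)
                crop = frame[int(y):int(y + h), int(x):int(x + w)]  # intercepted passenger image
                if not classify_embraced(crop):        # claim 6: embraced-state model
                    trigger_dissuasion((cx, cy))       # claim 7: nearest voice broadcaster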
2. The subway seat trampling behavior detection method according to claim 1, wherein performing subway seat side line identification on the live video data in real time by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result comprises:
converting a live video image in the live video data into a grayscale image in real time;
performing Gaussian filtering on the grayscale image in real time to obtain a denoised image;
performing edge detection on the denoised image in real time to obtain an edge detection result image containing edge pixel points, wherein the edge pixel points serve as candidate subway seat side line pixel points;
masking the edge detection result image in real time according to a preset region of interest to obtain a new edge detection result image containing only the edge pixel points within the region of interest;
performing a Hough transform on the new edge detection result image in real time to obtain at least one straight line segment for forming the subway seat side line;
fitting a continuous subway seat side line in real time according to the average slope and average intercept of the at least one straight line segment;
and loading the subway seat side line into the live video image in real time to obtain the subway seat side line identification result.
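A plausible OpenCV rendering of this pipeline is sketched below. The claim does not fix the edge detector or any parameter values, so the use of Canny, the 5x5 Gaussian kernel, the Canny thresholds and the Hough parameters are all assumptions.

    import cv2
    import numpy as np

    def detect_seat_line(frame, roi_mask):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)        # grayscale conversion
        blurred = cv2.GaussianBlur(gray, (5, 5), 0)           # Gaussian denoising
        edges = cv2.Canny(blurred, 50, 150)                   # edge pixel points (Canny assumed)
        edges = cv2.bitwise_and(edges, edges, mask=roi_mask)  # keep only the region of interest
        segments = cv2.HoughLinesP(edges, 1, np.pi / 180, 50,
                                   minLineLength=40, maxLineGap=20)
        if segments is None:
            return None
        slopes, intercepts = [], []
        for x1, y1, x2, y2 in segments[:, 0]:
            if x1 != x2:                                      # skip vertical segments
                k = (y2 - y1) / (x2 - x1)
                slopes.append(k)
                intercepts.append(y1 - k * x1)
        if not slopes:
            return None
        k, b = float(np.mean(slopes)), float(np.mean(intercepts))  # averaged slope/intercept fit
        w = frame.shape[1]
        cv2.line(frame, (0, int(b)), (w, int(k * w + b)), (0, 255, 0), 2)  # overlay the fitted line
        return k, b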
3. The subway seat trampling behavior detection method according to claim 1, wherein performing subway seat side line identification on the live video data in real time by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result comprises:
performing passenger counting on a live video image in the live video data to obtain a passenger statistical result;
judging whether the passenger statistical result is less than or equal to a preset passenger number threshold;
and if so, identifying a subway seat side line from the live video image and loading it into the live video image in real time to obtain the subway seat side line identification result; otherwise, loading a subway seat side line obtained from a previous video image into the live video image in real time to obtain the subway seat side line identification result, wherein the previous video image also belongs to the live video data and refers to the most recent video image that precedes the live video image on the time axis and whose corresponding passenger statistical result is less than or equal to the passenger number threshold.
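In code, this gating could look like the short sketch below; the threshold value of 10 and the helper names are assumptions, with detect_seat_line and detect_passengers as sketched under claims 2 and 4.

    cached_line = None  # seat side line (k, b) from the most recent uncrowded frame

    def seat_line_for_frame(frame, roi_mask, passenger_threshold=10):
        # Re-detect the line only when the carriage is uncrowded enough for the
        # seat edge to be visible; otherwise reuse the most recent cached line.
        global cached_line
        if len(detect_passengers(frame)) <= passenger_threshold:
            line = detect_seat_line(frame, roi_mask)
            if line is not None:
                cached_line = line
        return cached_line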
4. The subway seat trampling behavior detection method according to claim 1, wherein performing passenger identification on the live video data in real time by adopting a target detection algorithm to obtain a passenger identification result comprises:
importing a live video image in the live video data into a passenger identification model which is based on the YOLO v4 target detection algorithm and has been trained in advance to obtain a passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger of the at least one passenger.
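A hypothetical passenger detector built on the OpenCV DNN module with standard YOLO v4 weight and configuration files might look as follows; the file names, input size and thresholds are assumptions rather than values fixed by the claim.

    import cv2
    import numpy as np

    net = cv2.dnn.readNetFromDarknet("yolov4.cfg", "yolov4.weights")  # assumed file names
    model = cv2.dnn_DetectionModel(net)
    model.setInputParams(size=(608, 608), scale=1 / 255.0, swapRB=True)

    def detect_passengers(frame, conf_threshold=0.5, nms_threshold=0.4):
        class_ids, scores, boxes = model.detect(frame, conf_threshold, nms_threshold)
        if len(class_ids) == 0:
            return []
        # COCO class 0 is "person"; each box (x, y, w, h) is a human body mark frame
        return [tuple(box) for cid, box in
                zip(np.array(class_ids).flatten(), boxes) if cid == 0]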
5. The subway seat trampling behavior detection method according to claim 1, wherein judging, in real time for each passenger of the at least one passenger, whether the bottom edge center of the corresponding human body mark frame crosses above the subway seat side line according to the subway seat side line identification result comprises:
determining, in real time for each passenger of the at least one passenger, corresponding bottom edge center coordinates according to the corresponding human body mark frame;
and judging, in real time for each passenger, whether the corresponding bottom edge center coordinates lie on the upper side of the identified subway seat side line according to the subway seat side line identification result; if so, judging that the bottom edge center of the corresponding human body mark frame crosses above the subway seat side line, otherwise judging that it does not.
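The geometric test itself is small. A sketch, under the assumption that the identified subway seat side line is represented as y = k*x + b in image coordinates:

    def crosses_above_line(mark_frame, k, b):
        # In image coordinates y grows downward, so the bottom edge center lies
        # on the upper side of the seat side line when its y value is smaller.
        x, y, w, h = mark_frame          # human body mark frame
        cx, cy = x + w / 2.0, y + h      # bottom edge center coordinates
        return cy < k * cx + b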
6. The subway seat trampling behavior detection method according to claim 1, wherein inputting, for each line-crossing passenger of the at least one passenger, the corresponding passenger image into a passenger embraced-state identification model which is based on a convolutional neural network and has been trained in advance, and outputting a corresponding passenger embraced-state identification result, comprises:
inputting, for each line-crossing passenger of the at least one passenger, the corresponding passenger image into a passenger embraced-state identification model which is based on the deep residual network ResNet_34 and has been trained in advance, and outputting a corresponding passenger embraced-state identification result.
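A minimal sketch of such a classifier using torchvision's ResNet-34 follows; the checkpoint name, the two-class head, the class order and the 224x224 input are assumptions about the trained model, not details fixed by the claim.

    import torch
    import torchvision.models as models
    import torchvision.transforms as T

    model = models.resnet34()
    model.fc = torch.nn.Linear(model.fc.in_features, 2)   # {embraced, non-embraced}
    model.load_state_dict(torch.load("embraced_state_resnet34.pth"))  # assumed checkpoint
    model.eval()

    preprocess = T.Compose([
        T.ToPILImage(),                  # expects an RGB array; convert BGR crops first
        T.Resize((224, 224)),
        T.ToTensor(),
        T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
    ])

    def classify_embraced(passenger_image):
        # Returns True when the model predicts the embraced class (index 0 assumed).
        with torch.no_grad():
            logits = model(preprocess(passenger_image).unsqueeze(0))
        return logits.argmax(dim=1).item() == 0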
7. The subway seat trampling behavior detection method according to claim 1, wherein triggering a corresponding dissuading action for a certain line-crossing passenger for whom the subway seat trampling behavior has been determined to exist comprises:
determining the position of the certain line-crossing passenger in the subway carriage according to the position of the corresponding human body mark frame in a live video image of the live video data;
and searching, according to pre-stored arrangement data of subway voice broadcasting devices, for the subway voice broadcasting device closest to the position of the certain line-crossing passenger in the subway carriage, and, when the subway voice broadcasting device is idle, controlling it to play an audio file prompting that the subway seat trampling behavior be stopped.
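A nearest-device lookup of this kind reduces to a minimum-distance search over the arrangement data. The sketch below is entirely hypothetical: the coordinate table, audio file name and the is_idle/play functions stand in for whatever interface the deployed broadcasting devices expose, and the passenger position is assumed to have already been mapped into carriage coordinates.

    import math

    BROADCASTER_POSITIONS = {       # assumed arrangement data: device id -> (x, y) in metres
        "front": (1.5, 0.3),
        "middle": (6.0, 0.3),
        "rear": (10.5, 0.3),
    }

    def trigger_dissuasion(passenger_position, audio_file="stop_trampling.wav"):
        nearest = min(BROADCASTER_POSITIONS,
                      key=lambda dev: math.dist(BROADCASTER_POSITIONS[dev], passenger_position))
        if is_idle(nearest):        # play only when the device is idle
            play(nearest, audio_file)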
8. A subway seat trampling behavior detection device, characterized by comprising a data acquisition module, a subway seat side line identification module, a passenger identification module, a line crossing judging module, a passenger state identification module and a behavior confirmation module, wherein:
the data acquisition module is configured to acquire live video data collected in real time by a monitoring camera in a subway carriage;
the subway seat side line identification module is in communication connection with the data acquisition module and is configured to perform subway seat side line identification on the live video data in real time by adopting a subway seat side line identification algorithm to obtain a subway seat side line identification result;
the passenger identification module is in communication connection with the data acquisition module and is configured to perform passenger identification on the live video data in real time by adopting a target detection algorithm to obtain a passenger identification result, wherein the passenger identification result comprises at least one identified passenger and a human body mark frame of each passenger of the at least one passenger;
the line crossing judging module is in communication connection with the subway seat side line identification module and the passenger identification module respectively and is configured to judge, in real time for each passenger of the at least one passenger, whether the bottom edge center of the corresponding human body mark frame crosses above the subway seat side line according to the subway seat side line identification result, and, if so, to take the corresponding passenger as a line-crossing passenger and intercept a corresponding passenger image from the live video data according to the corresponding human body mark frame;
the passenger state identification module is in communication connection with the line crossing judging module and is configured to input, for each line-crossing passenger of the at least one passenger, the corresponding passenger image into a passenger embraced-state identification model which is based on a convolutional neural network and has been trained in advance, and to output a corresponding passenger embraced-state identification result;
and the behavior confirmation module is in communication connection with the passenger state identification module and is configured to determine, for each line-crossing passenger whose embraced-state identification result indicates a non-embraced state, that the subway seat trampling behavior exists for the corresponding passenger, and to trigger a corresponding dissuading action.
9. A computer device, comprising a memory, a processor and a transceiver which are communicatively connected in sequence, wherein the memory is configured to store a computer program, the transceiver is configured to send and receive messages, and the processor is configured to read the computer program and execute the subway seat trampling behavior detection method according to any one of claims 1 to 7.
10. A computer-readable storage medium having instructions stored thereon that, when executed on a computer, perform the subway seat trampling behavior detection method according to any one of claims 1 to 7.
CN202310217180.8A 2023-03-08 2023-03-08 Subway seat trampling behavior detection method, device, equipment and storage medium Active CN116486324B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310217180.8A CN116486324B (en) 2023-03-08 2023-03-08 Subway seat trampling behavior detection method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202310217180.8A CN116486324B (en) 2023-03-08 2023-03-08 Subway seat trampling behavior detection method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN116486324A CN116486324A (en) 2023-07-25
CN116486324B (en) 2024-02-09

Family

ID=87218454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310217180.8A Active CN116486324B (en) 2023-03-08 2023-03-08 Subway seat trampling behavior detection method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN116486324B (en)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11106902B2 (en) * 2018-03-13 2021-08-31 Adobe Inc. Interaction detection model for identifying human-object interactions in image content
US11485383B2 (en) * 2019-12-06 2022-11-01 Robert Bosch Gmbh System and method for detecting and mitigating an unsafe condition in a vehicle

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105959624A (en) * 2016-05-03 2016-09-21 方筠捷 Examination room monitoring data processing method and automatic monitoring system thereof
CN112070823A (en) * 2020-08-28 2020-12-11 武汉亘星智能技术有限公司 Video identification-based automobile intelligent cabin adjusting method, device and system
KR20220073412A (en) * 2020-11-26 2022-06-03 현대자동차주식회사 Passenger monitoring system and method, and getting on/off recognition method using the same

Non-Patent Citations (3)

* Cited by examiner, † Cited by third party
Title
"Learning Sparse Representation With Variational Auto-Encoder for Anomaly Detection";J. Sun等;《IEEE Access》;第6卷;全文 *
基于图像识别的公共图书馆座位检测系统研究;朱云琪;蒋;张轶;;电子世界(第03期);全文 *
安防和乘客异动在途监测系统设计;何晔;奉泽熙;;机车电传动(第04期);全文 *

Also Published As

Publication number Publication date
CN116486324A (en) 2023-07-25

Similar Documents

Publication Publication Date Title
WO2020103893A1 (en) Lane line property detection method, device, electronic apparatus, and readable storage medium
CN110414417A (en) A kind of traffic mark board recognition methods based on multi-level Fusion multi-scale prediction
CN103824081A (en) Method for detecting rapid robustness traffic signs on outdoor bad illumination condition
CN111967396A (en) Processing method, device and equipment for obstacle detection and storage medium
CN111414807A (en) Tidal water identification and crisis early warning method based on YOLO technology
CN116110036B (en) Electric power nameplate information defect level judging method and device based on machine vision
CN113065379B (en) Image detection method and device integrating image quality and electronic equipment
CN113011338A (en) Lane line detection method and system
CN114724063A (en) Road traffic incident detection method based on deep learning
CN116486324B (en) Subway seat trampling behavior detection method, device, equipment and storage medium
CN117132990A (en) Railway carriage information identification method, device, electronic equipment and storage medium
CN115512315B (en) Non-motor vehicle child riding detection method, electronic equipment and storage medium
CN112001336A (en) Pedestrian boundary crossing alarm method, device, equipment and system
CN116311205A (en) License plate recognition method, license plate recognition device, electronic equipment and storage medium
CN116052090A (en) Image quality evaluation method, model training method, device, equipment and medium
CN115953744A (en) Vehicle identification tracking method based on deep learning
CN113139488B (en) Method and device for training segmented neural network
US20210012472A1 (en) Adaptive video subsampling for energy efficient object detection
CN116052440B (en) Vehicle intention plug behavior identification method, device, equipment and storage medium
CN114092818A (en) Semantic segmentation method and device, electronic equipment and storage medium
CN116665140B (en) Method, device, equipment and storage medium for detecting shared single vehicle-mounted human behavior
CN109214434A (en) A kind of method for traffic sign detection and device
CN111401104B (en) Classification model training method, classification method, device, equipment and storage medium
CN115376025A (en) Unmanned aerial vehicle target detection method, system, equipment and storage medium
CN117934417A (en) Method, device, equipment and medium for identifying apparent defects of road based on neural network

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant