CN112528825A - Station passenger recruitment service method based on image recognition - Google Patents

Station passenger recruitment service method based on image recognition Download PDF

Info

Publication number
CN112528825A
CN112528825A CN202011414409.XA CN202011414409A CN112528825A CN 112528825 A CN112528825 A CN 112528825A CN 202011414409 A CN202011414409 A CN 202011414409A CN 112528825 A CN112528825 A CN 112528825A
Authority
CN
China
Prior art keywords
video image
abnormal behavior
channel picture
platform
video
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202011414409.XA
Other languages
Chinese (zh)
Inventor
刘文龙
包峰
闻一龙
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Traffic Control Technology TCT Co Ltd
Original Assignee
Traffic Control Technology TCT Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Traffic Control Technology TCT Co Ltd filed Critical Traffic Control Technology TCT Co Ltd
Priority to CN202011414409.XA priority Critical patent/CN112528825A/en
Publication of CN112528825A publication Critical patent/CN112528825A/en
Pending legal-status Critical Current

Links

Images

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • G06F18/20Analysing
    • G06F18/21Design or setup of recognition systems or techniques; Extraction of features in feature space; Blind source separation
    • G06F18/214Generating training patterns; Bootstrap methods, e.g. bagging or boosting
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/04Architecture, e.g. interconnection topology
    • G06N3/045Combinations of networks
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06NCOMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00Computing arrangements based on biological models
    • G06N3/02Neural networks
    • G06N3/08Learning methods
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/50Context or environment of the image
    • G06V20/52Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V20/53Recognition of crowd images, e.g. recognition of crowd congestion
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V2201/00Indexing scheme relating to image or video recognition or understanding
    • G06V2201/07Target detection

Abstract

Embodiments of the present disclosure provide a station passenger assistance service method, apparatus, device and computer-readable storage medium based on image recognition. The method comprises acquiring a station video image; identifying the video image through a preset target detection algorithm to obtain a 2-channel picture; inputting the 2-channel picture into a pre-trained behavior recognition model, and determining an abnormal behavior category corresponding to the 2-channel picture; and sending the video image corresponding to the abnormal behavior type to a display screen of a comprehensive control room, a cloud end and/or a handheld terminal of a platform worker according to the abnormal behavior type. In this way, real-time detection of the platform abnormal condition and abnormal information pushing are realized, and the working intensity of platform workers is reduced.

Description

Station passenger recruitment service method based on image recognition
Technical Field
Embodiments of the present disclosure relate generally to the field of rail transit, and more particularly, to a station passenger recruitment service method, apparatus, device, and computer-readable storage medium based on image recognition.
Background
The construction and operation of urban rail transit in China are entering into the period of large-scale and networked operation. However, with the continuous expansion of the domestic rail transit operation scale, the passenger flow volume is increased rapidly, and the operation safety of the urban rail transit system faces new challenges
At present, rail transit passenger flow identification research is in a starting stage, safety state judgment still stays in a manual monitoring mode, monitoring images provided by cameras in specific areas are mainly observed by workers, and passenger behavior identification initiative perception capability is not provided. Meanwhile, because of the limitation of the camera, it is difficult to monitor the passenger in all directions.
Therefore, innovation is used as a drive, new technologies such as artificial intelligence, cloud computing, internet of things and big data are widely applied, technical researches such as environment sensing and equipment fusion sharing are carried out on the platform, existing resources are efficiently and comprehensively utilized, a more safe, reliable, economical and efficient sensing system is developed, and the development direction becomes an important development direction at present.
Disclosure of Invention
According to an embodiment of the present disclosure, a station passenger recruitment service scheme based on image recognition is provided.
In a first aspect of the present disclosure, a station passenger recruitment service method based on image recognition is provided. The method comprises the following steps:
acquiring a platform video image;
identifying the video image through a preset target detection algorithm to obtain a 2-channel picture;
inputting the 2-channel picture into a pre-trained behavior recognition model, and determining an abnormal behavior category corresponding to the 2-channel picture;
and sending the video image corresponding to the abnormal behavior type to a display screen of a comprehensive control room, a cloud end and/or a handheld terminal of a platform worker according to the abnormal behavior type.
Further, the acquiring the station video image comprises:
and acquiring a platform video image through the existing camera and/or the perception camera.
Further, the behavior recognition model is an SK-CNN convolutional neural network model and comprises an input layer, a convolutional layer, a pooling layer, a first full-connection layer, a second full-connection layer and an output layer.
Further, the behavior recognition model is obtained by training through the following steps:
generating a training sample set, wherein the training sample comprises a 2-channel picture obtained according to the abnormal behavior video and a corresponding abnormal behavior category;
and taking the 2-channel picture as input, taking the abnormal behavior category corresponding to the 2-channel picture as output, and training the behavior recognition model by adopting a softmax activation function.
Further, the generating training samples comprises:
acquiring an abnormal behavior video image;
and identifying the video through a preset target detection algorithm to obtain 2-channel pictures, and labeling corresponding action types.
Further, the identifying the video image through a preset target detection algorithm to obtain a 2-channel picture includes:
acquiring N video frames of the video images according to a preset time interval; n is a positive integer greater than or equal to 1;
performing key point identification on the video frame through a Yolo v3 target detection algorithm to obtain key point pixel coordinates, and simultaneously recording the frame number of the video frame and the key point sequence;
and converting the pixel coordinates of the key points into normalized coordinates according to the preset width and height of the picture, the frame number of the video frames and the key point sequence to obtain a 2-channel picture.
In a second aspect of the present disclosure, there is provided a station passenger recruitment service device based on image recognition. The device includes:
the acquisition module is used for acquiring a platform video image;
the edge calculation module is used for identifying and processing the video image through a preset target detection algorithm to obtain a 2-channel picture;
the training module is used for inputting the 2-channel pictures into a pre-trained behavior recognition model and determining the abnormal behavior categories corresponding to the 2-channel pictures;
and the integrated control module is used for sending the video image corresponding to the abnormal behavior type to a display screen of a comprehensive control room, a cloud end and/or a handheld terminal of a platform worker.
In a third aspect of the disclosure, an electronic device is provided. The electronic device includes: a memory having a computer program stored thereon and a processor implementing the method as described above when executing the program.
In a fourth aspect of the present disclosure, a computer readable storage medium is provided, having stored thereon a computer program, which when executed by a processor, implements a method as in accordance with the first aspect of the present disclosure.
According to the station passenger recruitment service method based on image recognition, the platform video image is obtained; identifying the video image through a preset target detection algorithm to obtain a 2-channel picture; inputting the 2-channel picture into a pre-trained behavior recognition model, and determining an abnormal behavior category corresponding to the 2-channel picture; and sending the video image corresponding to the abnormal behavior type to a display screen of a comprehensive control room, a cloud end and/or a handheld terminal of a platform worker according to the abnormal behavior type. The real-time detection and abnormal information pushing of the platform abnormal conditions are realized, and the working intensity of platform workers is reduced. Meanwhile, the active sensing capability of the abnormal conditions of the platform is improved, and the riding safety of passengers at the platform is maintained.
It should be understood that the statements herein reciting aspects are not intended to limit the critical or essential features of the embodiments of the present disclosure, nor are they intended to limit the scope of the present disclosure. Other features of the present disclosure will become apparent from the following description.
Drawings
The above and other features, advantages and aspects of various embodiments of the present disclosure will become more apparent by referring to the following detailed description when taken in conjunction with the accompanying drawings. In the drawings, like or similar reference characters designate like or similar elements, and wherein:
FIG. 1 illustrates a schematic diagram of an exemplary operating environment in which embodiments of the present disclosure can be implemented;
fig. 2 illustrates a flowchart of a station passenger call service method based on image recognition according to an embodiment of the present disclosure;
FIG. 3 shows a schematic diagram of a fall exception behavior according to an embodiment of the present disclosure;
fig. 4 illustrates a block diagram of a station passenger call service apparatus based on image recognition according to an embodiment of the present disclosure;
FIG. 5 illustrates a block diagram of an exemplary electronic device capable of implementing embodiments of the present disclosure.
Detailed Description
To make the objects, technical solutions and advantages of the embodiments of the present disclosure more clear, the technical solutions of the embodiments of the present disclosure will be described clearly and completely with reference to the drawings in the embodiments of the present disclosure, and it is obvious that the described embodiments are some, but not all embodiments of the present disclosure. All other embodiments, which can be derived by a person skilled in the art from the embodiments disclosed herein without making any creative effort, shall fall within the protection scope of the present disclosure.
In addition, the term "and/or" herein is only one kind of association relationship describing an associated object, and means that there may be three kinds of relationships, for example, a and/or B, which may mean: a exists alone, A and B exist simultaneously, and B exists alone. In addition, the character "/" herein generally indicates that the former and latter related objects are in an "or" relationship.
FIG. 1 illustrates a schematic diagram of an exemplary operating environment 100 in which embodiments of the present disclosure can be implemented. Included in the runtime environment 100 are a client 101, a network 102, and a server 103.
It should be understood that the number of user clients, networks, and servers in FIG. 1 is merely illustrative. There may be any number of terminal devices, networks, and servers, as desired for implementation. In particular, in the case where the target data does not need to be acquired from a remote place, the above system architecture may not include a network but only a terminal device or a server.
Fig. 2 shows a flowchart of a station passenger call service method 200 based on image recognition according to an embodiment of the present disclosure. As shown in fig. 2, the method for the passenger to call for assistance at the station based on image recognition comprises the following steps:
s210, a platform video image is obtained.
In this embodiment, the execution subject (for example, the server shown in fig. 1) of the station passenger assistance service method based on image recognition may acquire the platform video image in a wired manner or a wireless connection manner.
Optionally, for the service coverage condition of the monitoring probe, a sensing camera is additionally added to the platform to realize no dead angle coverage of the detection area. Namely, the existing monitoring probe is reused, meanwhile, a sensing camera is additionally arranged in the uncovered area of the existing probe, and a video image of an area to be detected, such as a platform video image, is acquired through the existing probe and the sensing camera additionally arranged.
Optionally, in the present disclosure:
existing cameras, which can adopt an M12 connector (interface);
the perception camera can adopt an Ethernet interface which is connected by an aviation plug and is provided with PoE power supply.
S220, identifying the video image through a preset target detection algorithm to obtain a 2-channel picture.
Optionally, the N video frames of the video image are acquired according to a preset time interval. The time interval can be set according to an actual application scene, for example, the time interval is set to be 1s, that is, 1 second is used for obtaining 1 time of video frames of the video image; n is a positive integer greater than or equal to 1, for example 18, i.e. the category of the current video image is determined by 18 consecutive video frames.
Optionally, the video frame is identified through a Yolo v3 algorithm and stored as a picture of a preset target frame range. The range of the target frame can be set according to the actual application scene, for example, 18x18, that is, the width and height of the target frame are both 18.
Optionally, extracting key points of the stored picture, where the key points are usually human skeleton key points, obtaining pixel coordinates of the key points, and recording the frame number of the video frame and the key point sequence.
The pixel coordinates of 18 key points in the picture are usually extracted, because in general, the identification of abnormal actions can be completed by 18 key points. Other numbers of key points can be set according to the actual application scene.
Optionally, the extracted pixel coordinates are formulated
Figure BDA0002817309220000061
And converting the image into a normalized coordinate to obtain a 2-channel image. Namely, converting the time sequence information of the key points into 2-channel spatial information;
wherein T represents a frame number;
the N represents a key point sequence;
the width represents the width of the picture:
the height represents the height of the picture.
And S230, inputting the 2-channel picture into a pre-trained behavior recognition model, and determining the abnormal behavior category corresponding to the 2-channel picture.
Optionally, the behavior recognition model may be obtained through training of an SK-CNN convolutional neural network model.
Specifically, a training sample set is generated, where the training samples include 2-channel pictures obtained according to the abnormal behavior video and corresponding abnormal behavior categories.
Further, the abnormal behaviors include falling, waving for help, fighting, entrepreneurship, crowd dispersion behaviors, and the like.
And taking the 2-channel picture as input, taking the abnormal behavior category corresponding to the 2-channel picture as output, and training the behavior recognition model by adopting a softmax activation function through an Nvidia DGX deep learning server.
Alternatively, the training samples may be generated by:
and acquiring video images of different scenes, different crowds and different abnormal behaviors. The method and the device have the advantages that collection is carried out according to different scenes of different crowds, dependence of recognition results on individual crowd recognition is reduced, and model robustness is improved. In practical application, when the video images of abnormal behaviors collected (stored) at the station are few, the video image samples can be collected by simulating the action behaviors on site by personnel and/or collecting big data of the internet and the like.
And identifying the video image through a Yolo v3 target detection algorithm to obtain a 2-channel picture, and labeling the corresponding action type. The process of generating the 2-channel picture may be a corresponding process in step S220, and is not described herein again.
Optionally, the labeling the corresponding action category includes: the land-0, the waving call-1, the fighting-2, the discussion art-3 and the crowd dispersion-4 … … are not listed.
Further, the action category may be determined according to a video frame (e.g., a flip action, an entrepreneur action, etc.) or a plurality of consecutive video frames (which need to be determined by consecutive actions, such as crowd quartering, etc.). Namely, by using a Yolo v3 target detection algorithm, the key point coordinates in the video frame are identified, and the action type of the current video frame is determined according to the key point coordinates.
Optionally, the 2-channel picture is used as an input and input to an SK-CNN convolutional neural network model, so as to obtain an abnormal behavior category (output) corresponding to the 2-channel picture.
Optionally, the SK-CNN convolutional neural network model may include an input layer, a convolutional layer, a pooling layer, a first fully-connected layer, a second fully-connected layer, an output layer, and the like.
Specifically, the method comprises the following steps:
(1) an input layer: the parameter is 18 × 18 × 2, where 18 denotes the picture size (preset) and 2 denotes the picture channel.
(2) And (3) rolling layers: the parameters are 3 × 3, the convolution kernel depth is 6, all 0 padding is not used, the convolution step size is 1, and the output matrix size is 16 × 16 × 6.
(3) A pooling layer: the pooling layer convolution kernel size is 2 × 2, is filled with 0 when not in use, the convolution step is 2, and the output matrix size is 8 × 8 × 6.
(4) And (3) rolling layers: the convolution kernel size is 2 × 2, the convolution kernel depth is 16, all 0 padding is not used, the convolution step size is 1, and the output matrix size is 4 × 4 × 16.
(5) Full connection layer: the number of fully connected neurons was 120.
(6) Full connection layer: the number of fully connected neurons was 64.
(7) An output layer: the output node is 5, representing 5 abnormal behavior categories (falling to the ground-0, waving for help-1, fighting-2, discussion-3, crowd scattered behavior-4).
And S240, according to the abnormal behavior category, sending the corresponding abnormal behavior picture to a display screen of a comprehensive control room, a cloud end and/or a handheld terminal of a platform worker.
Optionally, according to the abnormal behavior category, the video image corresponding to the abnormal behavior category is sent to a platform cloud center (cloud end), an alarm display large screen (comprehensive control room display screen) and/or a mobile terminal (platform personnel handheld terminal) in a wired and/or wireless mode.
Alternatively, the video image may be one or more abnormal behavior pictures (video frames), or may be a video composed of a plurality of consecutive video frames (consecutive abnormal behavior actions).
For example, if the abnormal behavior is a falling place, the video frame (refer to fig. 3) corresponding to the falling place behavior is directly sent to the platform personnel handheld terminal through the 4G network.
According to the embodiment of the disclosure, the following technical effects are achieved:
the existing monitoring probe and the sensing camera of the platform are used for completing image acquisition, the deep learning technology is combined, and the algorithm and the model of abnormal behavior detection are used for realizing behavior identification and alarm of station passengers such as falling down (falling down), calling for help and the like. And sending the recognition result to a display screen of the platform comprehensive control room and a handheld terminal of platform staff for displaying. The industry application blank of platform passenger behavior perception based on the image recognition technology is filled, and the passenger waiting safety is improved.
It is noted that while for simplicity of explanation, the foregoing method embodiments have been described as a series of acts or combination of acts, it will be appreciated by those skilled in the art that the present disclosure is not limited by the order of acts, as some steps may, in accordance with the present disclosure, occur in other orders and concurrently. Further, those skilled in the art should also appreciate that the embodiments described in the specification are exemplary embodiments and that acts and modules referred to are not necessarily required by the disclosure.
The above is a description of embodiments of the method, and the embodiments of the apparatus are further described below.
Fig. 4 shows a block diagram of a station passenger recruitment service device 400 based on image recognition according to an embodiment of the disclosure. As shown in fig. 4, the apparatus 400 includes:
an obtaining module 410, configured to obtain a platform video image;
the edge calculation module 420 is configured to perform recognition processing on the video image through a preset target detection algorithm to obtain a 2-channel picture;
the training module 430 is configured to input the 2-channel picture into a pre-trained behavior recognition model, and determine an abnormal behavior category corresponding to the 2-channel picture;
and the integrated control module 440 is used for sending the video image corresponding to the abnormal behavior type to a display screen of a comprehensive control room, a cloud end and/or a handheld terminal of a platform worker according to the abnormal behavior type.
It can be clearly understood by those skilled in the art that, for convenience and brevity of description, the specific working process of the described module may refer to the corresponding process in the foregoing method embodiment, and is not described herein again.
FIG. 5 shows a schematic block diagram of an electronic device 500 that may be used to implement embodiments of the present disclosure. As shown, device 500 includes a Central Processing Unit (CPU)501 that may perform various appropriate actions and processes in accordance with computer program instructions stored in a Read Only Memory (ROM)502 or loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM503, various programs and data required for the operation of the device 500 can also be stored. The CPU 501, ROM 502, and RAM503 are connected to each other via a bus 504. An input/output (I/O) interface 505 is also connected to bus 504.
A number of components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, or the like; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508, such as a magnetic disk, optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices through a computer network such as the internet and/or various telecommunication networks.
The processing unit 501 performs the various methods and processes described above. For example, in some embodiments, the methods may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as storage unit 508. In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into RAM503 and executed by CPU 501, one or more steps of the method described above may be performed. Alternatively, in other embodiments, CPU 501 may be configured to perform the method by any other suitable means (e.g., by way of firmware).
The functions described herein above may be performed, at least in part, by one or more hardware logic components. For example, without limitation, exemplary types of hardware logic components that may be used include: a Field Programmable Gate Array (FPGA), an Application Specific Integrated Circuit (ASIC), an Application Specific Standard Product (ASSP), a system on a chip (SOC), a load programmable logic device (CPLD), and the like.
Program code for implementing the methods of the present disclosure may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer, or other programmable data processing apparatus, such that the program codes, when executed by the processor or controller, cause the functions/operations specified in the flowchart and/or block diagram to be performed. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package partly on the machine and partly on a remote machine or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. A machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
Further, while operations are depicted in a particular order, this should be understood as requiring that such operations be performed in the particular order shown or in sequential order, or that all illustrated operations be performed, to achieve desirable results. Under certain circumstances, multitasking and parallel processing may be advantageous. Likewise, while several specific implementation details are included in the above discussion, these should not be construed as limitations on the scope of the disclosure. Certain features that are described in the context of separate embodiments can also be implemented in combination in a single implementation. Conversely, various features that are described in the context of a single implementation can also be implemented in multiple implementations separately or in any suitable subcombination.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims.
Although the subject matter has been described in language specific to structural features and/or methodological acts, it is to be understood that the subject matter defined in the appended claims is not necessarily limited to the specific features or acts described above. Rather, the specific features and acts described above are disclosed as example forms of implementing the claims, and the scope of the invention is not limited thereto, as modifications and substitutions may be readily made by those skilled in the art without departing from the spirit and scope of the invention as disclosed herein.

Claims (9)

1. A station passenger recruitment service method based on image recognition is characterized by comprising the following steps:
acquiring a platform video image;
identifying the video image through a preset target detection algorithm to obtain a 2-channel picture;
inputting the 2-channel picture into a pre-trained behavior recognition model, and determining an abnormal behavior category corresponding to the 2-channel picture;
and sending the video image corresponding to the abnormal behavior type to a display screen of a comprehensive control room, a cloud end and/or a handheld terminal of a platform worker according to the abnormal behavior type.
2. The method of claim 1, wherein the obtaining the station video image comprises:
and acquiring a platform video image through the existing camera and/or the perception camera.
3. The method of claim 2, wherein the behavior recognition model is an SK-CNN convolutional neural network model comprising an input layer, a convolutional layer, a pooling layer, a first fully-connected layer, a second fully-connected layer, and an output layer.
4. The method of claim 3, wherein the behavior recognition model is trained by:
generating a training sample set, wherein the training sample comprises a 2-channel picture obtained according to the abnormal behavior video and a corresponding abnormal behavior category;
and taking the 2-channel picture as input, taking the abnormal behavior category corresponding to the 2-channel picture as output, and training the behavior recognition model by adopting a softmax activation function.
5. The method of claim 4, wherein generating training samples comprises:
acquiring an abnormal behavior video image;
and identifying the video image through a preset target detection algorithm to obtain a 2-channel picture, and labeling the corresponding action type.
6. The method according to claim 5, wherein the identifying the video image by a preset target detection algorithm to obtain a 2-channel picture comprises:
acquiring N video frames of the video images according to a preset time interval; n is a positive integer greater than or equal to 1;
performing key point identification on the video frame through a Yolo v3 target detection algorithm to obtain key point pixel coordinates, and simultaneously recording the frame number of the video frame and the key point sequence;
and converting the pixel coordinates of the key points into normalized coordinates according to the preset width and height of the picture, the frame number of the video frames and the key point sequence to obtain a 2-channel picture.
7. A station passenger recruitment service device based on image recognition is characterized by comprising:
the acquisition module is used for acquiring a platform video image;
the edge calculation module is used for identifying and processing the video image through a preset target detection algorithm to obtain a 2-channel picture;
the training module is used for inputting the 2-channel pictures into a pre-trained behavior recognition model and determining the abnormal behavior categories corresponding to the 2-channel pictures;
and the integrated control module is used for sending the video image corresponding to the abnormal behavior type to a display screen of a comprehensive control room, a cloud end and/or a handheld terminal of a platform worker.
8. An electronic device comprising a memory and a processor, the memory having stored thereon a computer program, wherein the processor, when executing the program, implements the method of any of claims 1-6.
9. A computer-readable storage medium, on which a computer program is stored, which program, when being executed by a processor, carries out the method of any one of claims 1 to 6.
CN202011414409.XA 2020-12-04 2020-12-04 Station passenger recruitment service method based on image recognition Pending CN112528825A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202011414409.XA CN112528825A (en) 2020-12-04 2020-12-04 Station passenger recruitment service method based on image recognition

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202011414409.XA CN112528825A (en) 2020-12-04 2020-12-04 Station passenger recruitment service method based on image recognition

Publications (1)

Publication Number Publication Date
CN112528825A true CN112528825A (en) 2021-03-19

Family

ID=74997824

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202011414409.XA Pending CN112528825A (en) 2020-12-04 2020-12-04 Station passenger recruitment service method based on image recognition

Country Status (1)

Country Link
CN (1) CN112528825A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113974998A (en) * 2021-07-08 2022-01-28 北京理工华汇智能科技有限公司 Intelligent walking aid and consciousness state monitoring method and system

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263623A (en) * 2019-05-07 2019-09-20 平安科技(深圳)有限公司 Train climbs monitoring method, device, terminal and storage medium
CN110264651A (en) * 2019-05-07 2019-09-20 平安科技(深圳)有限公司 Railway platform pedestrian gets over line monitoring method, device, terminal and storage medium
CN112001514A (en) * 2020-08-19 2020-11-27 交控科技股份有限公司 Intelligent passenger service system
CN112016528A (en) * 2020-10-20 2020-12-01 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263623A (en) * 2019-05-07 2019-09-20 平安科技(深圳)有限公司 Train climbs monitoring method, device, terminal and storage medium
CN110264651A (en) * 2019-05-07 2019-09-20 平安科技(深圳)有限公司 Railway platform pedestrian gets over line monitoring method, device, terminal and storage medium
CN112001514A (en) * 2020-08-19 2020-11-27 交控科技股份有限公司 Intelligent passenger service system
CN112016528A (en) * 2020-10-20 2020-12-01 成都睿沿科技有限公司 Behavior recognition method and device, electronic equipment and readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
周成尧 等: "城市轨道交通智能列车乘客服务系统研究", 《现代城市轨道交通》, no. 8, 31 August 2020 (2020-08-31), pages 20 - 25 *
孙宝聪: "基于图像检测的机场人员异常行为分析技术研究", 《数字通信世界》, no. 1, 31 January 2020 (2020-01-31), pages 36 *

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113974998A (en) * 2021-07-08 2022-01-28 北京理工华汇智能科技有限公司 Intelligent walking aid and consciousness state monitoring method and system

Similar Documents

Publication Publication Date Title
CN112950773A (en) Data processing method and device based on building information model and processing server
CN112449147B (en) Video cluster monitoring system of photovoltaic power station and image processing method thereof
CN111191507A (en) Safety early warning analysis method and system for smart community
CN113888514A (en) Method and device for detecting defects of ground wire, edge computing equipment and storage medium
CN113284144B (en) Tunnel detection method and device based on unmanned aerial vehicle
US20230017578A1 (en) Image processing and model training methods, electronic device, and storage medium
CN114332977A (en) Key point detection method and device, electronic equipment and storage medium
CN115620208A (en) Power grid safety early warning method and device, computer equipment and storage medium
CN113723184A (en) Scene recognition system, method and device based on intelligent gateway and intelligent gateway
CN115205780A (en) Construction site violation monitoring method, system, medium and electronic equipment
CN113887318A (en) Embedded power violation detection method and system based on edge calculation
CN111860187A (en) High-precision worn mask identification method and system
CN115082813A (en) Detection method, unmanned aerial vehicle, detection system and medium
CN110390226B (en) Crowd event identification method and device, electronic equipment and system
CN114332925A (en) Method, system and device for detecting pets in elevator and computer readable storage medium
CN112528825A (en) Station passenger recruitment service method based on image recognition
CN113505704A (en) Image recognition personnel safety detection method, system, equipment and storage medium
CN111310595B (en) Method and device for generating information
CN112215567A (en) Production flow compliance checking method and system, storage medium and terminal
CN112529836A (en) High-voltage line defect detection method and device, storage medium and electronic equipment
CN115190277B (en) Safety monitoring method, device and equipment for construction area and storage medium
CN115083229B (en) Intelligent recognition and warning system of flight training equipment based on AI visual recognition
Chang et al. Safety risk assessment of electric power operation site based on variable precision rough set
CN115641607A (en) Method, device, equipment and storage medium for detecting wearing behavior of power construction site operator
CN115690496A (en) Real-time regional intrusion detection method based on YOLOv5

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination