CN110135345A - Activity recognition method, apparatus, equipment and storage medium based on deep learning - Google Patents

Activity recognition method, apparatus, equipment and storage medium based on deep learning Download PDF

Info

Publication number
CN110135345A
Authority
CN
China
Prior art keywords
information
activity recognition
deep learning
static
walking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201910405370.6A
Other languages
Chinese (zh)
Inventor
刘江锋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Wuhan Vertically And Horizontally Wisdom City Ltd By Share Ltd
Original Assignee
Wuhan Vertically And Horizontally Wisdom City Ltd By Share Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Wuhan Vertically And Horizontally Wisdom City Ltd By Share Ltd filed Critical Wuhan Vertically And Horizontally Wisdom City Ltd By Share Ltd
Priority to CN201910405370.6A priority Critical patent/CN110135345A/en
Publication of CN110135345A publication Critical patent/CN110135345A/en
Pending legal-status Critical Current

Links

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V40/00 Recognition of biometric, human-related or animal-related patterns in image or video data
    • G06V40/20 Movements or behaviour, e.g. gesture recognition
    • G PHYSICS
    • G08 SIGNALLING
    • G08G TRAFFIC CONTROL SYSTEMS
    • G08G1/00 Traffic control systems for road vehicles
    • G08G1/01 Detecting movement of traffic to be counted or controlled
    • G08G1/017 Detecting movement of traffic to be counted or controlled identifying vehicles
    • G08G1/0175 Detecting movement of traffic to be counted or controlled identifying vehicles by photographing vehicles, e.g. when violating traffic rules

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Mathematical Physics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Biomedical Technology (AREA)
  • Artificial Intelligence (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Biophysics (AREA)
  • Software Systems (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Psychiatry (AREA)
  • Social Psychology (AREA)
  • Human Computer Interaction (AREA)
  • Multimedia (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses an activity recognition method, apparatus, device, and storage medium based on deep learning. The method comprises: collecting the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on; extracting current static image information and current optical flow image information from the walking behavior information; performing activity recognition on the current static image information through a deep-learning static image activity recognition model to obtain a static activity recognition result; performing activity recognition on the current optical flow image information through a deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result; and performing activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result, thereby improving activity recognition accuracy.

Description

Activity recognition method, apparatus, equipment and storage medium based on deep learning
Technical field
The present invention relates to the field of computer technology, and more particularly to an activity recognition method, apparatus, device, and storage medium based on deep learning.
Background technique
At present, traffic management systems mainly rely on video-based monitoring and enforcement for motor vehicle violations, while there are still no effective monitoring and enforcement measures for pedestrians who violate traffic rules. As intelligent traffic management becomes more specialized and refined, traffic police will gradually apply dedicated technology to targeted enforcement of violations such as pedestrians running red lights. However, activity recognition performed on pedestrians during image capture still suffers from large errors.
Summary of the invention
It is a primary object of the present invention to provide an activity recognition method, apparatus, device, and storage medium based on deep learning, aiming to improve the accuracy of pedestrian behavior recognition.
To achieve the above object, the present invention provides an activity recognition method based on deep learning, the activity recognition method based on deep learning comprising the following steps:
collecting the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on;
extracting current static image information and current optical flow image information from the walking behavior information;
performing activity recognition on the current static image information through a deep-learning static image activity recognition model to obtain a static activity recognition result;
performing activity recognition on the current optical flow image information through a deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result;
performing activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result.
Preferably, before collecting the walking behavior information of pedestrians on the preset road section within the preset time range before the red light turns on, the method further comprises:
establishing a connection with a traffic light control device, and obtaining in real time, once connected, operation information of the current traffic light transmitted by the traffic light control device;
executing, according to the operation information, the step of collecting the walking behavior information of pedestrians on the preset road section within the preset time range before the red light turns on.
Preferably, before extracting the current static image information and current optical flow image information from the walking behavior information, the method further comprises:
obtaining the walking speed information of the pedestrian;
comparing the walking speed information with a preset speed threshold, and obtaining abnormal walking speed information among the pedestrians according to the comparison result;
adjusting the walking behavior information according to target behavior information corresponding to the abnormal walking speed information;
correspondingly, extracting the current static image information and current optical flow image information from the walking behavior information comprises:
extracting the current static image information and current optical flow image information from the adjusted walking behavior information.
Preferably, extracting the current static image information and current optical flow image information from the adjusted walking behavior information comprises:
extracting the three-primary-color (RGB) image of each frame in the adjusted walking behavior information, and taking the RGB image as the current static image information;
extracting displacement estimation information of each pixel in the adjusted walking behavior information in the time dimension, and taking the displacement estimation information as the current optical flow image information.
Preferably, before performing activity recognition on the current static image information through the deep-learning static image activity recognition model to obtain the static activity recognition result, the method further comprises:
obtaining history static image information, extracting static feature information in the history static image information, generating multidimensional static vector information from the static feature information, and inputting the multidimensional static vector information into a convolutional neural network for training to obtain the static image activity recognition model;
and before performing activity recognition on the current optical flow image information through the deep-learning dynamic image activity recognition model to obtain the dynamic activity recognition result, the method further comprises:
obtaining historical dynamic image information, extracting dynamic feature information in the historical dynamic image information, generating multidimensional dynamic vector information from the dynamic feature information, and inputting the multidimensional dynamic vector information into a convolutional neural network for training to obtain the dynamic image activity recognition model.
Preferably, after performing activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result, the method further comprises:
fusing the static activity recognition result and the dynamic activity recognition result, and calculating the average value of the fusion result;
when the average value reaches a preset decision threshold, taking the behavior result corresponding to the fusion result as the target behavior result;
issuing an early warning when the target behavior result belongs to abnormal behavior information.
Preferably, after issuing the early warning when the target behavior result belongs to abnormal behavior information, the method further comprises:
when the target behavior result belongs to abnormal behavior information, obtaining the pedestrian information corresponding to the target behavior result;
attaching a preset abnormal label to the pedestrian information, and saving the pedestrian information with the preset abnormal label attached.
In addition, to achieve the above object, the present invention further provides an activity recognition device based on deep learning, the activity recognition device based on deep learning comprising:
a detection module, configured to collect the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on;
an extraction module, configured to extract current static image information and current optical flow image information from the walking behavior information;
a static recognition module, configured to perform activity recognition on the current static image information through a deep-learning static image activity recognition model to obtain a static activity recognition result;
a dynamic recognition module, configured to perform activity recognition on the current optical flow image information through a deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result;
an identification module, configured to perform activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result.
In addition, to achieve the above object, the present invention further provides an activity recognition apparatus based on deep learning, the activity recognition apparatus based on deep learning comprising: a memory, a processor, and an activity recognition program based on deep learning stored on the memory and executable on the processor, the activity recognition program based on deep learning being configured to implement the steps of the activity recognition method based on deep learning as described above.
In addition, to achieve the above object, the present invention further provides a storage medium on which an activity recognition program based on deep learning is stored, and when executed by a processor, the activity recognition program based on deep learning implements the steps of the activity recognition method based on deep learning as described above.
In the activity recognition method based on deep learning proposed by the present invention, the walking behavior information of pedestrians on a preset road section is collected within a preset time range before the red light turns on; the current static image information and current optical flow image information are extracted from the walking behavior information; activity recognition is performed on the current static image information through the deep-learning static image activity recognition model to obtain a static activity recognition result; activity recognition is performed on the current optical flow image information through the deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result; and activity recognition is performed on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result. In this way, the behavior information of pedestrians is recognized by deep learning, and an advance judgment is made as to whether a pedestrian is about to run the red light, achieving the purpose of improving activity recognition accuracy.
Detailed description of the invention
Fig. 1 is a schematic structural diagram of the device in the hardware running environment according to an embodiment of the present invention;
Fig. 2 is a flow diagram of a first embodiment of the activity recognition method based on deep learning according to the present invention;
Fig. 3 is a schematic structural diagram of the pedestrian recognition system in an embodiment of the activity recognition method based on deep learning according to the present invention;
Fig. 4 is a schematic diagram of pedestrian behavior capture in an embodiment of the activity recognition method based on deep learning according to the present invention;
Fig. 5 is a flow diagram of a second embodiment of the activity recognition method based on deep learning according to the present invention;
Fig. 6 is a flow diagram of a third embodiment of the activity recognition method based on deep learning according to the present invention;
Fig. 7 is a functional block diagram of a first embodiment of the activity recognition device based on deep learning according to the present invention.
The realization of the objects, functional characteristics, and advantages of the present invention will be further described with reference to the accompanying drawings in conjunction with the embodiments.
Specific embodiment
It should be understood that the specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.
Referring to Fig. 1, Fig. 1 is a schematic structural diagram of the activity recognition device based on deep learning in the hardware running environment according to an embodiment of the present invention.
As shown in Fig. 1, the device may include: a processor 1001 (such as a CPU), a communication bus 1002, a user interface 1003, a network interface 1004, and a memory 1005. The communication bus 1002 is used to realize connection and communication between these components. The user interface 1003 may include a display screen and an input unit such as keys, and may optionally further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (such as a Wi-Fi interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory such as a magnetic disk memory. The memory 1005 may optionally be a storage device independent of the aforementioned processor 1001.
It will be understood by those skilled in the art that the device structure shown in Fig. 1 does not constitute a limitation of the device, which may include more or fewer components than illustrated, combine certain components, or use a different component layout.
As shown in Fig. 1, the memory 1005, as a kind of storage medium, may include an operating system, a network communication module, a user interface module, and an activity recognition program based on deep learning.
In the device shown in Fig. 1, the network interface 1004 is mainly used to connect to an external network and exchange data with other network devices; the user interface 1003 is mainly used to connect user equipment and exchange data with the device. The present device calls the activity recognition program based on deep learning stored in the memory 1005 through the processor 1001, and executes the implementation method of activity recognition based on deep learning provided by the embodiments of the present invention.
Based on the above hardware structure, embodiments of the activity recognition method based on deep learning of the present invention are proposed.
Referring to Fig. 2, Fig. 2 is a flow diagram of a first embodiment of the activity recognition method based on deep learning according to the present invention.
In the first embodiment, the activity recognition method based on deep learning comprises the following steps:
Step S10: collecting the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on.
It should be noted that the executing subject of this embodiment is an activity recognition device based on deep learning arranged on the road, and may also be another terminal device, which is not limited in this embodiment; in this embodiment, the activity recognition device based on deep learning is taken as an example. The preset time may be within 30 s before the red light turns on, or may be other time parameter information, which is not limited in this embodiment. By collecting the behavior information of pedestrians before the red light turns on and making an advance judgment according to the behavior information, pedestrians who are about to run the red light can be predicted more effectively and an early warning can be issued, thereby realizing deep learning of pedestrian information.
In this embodiment, an activity recognition system based on deep learning is provided. As shown in Fig. 3, the system is composed of two parts, a front-end acquisition subsystem and a back-end management subsystem, i.e. a front-end acquisition system and a back-end processing system, which are used to automatically capture, record, transmit, and process traffic violations such as pedestrians at intersections. At the same time, the system also has the function of recording passing non-motor-vehicle information in real time.
In a specific implementation, the front-end acquisition system is responsible for completing the video capture, signaling, processing, storage, and upload of front-end data, and is mainly composed of components such as a video capture unit, a traffic signal unit, and a control execution unit, where the video capture unit and the traffic signal unit are each connected to the control execution unit at the intersection, and traffic violation information is transmitted over the network. The back-end management system is responsible for the aggregation, processing, storage, application, management, and sharing of the data collected in the region, and mainly includes an information acquisition unit, an information processing unit, and an information storage unit connected in sequence.
In order to obtain the behavior information of pedestrians, as shown in Fig. 4, A denotes the control execution unit, B denotes the traffic light unit, and C denotes the video capture unit. The behavior information of pedestrians is collected by the video capture unit, the traffic light unit acquires the operating condition of the current traffic light, and the control execution unit is used for data processing.
Step S20 extracts current static image information and current light stream image information in the walking behavioural information.
It should be noted that behavior is made of two aspects of display form and dynamic change, in order to make full use of behavior Two kinds of information, i.e. the variation of the form and behavioral formation of behavior, the Activity recognition method based on 2-stream, in this method Under, video will turn to two kinds of data modes, i.e. red pigment Green Blue image stream and light stream image stream.
Step S30 carries out the current static image information by the still image Activity recognition model of deep learning Activity recognition obtains static behavior recognition result.
Step S40 carries out the current light stream image information by the dynamic image Activity recognition model of deep learning Activity recognition obtains dynamic behaviour recognition result.
In the present embodiment, by designing two kinds of depth convolutional networks, pre-training is done on ImageNet image data set Afterwards, it is finely adjusted training in the tristimulus image data of video set and light stream image data respectively, this makes a network special The morphological feature of door learning behavior, the dynamic change characterization of the special learning behavior of another network.
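The patent does not provide code, but the two-network design just described can be sketched roughly as follows (a minimal sketch, assuming a PyTorch/torchvision environment; the ResNet-18 backbone, the number of stacked flow frames, and all names are illustrative assumptions, not the patented implementation).

```python
import torch.nn as nn
from torchvision import models

def build_two_stream_models(num_classes: int, flow_stack: int = 10):
    """Rough sketch of the two-stream design: one ImageNet-pretrained CNN
    fine-tuned on RGB frames, another fine-tuned on stacked optical flow."""
    # Spatial (static image) stream: standard 3-channel RGB input.
    spatial = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    spatial.fc = nn.Linear(spatial.fc.in_features, num_classes)

    # Temporal (optical flow) stream: input is 2 * flow_stack channels
    # (x and y displacement for each of the stacked flow frames).
    temporal = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
    temporal.conv1 = nn.Conv2d(2 * flow_stack, 64, kernel_size=7,
                               stride=2, padding=3, bias=False)
    temporal.fc = nn.Linear(temporal.fc.in_features, num_classes)
    return spatial, temporal
```

With this split, one network learns the appearance of the behavior from RGB frames while the other learns its motion from stacked flow fields, matching the description above.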
Step S50: performing activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result.
In a specific implementation, when recognizing a behavior, the judgments of the two parallel models are combined as the final recognition response: the static activity recognition result and the dynamic activity recognition result are fused, a target activity recognition result is obtained from the fusion result, and an early warning is issued when the target behavior result belongs to abnormal behavior information.
It should be noted that the early warning may be issued by means of sound or in other ways, which is not limited in this embodiment; in this embodiment, the early warning is issued by means of sound.
Through the above scheme, this embodiment collects the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on; extracts the current static image information and current optical flow image information from the walking behavior information; performs activity recognition on the current static image information through the deep-learning static image activity recognition model to obtain a static activity recognition result; performs activity recognition on the current optical flow image information through the deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result; and performs activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result. In this way, the behavior information of pedestrians is recognized by deep learning and an advance judgment is made as to whether a pedestrian is about to run the red light, achieving the purpose of improving activity recognition accuracy.
In an embodiment, as shown in Fig. 5, a second embodiment of the activity recognition method based on deep learning of the present invention is proposed on the basis of the first embodiment. In this embodiment, the current road section is equipped with a traffic light signal detector, and before step S10 the method further comprises:
establishing a connection with the traffic light control device, and obtaining in real time, once connected, the operation information of the current traffic light transmitted by the traffic light control device.
It should be noted that the traffic light control device may be a central controller. By connecting with the master controller, the display state of the traffic light can be obtained, so that the travel information of pedestrians can be collected before the red light turns on, and the red-light-running behavior of pedestrians can be effectively predicted before the red light turns on.
Step S10 is then executed according to the operation information.
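No controller protocol is specified in the patent; purely as an illustration of the trigger logic just described, a hypothetical polling loop might look like the following, where `poll_light_state`, its fields, and the camera `record` call are invented placeholders.

```python
import time

RED_LEAD_TIME_S = 30  # assumed preset time range before the red light turns on

def wait_and_capture(controller, camera):
    """Hypothetical loop: poll the traffic light controller and start the
    video capture unit when the red phase is RED_LEAD_TIME_S away."""
    while True:
        state = controller.poll_light_state()   # assumed controller API
        if state.phase == "green" and state.seconds_to_red <= RED_LEAD_TIME_S:
            # Record pedestrians on the preset road section until the light turns red.
            return camera.record(duration_s=state.seconds_to_red)
        time.sleep(0.5)
```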
In an embodiment, before step S20, the method further comprises:
Step S201: obtaining the walking speed information of the pedestrian.
In this embodiment, before the behavior information of pedestrians is collected, the collection targets need to be preliminarily screened, i.e. pedestrians are screened by their walking speed so as to filter out pedestrians who walk faster. Such pedestrians generally have a higher probability of running the red light. The walking behavior information of the pedestrians who are likely to run the red light is then obtained, achieving the purpose of improving recognition accuracy.
Step S202: comparing the walking speed information with a preset speed threshold, and obtaining abnormal walking speed information among the pedestrians according to the comparison result.
It should be noted that the preset speed threshold may be 5 m/s or other parameter information, which is not limited in this embodiment; in this embodiment, 5 m/s is taken as an example. The walking speed information is compared with 5 m/s. If the walking speed information is greater than 5 m/s, the pedestrian is walking fast and can be treated as an abnormally walking pedestrian. If the walking speed information is less than 5 m/s, the pedestrian is not walking too fast and can be treated as a normally walking pedestrian, whose behavior information does not need to be collected.
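A minimal sketch of this speed screening step, assuming pedestrian records that already carry an estimated walking speed; only the 5 m/s threshold comes from the embodiment, the record layout is an assumption.

```python
SPEED_THRESHOLD_M_S = 5.0  # preset speed threshold from the embodiment

def filter_abnormal_walkers(pedestrians):
    """Keep only pedestrians whose estimated walking speed exceeds the threshold."""
    # Each record is assumed to look like {"id": ..., "speed_m_s": ..., "frames": [...]}.
    return [p for p in pedestrians if p["speed_m_s"] > SPEED_THRESHOLD_M_S]
```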
Step S203: adjusting the walking behavior information according to the target behavior information corresponding to the abnormal walking speed information.
In a specific implementation, the abnormally walking pedestrians are obtained by the speed judgment, and only the walking behavior information of the abnormally walking pedestrians needs to be retained in the walking behavior information; the walking behavior information of normally walking pedestrians does not need to be retained, which reduces the amount of collected data and thus reduces the data processing load.
Correspondingly, step S20 comprises:
Step S204: extracting the current static image information and current optical flow image information from the adjusted walking behavior information.
Further, step S204 comprises:
extracting the three-primary-color (RGB) image of each frame in the adjusted walking behavior information, and taking the RGB image as the current static image information; extracting displacement estimation information of each pixel in the adjusted walking behavior information in the time dimension, and taking the displacement estimation information as the current optical flow image information.
It can be understood that the characteristic information of the behavior should be extracted from both kinds of raw information. The appearance of the behavior is determined by the red-green-blue (RGB) image of each frame, while the change of form can be represented by motion estimation. Optical flow embodies the displacement estimation of each pixel of the video image in the time dimension and reflects the motion of the video content, thereby realizing feature extraction for the static image information and the dynamic image information.
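As a rough sketch of this extraction step (not taken from the patent), the RGB stream can be kept as-is and dense per-pixel displacement between consecutive frames can be estimated with OpenCV's Farneback optical flow; the parameter values shown are common defaults, not values from the patent.

```python
import cv2

def extract_streams(frames):
    """Return the RGB frames (static stream) and per-pixel displacement
    fields between consecutive frames (optical flow stream)."""
    rgb_stream = frames  # each frame already is the three-primary-color image
    flow_stream = []
    prev = cv2.cvtColor(frames[0], cv2.COLOR_BGR2GRAY)
    for frame in frames[1:]:
        curr = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # Dense Farneback flow: an (H, W, 2) field of x/y displacement per pixel.
        flow = cv2.calcOpticalFlowFarneback(prev, curr, None,
                                            0.5, 3, 15, 3, 5, 1.2, 0)
        flow_stream.append(flow)
        prev = curr
    return rgb_stream, flow_stream
```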
In the scheme provided by this embodiment, the current static image information and current optical flow image information are extracted from the adjusted walking behavior information, thereby realizing recognition of pedestrian behavior information from both the static and the dynamic aspects.
In an embodiment, as shown in Fig. 6, a third embodiment of the activity recognition method based on deep learning of the present invention is proposed on the basis of the first or second embodiment. This embodiment is described on the basis of the first embodiment. Before step S30, the method further comprises:
In order to predict pedestrian behavior based on deep learning, history static image information may be obtained, static feature information in the history static image information is extracted, the static feature information is generated into multidimensional static vector information, and the multidimensional static vector information is input into a convolutional neural network for training, obtaining the static image activity recognition model. In this way, training is performed based on the convolutional neural network to obtain the static image activity recognition model.
Because the recognition of static images differs from that of dynamic images, this embodiment designs separate models for predicting on static images and on dynamic images. Before recognition is performed for dynamic images, historical dynamic image information is obtained, dynamic feature information in the historical dynamic image information is extracted, the dynamic feature information is generated into multidimensional dynamic vector information, and the multidimensional dynamic vector information is input into a convolutional neural network for training, obtaining the dynamic image activity recognition model. In this way, training is performed based on the convolutional neural network to obtain the dynamic image activity recognition model.
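A minimal sketch of how the two training procedures just described might be run, reusing the hypothetical `build_two_stream_models` helper sketched earlier; the optimizer, learning rate, and data loaders are illustrative assumptions.

```python
import torch
import torch.nn as nn

def train_model(model, loader, epochs=10, lr=1e-4, device="cuda"):
    """Fine-tune one stream (static or dynamic) on labeled history data."""
    model = model.to(device)
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(epochs):
        for inputs, labels in loader:  # RGB images or stacked flow fields
            inputs, labels = inputs.to(device), labels.to(device)
            optimizer.zero_grad()
            loss = criterion(model(inputs), labels)
            loss.backward()
            optimizer.step()
    return model

# Hypothetical usage: one loader of history RGB frames, one of history flow stacks.
# static_model = train_model(spatial, rgb_loader)
# dynamic_model = train_model(temporal, flow_loader)
```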
It should be noted that data collection may be performed by the video capture unit on the behavior information of captured jaywalking pedestrians, and learning is performed on the collected sample information, thereby obtaining the history static image information and the historical dynamic image information.
Further, after step S50, the method further comprises:
Step S501: fusing the static activity recognition result and the dynamic activity recognition result, and calculating the average value of the fusion result.
It should be noted that the static activity recognition result and the dynamic activity recognition result may be the recognized similarity information and the corresponding behavior information, for example a similarity of 60%. The similarity information of the static activity recognition result and the dynamic activity recognition result may be averaged, thereby obtaining the final recognition result.
Step S502: when the average value reaches a preset decision threshold, taking the behavior result corresponding to the fusion result as the target behavior result.
It can be understood that the preset decision threshold may be 70% or other parameter information, which is not limited in this embodiment; in this embodiment, 70% is taken as an example.
In a specific implementation, when the average value reaches 70%, the behavior result corresponding to the fusion result is taken as the target behavior result; for example, if the activity recognition result corresponding to the average value is the behavior of making a phone call, the recognized behavior of making a phone call is taken as the target behavior result. Correspondingly, when the average value does not reach 70%, the behavior result corresponding to the fusion result is recognized again until the obtained average value reaches 70% or more, thereby realizing the activity recognition of the pedestrian.
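A minimal sketch of the fusion and decision step, assuming each stream returns a per-class similarity vector; the 70% threshold comes from the embodiment, everything else is illustrative.

```python
import numpy as np

DECISION_THRESHOLD = 0.70  # preset decision threshold from the embodiment

def fuse_and_decide(static_scores, dynamic_scores, class_names):
    """Average the two streams' similarity scores and apply the threshold."""
    fused = (np.asarray(static_scores) + np.asarray(dynamic_scores)) / 2.0
    best = int(np.argmax(fused))
    if fused[best] >= DECISION_THRESHOLD:
        return class_names[best]   # target behavior result
    return None                    # below threshold: recognize again
```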
Step S503: issuing an early warning when the target behavior result belongs to abnormal behavior information.
Further, after step S503, the method further comprises:
In order to record pedestrian behavior information, when the target behavior result belongs to abnormal behavior information, the pedestrian information corresponding to the target behavior result is obtained, a preset abnormal label is attached to the pedestrian information, and the labeled pedestrian information is saved, thereby realizing identification of the pedestrian. When the same pedestrian is encountered next time, he or she can be given special attention; a preset level is assigned through the label information of the pedestrian, so that the red-light-running probability of the pedestrian can be obtained more quickly.
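The record keeping just described could be sketched as follows; the record fields and the storage call are assumptions made for illustration.

```python
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class PedestrianRecord:
    pedestrian_id: str
    behavior: str
    abnormal_label: str = "abnormal"  # preset abnormal label
    recorded_at: str = field(default_factory=lambda: datetime.now().isoformat())

def save_abnormal_pedestrian(store, pedestrian_id, behavior):
    """Attach the preset abnormal label and persist the record (store is assumed)."""
    record = PedestrianRecord(pedestrian_id=pedestrian_id, behavior=behavior)
    store.append(record)  # e.g. a list, database table, or file writer
    return record
```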
In the scheme provided by this embodiment, a preset abnormal label is attached to the information of pedestrians with abnormal behavior, which realizes the recording of pedestrians; when pedestrian activity recognition is performed, quick analysis can be made through the label information, achieving the purpose of improving data efficiency.
The present invention further provides an activity recognition device based on deep learning.
Referring to Fig. 7, Fig. 7 is a functional block diagram of a first embodiment of the activity recognition device based on deep learning according to the present invention.
In the first embodiment of the activity recognition device based on deep learning of the present invention, the activity recognition device based on deep learning comprises:
a detection module 10, configured to collect the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on.
It should be noted that the preset time range may be within 30 s before the red light turns on, or may be other time parameter information, which is not limited in this embodiment. By collecting the behavior information of pedestrians before the red light turns on and making an advance judgment according to the behavior information, pedestrians who are about to run the red light can be predicted more effectively and an early warning can be issued, thereby realizing deep learning of pedestrian information.
In this embodiment, an activity recognition system based on deep learning is provided. As shown in Fig. 3, the system is composed of two parts, a front-end acquisition subsystem and a back-end management subsystem, i.e. a front-end acquisition system and a back-end processing system, which are used to automatically capture, record, transmit, and process traffic violations such as pedestrians at intersections. At the same time, the system also has the function of recording passing non-motor-vehicle information in real time.
In a specific implementation, the front-end acquisition system is responsible for completing the video capture, signaling, processing, storage, and upload of front-end data, and is mainly composed of components such as a video capture unit, a traffic signal unit, and a control execution unit, where the video capture unit and the traffic signal unit are each connected to the control execution unit at the intersection, and traffic violation information is transmitted over the network. The back-end management system is responsible for the aggregation, processing, storage, application, management, and sharing of the data collected in the region, and mainly includes an information acquisition unit, an information processing unit, and an information storage unit connected in sequence.
In order to obtain the behavior information of pedestrians, as shown in Fig. 4, the behavior information of pedestrians is collected by the video capture unit, the traffic light unit acquires the operating condition of the current traffic light, and the control execution unit is used for data processing.
an extraction module 20, configured to extract current static image information and current optical flow image information from the walking behavior information.
It should be noted that behavior consists of two aspects, appearance and dynamic change. In order to make full use of both kinds of information, i.e. the form of the behavior and the change of the behavior form, a two-stream activity recognition method is used. Under this method, the video is converted into two data modes, an RGB (red-green-blue) image stream and an optical flow image stream.
a static recognition module 30, configured to perform activity recognition on the current static image information through the deep-learning static image activity recognition model to obtain a static activity recognition result.
a dynamic recognition module 40, configured to perform activity recognition on the current optical flow image information through the deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result.
In this embodiment, two deep convolutional networks are designed. After pre-training on the ImageNet image data set, they are fine-tuned on the RGB image data and the optical flow image data of the video set respectively, so that one network specializes in learning the morphological features of the behavior and the other network specializes in learning the dynamic change characteristics of the behavior.
an identification module 50, configured to perform activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result.
In a specific implementation, when recognizing a behavior, the judgments of the two parallel models are combined as the final recognition response: the static activity recognition result and the dynamic activity recognition result are fused, a target activity recognition result is obtained from the fusion result, and an early warning is issued when the target behavior result belongs to abnormal behavior information.
It should be noted that the early warning may be issued by means of sound or in other ways, which is not limited in this embodiment; in this embodiment, the early warning is issued by means of sound.
Through the above scheme, this embodiment collects the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on; extracts the current static image information and current optical flow image information from the walking behavior information; performs activity recognition on the current static image information through the deep-learning static image activity recognition model to obtain a static activity recognition result; performs activity recognition on the current optical flow image information through the deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result; and performs activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result. In this way, the behavior information of pedestrians is recognized by deep learning and an advance judgment is made as to whether a pedestrian is about to run the red light, achieving the purpose of improving activity recognition accuracy.
In addition, an embodiment of the present invention further provides a storage medium on which an activity recognition program based on deep learning is stored, and when executed by a processor, the activity recognition program based on deep learning implements the steps of the activity recognition method based on deep learning as described above.
It should be noted that, in this document, the terms "include", "comprise", or any other variant thereof are intended to cover non-exclusive inclusion, so that a process, method, article, or device that includes a series of elements includes not only those elements but also other elements not explicitly listed, or elements inherent to such a process, method, article, or device. In the absence of further restrictions, an element limited by the phrase "including a ..." does not exclude the existence of other identical elements in the process, method, article, or device that includes that element.
The serial numbers of the above embodiments of the present invention are for description only and do not represent the relative merits of the embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the methods of the above embodiments can be implemented by means of software plus a necessary general hardware platform, and of course also by hardware, but in many cases the former is the better implementation. Based on this understanding, the technical solution of the present invention, in essence or in the part contributing to the prior art, can be embodied in the form of a software product. The computer software product is stored in a computer-readable storage medium (such as ROM/RAM, magnetic disk, or optical disk) as described above, and includes several instructions for causing an intelligent terminal device (which may be a mobile phone, a computer, a terminal device, an air conditioner, a network terminal device, or the like) to execute the methods described in the embodiments of the present invention.
The above is only a preferred embodiment of the present invention and is not intended to limit the scope of the present invention. Any equivalent structure or equivalent process transformation made by using the contents of the specification and drawings of the present invention, applied directly or indirectly in other related technical fields, is likewise included within the scope of patent protection of the present invention.

Claims (10)

1. An activity recognition method based on deep learning, wherein the activity recognition method based on deep learning comprises:
collecting the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on;
extracting current static image information and current optical flow image information from the walking behavior information;
performing activity recognition on the current static image information through a deep-learning static image activity recognition model to obtain a static activity recognition result;
performing activity recognition on the current optical flow image information through a deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result;
performing activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result.
2. The activity recognition method based on deep learning according to claim 1, wherein before collecting the walking behavior information of pedestrians on the preset road section within the preset time range before the red light turns on, the method further comprises:
establishing a connection with a traffic light control device, and obtaining in real time, once connected, operation information of the current traffic light transmitted by the traffic light control device;
executing, according to the operation information, the step of collecting the walking behavior information of pedestrians on the preset road section within the preset time range before the red light turns on.
3. The activity recognition method based on deep learning according to claim 1, wherein before extracting the current static image information and current optical flow image information from the walking behavior information, the method further comprises:
obtaining the walking speed information of the pedestrian;
comparing the walking speed information with a preset speed threshold, and obtaining abnormal walking speed information among the pedestrians according to the comparison result;
adjusting the walking behavior information according to target behavior information corresponding to the abnormal walking speed information;
correspondingly, extracting the current static image information and current optical flow image information from the walking behavior information comprises:
extracting the current static image information and current optical flow image information from the adjusted walking behavior information.
4. The activity recognition method based on deep learning according to claim 3, wherein extracting the current static image information and current optical flow image information from the adjusted walking behavior information comprises:
extracting the three-primary-color (RGB) image of each frame in the adjusted walking behavior information, and taking the RGB image as the current static image information;
extracting displacement estimation information of each pixel in the adjusted walking behavior information in the time dimension, and taking the displacement estimation information as the current optical flow image information.
5. The activity recognition method based on deep learning according to any one of claims 1 to 4, wherein before performing activity recognition on the current static image information through the deep-learning static image activity recognition model to obtain the static activity recognition result, the method further comprises:
obtaining history static image information, extracting static feature information in the history static image information, generating multidimensional static vector information from the static feature information, and inputting the multidimensional static vector information into a convolutional neural network for training to obtain the static image activity recognition model;
and before performing activity recognition on the current optical flow image information through the deep-learning dynamic image activity recognition model to obtain the dynamic activity recognition result, the method further comprises:
obtaining historical dynamic image information, extracting dynamic feature information in the historical dynamic image information, generating multidimensional dynamic vector information from the dynamic feature information, and inputting the multidimensional dynamic vector information into a convolutional neural network for training to obtain the dynamic image activity recognition model.
6. The activity recognition method based on deep learning according to any one of claims 1 to 4, wherein after performing activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result, the method further comprises:
fusing the static activity recognition result and the dynamic activity recognition result, and calculating the average value of the fusion result;
when the average value reaches a preset decision threshold, taking the behavior result corresponding to the fusion result as the target behavior result;
issuing an early warning when the target behavior result belongs to abnormal behavior information.
7. The activity recognition method based on deep learning according to claim 6, wherein after issuing the early warning when the target behavior result belongs to abnormal behavior information, the method further comprises:
when the target behavior result belongs to abnormal behavior information, obtaining the pedestrian information corresponding to the target behavior result;
attaching a preset abnormal label to the pedestrian information, and saving the pedestrian information with the preset abnormal label attached.
8. An activity recognition device based on deep learning, wherein the activity recognition device based on deep learning comprises:
a detection module, configured to collect the walking behavior information of pedestrians on a preset road section within a preset time range before the red light turns on;
an extraction module, configured to extract current static image information and current optical flow image information from the walking behavior information;
a static recognition module, configured to perform activity recognition on the current static image information through a deep-learning static image activity recognition model to obtain a static activity recognition result;
a dynamic recognition module, configured to perform activity recognition on the current optical flow image information through a deep-learning dynamic image activity recognition model to obtain a dynamic activity recognition result;
an identification module, configured to perform activity recognition on the walking behavior information according to the static activity recognition result and the dynamic activity recognition result.
9. An activity recognition apparatus based on deep learning, wherein the activity recognition apparatus based on deep learning comprises: a memory, a processor, and an activity recognition program based on deep learning stored on the memory and executable on the processor, wherein the activity recognition program based on deep learning is configured to implement the steps of the activity recognition method based on deep learning according to any one of claims 1 to 7.
10. A storage medium, wherein an activity recognition program based on deep learning is stored on the storage medium, and when executed by a processor, the activity recognition program based on deep learning implements the steps of the activity recognition method based on deep learning according to any one of claims 1 to 7.
CN201910405370.6A 2019-05-15 2019-05-15 Activity recognition method, apparatus, equipment and storage medium based on deep learning Pending CN110135345A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910405370.6A CN110135345A (en) 2019-05-15 2019-05-15 Activity recognition method, apparatus, equipment and storage medium based on deep learning

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910405370.6A CN110135345A (en) 2019-05-15 2019-05-15 Activity recognition method, apparatus, equipment and storage medium based on deep learning

Publications (1)

Publication Number Publication Date
CN110135345A true CN110135345A (en) 2019-08-16

Family

ID=67574251

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910405370.6A Pending CN110135345A (en) 2019-05-15 2019-05-15 Activity recognition method, apparatus, equipment and storage medium based on deep learning

Country Status (1)

Country Link
CN (1) CN110135345A (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991221A (en) * 2019-10-16 2020-04-10 合肥湛达智能科技有限公司 Dynamic traffic red light running identification method based on deep learning
CN111354024A (en) * 2020-04-10 2020-06-30 深圳市五元科技有限公司 Behavior prediction method for key target, AI server and storage medium
CN111460988A (en) * 2020-03-31 2020-07-28 国网河北省电力有限公司沧州供电分公司 Illegal behavior identification method and device
CN111523361A (en) * 2019-12-26 2020-08-11 中国科学技术大学 Human behavior recognition method
CN116309523A (en) * 2023-04-06 2023-06-23 北京拙河科技有限公司 Dynamic frame image dynamic fuzzy recognition method and device

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203012958U (en) * 2012-12-04 2013-06-19 合肥寰景信息技术有限公司 Road traffic violation behavior analysis early warning system based on motion identification
CN103366565A (en) * 2013-06-21 2013-10-23 浙江理工大学 Method and system of detecting pedestrian running red light based on Kinect
CN105809964A (en) * 2016-05-18 2016-07-27 深圳中兴力维技术有限公司 Traffic warning method and device
CN107705550A (en) * 2017-10-24 2018-02-16 努比亚技术有限公司 Traffic security early warning method of traffic control, mobile terminal and computer-readable recording medium
CN108172025A (en) * 2018-01-30 2018-06-15 东软集团股份有限公司 A kind of auxiliary driving method, device, car-mounted terminal and vehicle
CN108280435A (en) * 2018-01-25 2018-07-13 盛视科技股份有限公司 A kind of passenger's abnormal behaviour recognition methods based on human body attitude estimation
US20180218226A1 (en) * 2016-03-09 2018-08-02 Uber Technologies, Inc. Traffic signal analysis system
CN109376610A (en) * 2018-09-27 2019-02-22 南京邮电大学 Pedestrian's unsafe acts detection method in video monitoring based on image concept network
CN109558805A (en) * 2018-11-06 2019-04-02 南京邮电大学 Human bodys' response method based on multilayer depth characteristic
CN109753897A (en) * 2018-12-21 2019-05-14 西北工业大学 Based on memory unit reinforcing-time-series dynamics study Activity recognition method

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN203012958U (en) * 2012-12-04 2013-06-19 合肥寰景信息技术有限公司 Road traffic violation behavior analysis early warning system based on motion identification
CN103366565A (en) * 2013-06-21 2013-10-23 浙江理工大学 Method and system of detecting pedestrian running red light based on Kinect
US20180218226A1 (en) * 2016-03-09 2018-08-02 Uber Technologies, Inc. Traffic signal analysis system
CN105809964A (en) * 2016-05-18 2016-07-27 深圳中兴力维技术有限公司 Traffic warning method and device
CN107705550A (en) * 2017-10-24 2018-02-16 努比亚技术有限公司 Traffic security early warning method of traffic control, mobile terminal and computer-readable recording medium
CN108280435A (en) * 2018-01-25 2018-07-13 盛视科技股份有限公司 A kind of passenger's abnormal behaviour recognition methods based on human body attitude estimation
CN108172025A (en) * 2018-01-30 2018-06-15 东软集团股份有限公司 A kind of auxiliary driving method, device, car-mounted terminal and vehicle
CN109376610A (en) * 2018-09-27 2019-02-22 南京邮电大学 Pedestrian's unsafe acts detection method in video monitoring based on image concept network
CN109558805A (en) * 2018-11-06 2019-04-02 南京邮电大学 Human bodys' response method based on multilayer depth characteristic
CN109753897A (en) * 2018-12-21 2019-05-14 西北工业大学 Based on memory unit reinforcing-time-series dynamics study Activity recognition method

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
KAREN SIMONYAN ET AL.: "Two-Stream Convolutional Networks for Action Recognition in Videos", ADVANCES IN NEURAL INFORMATION PROCESSING SYSTEMS (NIPS 2014) *
YANG LINCHUAN: "Research and Implementation of Driver Behavior Recognition Technology Based on Deep Neural Networks", CHINA MASTERS' THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY *

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110991221A (en) * 2019-10-16 2020-04-10 合肥湛达智能科技有限公司 Dynamic traffic red light running identification method based on deep learning
CN110991221B (en) * 2019-10-16 2024-02-27 合肥湛达智能科技有限公司 Dynamic traffic red light running recognition method based on deep learning
CN111523361A (en) * 2019-12-26 2020-08-11 中国科学技术大学 Human behavior recognition method
CN111523361B (en) * 2019-12-26 2022-09-06 中国科学技术大学 Human behavior recognition method
CN111460988A (en) * 2020-03-31 2020-07-28 国网河北省电力有限公司沧州供电分公司 Illegal behavior identification method and device
CN111460988B (en) * 2020-03-31 2023-08-22 国网河北省电力有限公司沧州供电分公司 Illegal behavior recognition method and device
CN111354024A (en) * 2020-04-10 2020-06-30 深圳市五元科技有限公司 Behavior prediction method for key target, AI server and storage medium
CN111354024B (en) * 2020-04-10 2023-04-21 深圳市五元科技有限公司 Behavior prediction method of key target, AI server and storage medium
CN116309523A (en) * 2023-04-06 2023-06-23 北京拙河科技有限公司 Dynamic frame image dynamic fuzzy recognition method and device

Similar Documents

Publication Publication Date Title
CN110135345A (en) Activity recognition method, apparatus, equipment and storage medium based on deep learning
CN110390262B (en) Video analysis method, device, server and storage medium
CN106778583B (en) Vehicle attribute identification method and device based on convolutional neural network
CN111444848A (en) Specific scene model upgrading method and system based on federal learning
CN111274881A (en) Driving safety monitoring method and device, computer equipment and storage medium
CN104616021B (en) Traffic sign image processing method and device
CN106485927A (en) A kind of intelligent transportation violation information harvester and acquisition method
CN110619277A (en) Multi-community intelligent deployment and control method and system
CN111047874B (en) Intelligent traffic violation management method and related product
CN110096975B (en) Parking space state identification method, equipment and system
CN110533950A (en) Detection method, device, electronic equipment and the storage medium of parking stall behaviour in service
CN106971544B (en) A kind of direct method that vehicle congestion is detected using still image
CN106682601A (en) Driver violation conversation detection method based on multidimensional information characteristic fusion
CN106815574A (en) Set up detection model, detect the method and apparatus for taking mobile phone behavior
CN111160175A (en) Intelligent pedestrian violation behavior management method and related product
KR102174556B1 (en) Apparatus for monitoring image to control traffic information employing Artificial Intelligence and vehicle number
CN105243701A (en) Driving information reporting method and driving recording terminal
CN107730972A (en) The method and apparatus that video identification controls banister
CN112651293B (en) Video detection method for road illegal spreading event
CN112818839A (en) Method, device, equipment and medium for identifying violation behaviors of driver
CN114926791A (en) Method and device for detecting abnormal lane change of vehicles at intersection, storage medium and electronic equipment
CN110111577A (en) Non-motor vehicle recognition methods, device, equipment and storage medium based on big data
CN110674887A (en) End-to-end road congestion detection algorithm based on video classification
CN116071711B (en) Traffic jam condition detection method and device
CN113076852A (en) Vehicle-mounted snapshot processing system occupying bus lane based on 5G communication

Legal Events

Date Code Title Description
PB01 Publication
PB01 Publication
SE01 Entry into force of request for substantive examination
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20190816

RJ01 Rejection of invention patent application after publication