WO2021012564A1 - Video processing method and apparatus, electronic device and storage medium - Google Patents
Video processing method and apparatus, electronic device and storage medium
- Publication number: WO2021012564A1 (application PCT/CN2019/121975)
- Authority: WO (WIPO / PCT)
- Prior art keywords: feature information, feature, action recognition, target video, processing
Classifications
- G06V20/41: Scenes; scene-specific elements in video content: higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
- G06V20/42: Higher-level, semantic clustering, classification or understanding of sport video content
- G06V20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
- G06V20/49: Segmenting video sequences, i.e. computational techniques such as parsing or cutting the sequence, low-level clustering or determining units such as shots or scenes
- G06V10/764: Image or video recognition or understanding using pattern recognition or machine learning: classification, e.g. of video objects
- G06V10/7715: Feature extraction, e.g. by transforming the feature space, e.g. multi-dimensional scaling [MDS]; mappings, e.g. subspace methods
- G06V10/82: Image or video recognition or understanding using neural networks
- G06F18/214: Pattern recognition: generating training patterns; bootstrap methods, e.g. bagging or boosting
- G06F18/241: Pattern recognition: classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
- G06N3/04: Computing arrangements based on neural networks: architecture, e.g. interconnection topology
- G06N3/045: Computing arrangements based on neural networks: combinations of networks
- G06N3/08: Computing arrangements based on neural networks: learning methods
- G06T7/246: Image analysis: analysis of motion using feature-based methods, e.g. the tracking of corners or segments
- G06T2207/10016: Image acquisition modality: video; image sequence
- G06T2207/20081: Special algorithmic details: training; learning
- G06T2207/20084: Special algorithmic details: artificial neural networks [ANN]
Definitions
- The present disclosure relates to the field of computer vision technology, and in particular to a video processing method and apparatus, an electronic device, and a storage medium.
- A video is composed of multiple video frames that can record information such as actions and behaviors, and its application scenarios are diverse.
- A video not only contains a large number of frames, requiring a large amount of processing, but is also correlated with time.
- The content of multiple video frames, together with the time corresponding to each video frame, expresses information such as actions or behaviors.
- In the related art, spatio-temporal features and motion features are typically obtained through processing such as optical flow or 3D convolution.
- The present disclosure proposes a video processing method and apparatus, an electronic device, and a storage medium.
- A video processing method is provided, which includes: performing feature extraction on multiple target video frames of a video to be processed through a feature extraction network to obtain feature maps of the multiple target video frames; performing action recognition processing on the feature maps of the multiple target video frames through an M-level action recognition network to obtain action recognition features of the multiple target video frames, where M is an integer greater than or equal to 1, the action recognition processing includes spatio-temporal feature extraction processing on the feature maps of the multiple target video frames and motion feature extraction processing based on motion difference information between the feature maps of the multiple target video frames, and the action recognition features include spatio-temporal feature information and motion feature information; and determining a classification result of the video to be processed according to the action recognition features of the multiple target video frames.
- In this way, the action recognition features of the target video frames can be obtained through a multi-level action recognition network, and the classification result of the video to be processed can then be obtained, without using optical flow or 3D convolution for action recognition. The amount of calculation is reduced, processing efficiency is improved, the video to be processed can be classified online in real time, and the practicability of the video processing method is improved.
- Performing action recognition on the feature maps of the multiple target video frames through an M-level action recognition network to obtain the action recognition features of the multiple target video frames includes: processing the feature maps of the multiple target video frames through the first-level action recognition network to obtain first-level action recognition features; processing the (i-1)-th level action recognition features through the i-th level action recognition network to obtain the i-th level action recognition features, where i is an integer and 1 < i < M, and the action recognition features of each level correspond to the feature maps of the multiple target video frames respectively; and processing the (M-1)-th level action recognition features through the M-th level action recognition network to obtain the action recognition features of the multiple target video frames.
- Processing the (i-1)-th level action recognition features through the i-th level action recognition network to obtain the i-th level action recognition features includes: performing first convolution processing on the (i-1)-th level action recognition features to obtain first feature information, where the first feature information corresponds to the feature maps of the multiple target video frames respectively; performing spatio-temporal feature extraction processing on the first feature information to obtain spatio-temporal feature information; performing motion feature extraction processing on the first feature information to obtain motion feature information; and obtaining the i-th level action recognition features at least according to the spatio-temporal feature information and the motion feature information.
- Obtaining the i-th level action recognition features based on at least the spatio-temporal feature information and the motion feature information includes: obtaining the i-th level action recognition features according to the spatio-temporal feature information, the motion feature information, and the (i-1)-th level action recognition features.
- Performing spatio-temporal feature extraction processing on the first feature information to obtain spatio-temporal feature information includes: separately performing dimensional reconstruction processing on the first feature information corresponding to the feature maps of the multiple target video frames to obtain second feature information, the second feature information having a different dimension from the first feature information; performing second convolution processing on each channel of the second feature information to obtain third feature information, where the third feature information represents the temporal features of the feature maps of the multiple target video frames; performing dimensional reconstruction processing on the third feature information to obtain fourth feature information, the fourth feature information having the same dimension as the first feature information; and performing spatial feature extraction processing on the fourth feature information to obtain the spatio-temporal feature information.
- In a case where the first feature information includes multiple row vectors or column vectors, separately performing dimensional reconstruction processing on the first feature information corresponding to the feature maps of the multiple target video frames includes: splicing the multiple row vectors or column vectors of the first feature information to obtain the second feature information, where the second feature information includes one row vector or one column vector.
- In this way, the spatio-temporal information of each channel can be obtained, keeping the spatio-temporal information complete. The dimension of the first feature information can be changed through reconstruction processing, so convolution processing can be performed in a less computationally expensive manner, for example performing the second convolution processing by 1D convolution, which simplifies the calculation and improves processing efficiency.
- Performing motion feature extraction processing on the first feature information to obtain motion feature information includes: performing dimensionality reduction processing on the channels of the first feature information to obtain fifth feature information, where the fifth feature information corresponds to each target video frame of the video to be processed respectively; performing third convolution processing on the fifth feature information corresponding to the (k+1)-th target video frame and subtracting the fifth feature information corresponding to the k-th target video frame to obtain sixth feature information corresponding to the k-th target video frame, where k is an integer and 1 ≤ k < T, T is the number of target video frames and is an integer greater than 1, and the sixth feature information represents the motion difference information between the fifth feature information corresponding to the (k+1)-th target video frame and the fifth feature information corresponding to the k-th target video frame; and performing feature extraction processing on the sixth feature information corresponding to each target video frame to obtain the motion feature information.
- In this way, the motion feature information can be obtained by performing the third convolution processing on the fifth feature information and subtracting the preceding fifth feature information, which simplifies calculation and improves processing efficiency.
- Obtaining the i-th level action recognition features according to the spatio-temporal feature information, the motion feature information, and the (i-1)-th level action recognition features includes: summing the spatio-temporal feature information and the motion feature information to obtain seventh feature information; and performing fourth convolution processing on the seventh feature information and summing the result with the (i-1)-th level action recognition features to obtain the i-th level action recognition features.
- Determining the classification result of the video to be processed according to the action recognition features of the multiple target video frames includes: performing fully connected processing on the action recognition features of each target video frame to obtain classification information of each target video frame; and averaging the classification information of the target video frames to obtain the classification result of the video to be processed.
- The method further includes: determining multiple target video frames from the video to be processed.
- Determining multiple target video frames from the multiple video frames of the video to be processed includes: dividing the video to be processed into multiple video segments; and randomly determining at least one target video frame from each video segment to obtain multiple target video frames.
- In this way, the target video frames can be determined from the multiple video frames of the video to be processed and then processed, which saves computing resources and improves processing efficiency.
- The video processing method is implemented by a neural network, which includes at least the feature extraction network and the M-level action recognition network, and the method further includes: training the neural network through a sample video and a category label of the sample video.
- Training the neural network through the sample video and the category label of the sample video includes: determining multiple sample video frames from the sample video; processing the sample video frames through the neural network to determine the classification result of the sample video; determining the network loss of the neural network according to the classification result and the category label of the sample video; and adjusting the network parameters of the neural network according to the network loss.
- A video processing device is provided, including: a feature extraction module configured to perform feature extraction on multiple target video frames of a video to be processed through a feature extraction network to obtain feature maps of the multiple target video frames; an action recognition module configured to perform action recognition processing on the feature maps of the multiple target video frames through an M-level action recognition network to obtain action recognition features of the multiple target video frames, where M is an integer greater than or equal to 1, the action recognition processing includes spatio-temporal feature extraction processing on the feature maps of the multiple target video frames and motion feature extraction processing based on motion difference information between the feature maps of the multiple target video frames, and the action recognition features include spatio-temporal feature information and motion feature information; and a classification module configured to determine the classification result of the video to be processed according to the action recognition features of the multiple target video frames.
- The action recognition module is further configured to: process the feature maps of the multiple target video frames through the first-level action recognition network to obtain first-level action recognition features; process the (i-1)-th level action recognition features through the i-th level action recognition network to obtain the i-th level action recognition features, where i is an integer and 1 < i < M, and the action recognition features of each level correspond to the feature maps of the multiple target video frames respectively; and process the (M-1)-th level action recognition features through the M-th level action recognition network to obtain the action recognition features of the multiple target video frames.
- The action recognition module is further configured to: perform first convolution processing on the (i-1)-th level action recognition features to obtain first feature information, where the first feature information corresponds to the feature maps of the multiple target video frames respectively; perform spatio-temporal feature extraction processing on the first feature information to obtain spatio-temporal feature information; perform motion feature extraction processing on the first feature information to obtain motion feature information; and obtain the i-th level action recognition features at least according to the spatio-temporal feature information and the motion feature information.
- The action recognition module is further configured to obtain the i-th level action recognition features according to the spatio-temporal feature information, the motion feature information, and the (i-1)-th level action recognition features.
- The action recognition module is further configured to: perform dimensional reconstruction processing on the first feature information corresponding to the feature maps of the multiple target video frames to obtain second feature information, the second feature information having a different dimension from the first feature information; perform second convolution processing on each channel of the second feature information to obtain third feature information, where the third feature information represents the temporal features of the feature maps of the multiple target video frames; perform dimensional reconstruction processing on the third feature information to obtain fourth feature information, the fourth feature information having the same dimension as the first feature information; and perform spatial feature extraction processing on the fourth feature information to obtain the spatio-temporal feature information.
- In a case where the first feature information includes multiple row vectors or column vectors, the action recognition module is further configured to splice the multiple row vectors or column vectors of the first feature information to obtain the second feature information, where the second feature information includes one row vector or one column vector.
- The action recognition module is further configured to: perform dimensionality reduction processing on the channels of the first feature information to obtain fifth feature information, where the fifth feature information corresponds to each target video frame of the video to be processed respectively; perform third convolution processing on the fifth feature information corresponding to the (k+1)-th target video frame and subtract the fifth feature information corresponding to the k-th target video frame to obtain sixth feature information corresponding to the k-th target video frame, where k is an integer and 1 ≤ k < T, T is the number of target video frames and is an integer greater than 1, and the sixth feature information represents the motion difference information between the fifth feature information corresponding to the (k+1)-th target video frame and the fifth feature information corresponding to the k-th target video frame; and perform feature extraction processing on the sixth feature information corresponding to each target video frame to obtain the motion feature information.
- The action recognition module is further configured to: sum the spatio-temporal feature information and the motion feature information to obtain seventh feature information; and perform fourth convolution processing on the seventh feature information and sum the result with the (i-1)-th level action recognition features to obtain the i-th level action recognition features.
- The classification module is further configured to: perform fully connected processing on the action recognition features of each target video frame to obtain classification information of each target video frame; and average the classification information of the target video frames to obtain the classification result of the video to be processed.
- The device further includes: a determining module configured to determine multiple target video frames from the video to be processed.
- The determining module is further configured to: divide the video to be processed into multiple video segments; and randomly determine at least one target video frame from each video segment to obtain multiple target video frames.
- The video processing method is implemented by a neural network including at least the feature extraction network and the M-level action recognition network, and the device further includes: a training module configured to train the neural network through a sample video and a category label of the sample video.
- The training module is further configured to: determine multiple sample video frames from the sample video; process the sample video frames through the neural network to determine the classification result of the sample video; determine the network loss of the neural network according to the classification result and the category label of the sample video; and adjust the network parameters of the neural network according to the network loss.
- An electronic device is provided, including: a processor; and a memory for storing instructions executable by the processor, wherein the processor is configured to execute the above video processing method.
- A computer-readable storage medium is provided, having computer program instructions stored thereon; when the computer program instructions are executed by a processor, the above video processing method is implemented.
- A computer program is provided, including computer-readable code; when the computer-readable code runs in an electronic device, a processor in the electronic device executes the above video processing method.
- Fig. 1 shows a flowchart of a video processing method according to an embodiment of the present disclosure;
- Fig. 2 shows a flowchart of a video processing method according to an embodiment of the present disclosure;
- Fig. 3 shows a schematic diagram of an action recognition network according to an embodiment of the present disclosure;
- Fig. 4 shows a schematic diagram of spatio-temporal feature extraction processing according to an embodiment of the present disclosure;
- Fig. 5 shows a schematic diagram of motion feature extraction processing according to an embodiment of the present disclosure;
- Fig. 6 shows a flowchart of a video processing method according to an embodiment of the present disclosure;
- Fig. 7 shows a schematic diagram of an application of a video processing method according to an embodiment of the present disclosure;
- Fig. 8 shows a block diagram of a video processing device according to an embodiment of the present disclosure;
- Fig. 9 shows a block diagram of a video processing device according to an embodiment of the present disclosure;
- Fig. 10 shows a block diagram of an electronic device according to an embodiment of the present disclosure;
- Fig. 11 shows a block diagram of an electronic device according to an embodiment of the present disclosure.
- Fig. 1 shows a flowchart of a video processing method according to an embodiment of the present disclosure. As shown in Fig. 1, the method includes:
- Step S11: feature extraction is performed on multiple target video frames of the video to be processed through a feature extraction network to obtain feature maps of the multiple target video frames;
- Step S12: action recognition processing is performed on the feature maps of the multiple target video frames through an M-level action recognition network to obtain the action recognition features of the multiple target video frames, where M is an integer greater than or equal to 1, the action recognition processing includes spatio-temporal feature extraction processing on the feature maps of the multiple target video frames and motion feature extraction processing based on motion difference information between the feature maps of the multiple target video frames, and the action recognition features include spatio-temporal feature information and motion feature information;
- Step S13: the classification result of the video to be processed is determined according to the action recognition features of the multiple target video frames.
- In this way, the action recognition features of the target video frames can be obtained through a multi-level action recognition network, and the classification result of the video to be processed can then be obtained, without using optical flow or 3D convolution for action recognition. The amount of calculation is reduced, processing efficiency is improved, the video to be processed can be classified online in real time, and the practicability of the video processing method is improved.
- The method may be executed by a terminal device, which may be user equipment (UE), a mobile device, a user terminal, a terminal, a cellular phone, a cordless phone, a personal digital assistant (PDA), a handheld device, a computing device, a vehicle-mounted device, a wearable device, and the like.
- The method may be implemented by a processor calling computer-readable instructions stored in a memory. Alternatively, the method may be executed by a server.
- The video to be processed may be a video shot by any video acquisition device, and its video frames may include one or more target objects (for example, people, vehicles, and/or teacups); the target object may be performing a certain action (for example, picking up a water cup or walking). The present disclosure does not limit the content of the video to be processed.
- Fig. 2 shows a flowchart of a video processing method according to an embodiment of the present disclosure. As shown in Fig. 2, the method includes:
- Step S14: multiple target video frames are determined from the video to be processed.
- Step S14 may include: dividing the video to be processed into multiple video segments; and randomly determining at least one target video frame from each video segment to obtain multiple target video frames.
- The video to be processed can include multiple video frames and can be divided, for example, into T video segments (T being an integer greater than 1), with sampling performed among the video frames, for example sampling at least one target video frame from each video segment.
- The video to be processed can be divided into segments of equal length, for example 8 or 16 segments, with random sampling performed within each video segment.
- One video frame can be randomly selected from each video segment as the target video frame, yielding multiple target video frames.
- Alternatively, random sampling can be performed over all video frames of the video to be processed to obtain multiple target video frames; or multiple video frames can be selected at equal intervals as target video frames, for example the 1st, the 11th, the 21st video frames, and so on; or all video frames of the video to be processed can be determined as target video frames. The present disclosure does not limit the method of selecting target video frames.
- In this way, the target video frames can be determined from the multiple video frames of the video to be processed and then processed, which saves computing resources and improves processing efficiency.
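As an illustration of this segment-based sampling, a minimal sketch follows; the function name sample_target_frames and the use of Python's standard random module are illustrative assumptions, not part of the disclosure.

```python
import random

def sample_target_frames(num_frames: int, num_segments: int) -> list:
    """Divide a video of num_frames frames into num_segments equal segments
    and randomly sample one target frame index from each segment.
    Assumes num_frames >= num_segments."""
    segment_len = num_frames // num_segments
    indices = []
    for s in range(num_segments):
        start = s * segment_len
        # the last segment absorbs any remainder frames
        end = num_frames - 1 if s == num_segments - 1 else start + segment_len - 1
        indices.append(random.randint(start, end))
    return indices

# Example: sample T = 16 target frames from a 240-frame video.
print(sample_target_frames(240, 16))
```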
- In step S11, feature extraction may be performed on the multiple target video frames of the video to be processed to obtain the feature maps of the multiple target video frames.
- The feature extraction processing can be performed by a feature extraction network of a neural network, and the feature extraction network can be a part of the neural network (for example, a sub-network or a certain level of the neural network). The feature extraction network can include one or more convolutional layers and can perform feature extraction on the multiple target video frames to obtain their feature maps.
- Feature extraction can be performed on T target video frames (T being an integer greater than 1) through the feature extraction network, and each target video frame can be input to the feature extraction network as C channels (C being a positive integer), for example through the three channels R, G, and B.
- The size of each target video frame is H × W (H is the image height, expressed as the number of pixels in the height direction; W is the image width, expressed as the number of pixels in the width direction), so the dimension of the target video frames input to the feature extraction network is T × C × H × W.
- For example, T can be 16, C can be 3, and H and W can both be 224, so the dimension of the target video frames input to the feature extraction network is 16 × 3 × 224 × 224.
- The neural network can perform batch processing on multiple videos to be processed. For example, the feature extraction network can perform feature extraction processing on the target video frames of N videos to be processed, in which case the dimension of the target video frames input to the feature extraction network is N × T × C × H × W.
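A minimal sketch of this batching, assuming PyTorch tensors: the batch and frame dimensions can be folded together so that a 2D convolutional feature extraction network processes all N × T frames at once.

```python
import torch

N, T, C, H, W = 4, 16, 3, 224, 224
batch = torch.randn(N, T, C, H, W)   # N videos to be processed, T target frames each
flat = batch.view(N * T, C, H, W)    # fold batch and frame dimensions for a 2D CNN
# ... apply the feature extraction network to `flat`, then unfold again:
unfolded = flat.view(N, T, C, H, W)
```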
- The feature extraction network may perform feature extraction processing on target video frames with dimension T × C × H × W to obtain T groups of feature maps corresponding to the T target video frames.
- The feature maps of a target video frame can be smaller than the target video frame but have more channels, which increases the receptive field; that is, the value of C can be increased while the values of H and W are reduced.
- For example, if the dimension of the target video frames input to the feature extraction network is 16 × 3 × 224 × 224, the number of channels can be increased by a factor of 16, that is, C can be increased to 48, and the feature map size can be reduced by a factor of 4, that is, H and W can be reduced to 56. The number of channels of the feature maps corresponding to each target video frame is then 48, the size of each feature map is 56 × 56, and the dimension of the feature maps is 16 × 48 × 56 × 56.
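The dimension bookkeeping above can be verified with a toy stand-in for the feature extraction network. The two-layer module below is only an assumed sketch chosen to reproduce the quoted shapes (16 × 3 × 224 × 224 in, 16 × 48 × 56 × 56 out); the disclosure does not specify the backbone architecture.

```python
import torch
import torch.nn as nn

# Hypothetical stand-in: any CNN mapping 3 channels to 48 while reducing
# H and W by a factor of 4 yields the feature-map dimension quoted above.
feature_extraction = nn.Sequential(
    nn.Conv2d(3, 48, kernel_size=7, stride=4, padding=3),
    nn.ReLU(inplace=True),
)

frames = torch.randn(16, 3, 224, 224)    # T=16 target frames, C=3, H=W=224
feature_maps = feature_extraction(frames)
print(feature_maps.shape)                # torch.Size([16, 48, 56, 56])
```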
- In step S12, action recognition may be performed on the feature maps of the T target video frames to obtain the action recognition features of each target video frame respectively.
- The feature maps of the multiple target video frames can be subjected to action recognition processing through the M-level action recognition network of the neural network; the M-level action recognition network can be M cascaded action recognition networks, and each action recognition network can be a part of the neural network.
- Step S12 may include: processing the feature maps of the multiple target video frames through the first-level action recognition network to obtain first-level action recognition features; processing the (i-1)-th level action recognition features through the i-th level action recognition network to obtain the i-th level action recognition features, where i is an integer and 1 < i < M, and the action recognition features of each level correspond to the feature maps of the multiple target video frames respectively; and processing the (M-1)-th level action recognition features through the M-th level action recognition network to obtain the action recognition features of the multiple target video frames.
- The M-level action recognition network is cascaded, and the output information of each level of the action recognition network (that is, the action recognition features of that level) can be used as the input information of the next level of the action recognition network.
- The first-level action recognition network can process the feature maps of the target video frames and output the first-level action recognition features. The first-level action recognition features can be used as the input information of the second-level action recognition network; that is, the second-level action recognition network can process the first-level action recognition features to obtain the second-level action recognition features, which can in turn be used as the input information of the third-level action recognition network, and so on.
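The cascade amounts to simple composition: each level consumes the previous level's action recognition features. A minimal sketch follows, assuming an ActionRecognitionBlock module like the one sketched after the discussion of Fig. 3 below (both class names are illustrative assumptions).

```python
import torch.nn as nn

class ActionRecognitionNetwork(nn.Module):
    """M cascaded action recognition networks: the output of level i-1
    is the input information of level i."""
    def __init__(self, channels: int, num_levels: int):
        super().__init__()
        self.levels = nn.ModuleList(
            [ActionRecognitionBlock(channels) for _ in range(num_levels)]
        )

    def forward(self, x):
        # x: feature maps of the T target video frames, shape (T, C, H, W)
        for block in self.levels:
            x = block(x)
        return x   # the M-th level action recognition features
```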
- The i-th level action recognition network can process the (i-1)-th level action recognition features as input information. Processing the (i-1)-th level action recognition features to obtain the i-th level action recognition features includes: performing first convolution processing on the (i-1)-th level action recognition features to obtain first feature information; performing spatio-temporal feature extraction processing on the first feature information to obtain spatio-temporal feature information; performing motion feature extraction processing on the first feature information to obtain motion feature information; and obtaining the i-th level action recognition features at least according to the spatio-temporal feature information and the motion feature information.
- Fig. 3 shows a schematic diagram of an action recognition network according to an embodiment of the present disclosure. In an example, the structures of the first-level to the M-th level action recognition networks are all as shown in Fig. 3, and the i-th level action recognition network processes the (i-1)-th level action recognition features as input information.
- The i-th level action recognition network can perform the first convolution processing on the (i-1)-th level action recognition features through a 2D convolution layer with a 1 × 1 convolution kernel, reducing the dimension of the (i-1)-th level action recognition features; that is, the 1 × 1 2D convolution layer can reduce their number of channels. For example, the number of channels C can be reduced by a factor of 16 to obtain the first feature information; the present disclosure does not limit the reduction factor.
- The first-level action recognition network can process the feature maps of the target video frames as input information; it can perform the first convolution processing on the feature maps through a 2D convolution layer with a 1 × 1 convolution kernel, reducing the dimension of the feature maps to obtain the first feature information.
- The i-th level action recognition network can perform spatio-temporal feature extraction processing and motion feature extraction processing on the first feature information respectively, passing the first feature information through two branches (a spatio-temporal feature extraction branch and a motion feature extraction branch) for separate processing to obtain the spatio-temporal feature information and the motion feature information.
- The i-th level action recognition features may be obtained according to the spatio-temporal feature information, the motion feature information, and the (i-1)-th level action recognition features. In an example, the spatio-temporal feature information and the motion feature information can be summed, the sum can be convolved, and the convolution result can be summed with the (i-1)-th level action recognition features to obtain the i-th level action recognition features, as the sketch below illustrates.
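Putting the level structure of Fig. 3 together: a 1 × 1 convolution reduces channels by the factor 16 mentioned above, two parallel branches (sketched in the following sections) extract spatio-temporal and motion features, a second 1 × 1 convolution restores the channel count, and a residual connection adds the level's input back. This is a hedged sketch; the class and attribute names are assumptions.

```python
import torch.nn as nn

class ActionRecognitionBlock(nn.Module):
    """One action recognition level (cf. Fig. 3)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = channels // reduction
        self.reduce = nn.Conv2d(channels, mid, kernel_size=1)    # first convolution
        self.spatiotemporal = SpatioTemporalBranch(mid)          # sketched below
        self.motion = MotionBranch(mid)                          # sketched below
        self.restore = nn.Conv2d(mid, channels, kernel_size=1)   # fourth convolution

    def forward(self, x):
        first = self.reduce(x)                   # first feature information
        seventh = self.spatiotemporal(first) + self.motion(first)  # seventh feature info
        return x + self.restore(seventh)         # residual sum: this level's output
```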
- Fig. 4 shows a schematic diagram of spatio-temporal feature extraction processing according to an embodiment of the present disclosure. Performing spatio-temporal feature extraction processing on the first feature information to obtain spatio-temporal feature information includes: separately performing dimensional reconstruction processing on the first feature information corresponding to the feature maps of the multiple target video frames to obtain second feature information, the second feature information having a different dimension from the first feature information; performing second convolution processing on each channel of the second feature information to obtain third feature information; performing dimensional reconstruction processing on the third feature information to obtain fourth feature information, the fourth feature information having the same dimension as the first feature information; and performing spatial feature extraction processing on the fourth feature information to obtain the spatio-temporal feature information.
- In an example, the dimension of the first feature information is T × C × H × W, where the values of the parameters C, H, and W may differ from those of the feature maps of the target video frames. The first feature information can be represented by feature matrices, each of which can be represented as multiple row vectors or column vectors.
- In a case where the first feature information includes multiple row vectors or column vectors, separately performing dimensional reconstruction processing on the first feature information corresponding to the feature maps of the multiple target video frames includes: splicing the multiple row vectors or column vectors to obtain the second feature information, where the second feature information includes one row vector or one column vector.
- In an example, the first feature information can be reconstructed, transforming the dimension of the feature matrices into HW × C × T to obtain second feature information with a dimension different from the first feature information. For example, the first feature information includes T groups of feature matrices, the number of channels of each group is C (that is, each group contains C feature matrices), and the size of each feature matrix is H × W. Each feature matrix can be spliced separately: a feature matrix can be regarded as H row vectors or W column vectors, and these are spliced into a single row vector or column vector, which constitutes the second feature information; the value of HW equals the product of H and W. The present disclosure does not limit the way of the reconstruction processing.
- Further, the second convolution processing can be performed on each channel of the second feature information to obtain the third feature information. In an example, the second convolution processing can be performed on each channel of the second feature information through a 1D depthwise separable convolution layer with a 3 × 1 convolution kernel. Each of the T groups of second feature information includes C channels, and the second convolution processing can be performed on the C channels of each group to obtain T groups of third feature information. The T groups of third feature information indicate the temporal features of the feature maps of the multiple target video frames; that is, the third feature information carries the time information of each target video frame.
- The spatio-temporal information contained in each channel of the second feature information may differ, so the second convolution processing is performed on each channel separately to obtain the third feature information of each channel. Performing the second convolution processing on the reconstructed second feature information channel by channel through a 1D convolution layer with a 3 × 1 kernel involves a small amount of calculation: 1D convolution of a row or column vector requires less computation than 2D or 3D convolution of a feature map, which improves processing efficiency.
- In an example, the dimension of the third feature information is HW × C × T, that is, each piece of third feature information is a row vector or a column vector. The third feature information can be reconstructed: each piece of third feature information in the form of a row or column vector can be reconstructed into a matrix, obtaining fourth feature information whose dimension is the same as that of the first feature information. For example, each piece of third feature information is a row or column vector of length HW, which can be divided into W column vectors of length H or H row vectors of length W; combining these vectors yields a feature matrix (that is, the fourth feature information) with dimension T × C × H × W. The present disclosure does not limit the parameters of the fourth feature information.
- In an example, the fourth feature information can be convolved through a 2D convolution layer with a 3 × 3 convolution kernel to extract its spatial features and obtain the spatio-temporal feature information; that is, the feature information representing the position of the target object in the fourth feature information is extracted and fused with the time information to form the spatio-temporal feature information. The spatio-temporal feature information may be a feature matrix with dimension T × C × H × W, where its H and W may differ from those of the fourth feature information.
- In this way, the spatio-temporal information of each channel can be obtained, keeping the spatio-temporal information complete. The dimension of the first feature information can be changed through reconstruction processing, so convolution processing can be performed in a less computationally expensive manner, for example performing the second convolution processing by 1D convolution, which simplifies the calculation and improves processing efficiency.
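A hedged sketch of this branch in PyTorch follows; permute/reshape stands in for the splicing-based dimensional reconstruction, and the padding values are assumptions chosen to preserve the quoted dimensions.

```python
import torch.nn as nn

class SpatioTemporalBranch(nn.Module):
    """Spatio-temporal feature extraction: dimensional reconstruction,
    per-channel 1D temporal convolution (3x1 kernel), reconstruction back,
    then a 3x3 2D spatial convolution."""
    def __init__(self, channels: int):
        super().__init__()
        # 1D depthwise-separable convolution over time, one filter per channel
        self.temporal = nn.Conv1d(channels, channels, kernel_size=3,
                                  padding=1, groups=channels)
        self.spatial = nn.Conv2d(channels, channels, kernel_size=3, padding=1)

    def forward(self, x):
        T, C, H, W = x.shape                               # first feature information
        # reconstruction: splice each H x W matrix into one vector -> (HW, C, T)
        second = x.permute(2, 3, 1, 0).reshape(H * W, C, T)
        third = self.temporal(second)                      # temporal features per channel
        # reconstruct back to (T, C, H, W): fourth feature information
        fourth = third.reshape(H, W, C, T).permute(3, 2, 0, 1)
        return self.spatial(fourth)                        # spatio-temporal feature info
```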
- Fig. 5 shows a schematic diagram of motion feature extraction processing according to an embodiment of the present disclosure. Performing motion feature extraction processing on the first feature information to obtain motion feature information may include: performing dimensionality reduction processing on the channels of the first feature information to obtain fifth feature information, where the fifth feature information corresponds to each target video frame of the video to be processed respectively; performing third convolution processing on the fifth feature information corresponding to the (k+1)-th target video frame and subtracting the fifth feature information corresponding to the k-th target video frame to obtain sixth feature information corresponding to the k-th target video frame, where k is an integer and 1 ≤ k < T, T is the number of target video frames and is an integer greater than 1, and the sixth feature information represents the motion difference information between the fifth feature information corresponding to the (k+1)-th target video frame and the fifth feature information corresponding to the k-th target video frame; and performing feature extraction processing on the sixth feature information corresponding to each target video frame to obtain the motion feature information.
- In an example, the channels of the first feature information can be reduced in dimensionality to obtain the fifth feature information; for example, the channels can be processed through a 2D convolution layer with a 1 × 1 convolution kernel, reducing the number of channels. The number of channels C of the first feature information, whose dimension is T × C × H × W, can be reduced to C/16, so the dimension of the fifth feature information is T × C/16 × H × W; that is, it includes T groups of fifth feature information corresponding to the T target video frames respectively, and the dimension of each group is C/16 × H × W.
- In an example, the third convolution processing can be performed on each channel of the fifth feature information corresponding to the (k+1)-th target video frame (fifth feature information k+1 for short), for example through a 2D depthwise separable convolution layer with a 3 × 3 convolution kernel, and the fifth feature information k can be subtracted from the result of the third convolution processing to obtain the sixth feature information corresponding to the k-th target video frame. The dimension of the sixth feature information is the same as that of the fifth feature information, namely C/16 × H × W.
- The third convolution processing can be performed on each piece of fifth feature information separately, with the preceding fifth feature information subtracted to obtain the sixth feature information. The sixth feature information can represent the motion difference information between the fifth feature information corresponding to two adjacent target video frames; that is, it can indicate the motion difference of the target object between two target video frames, so as to determine the motion of the target object.
- The subtraction process yields T-1 pieces of sixth feature information. For the T-th target video frame, the fifth feature information can be subjected to the third convolution processing and an all-zero matrix subtracted, or an all-zero matrix can be subtracted directly, or an all-zero matrix can be used as its sixth feature information; in this way, sixth feature information corresponding to the T-th target video frame is obtained, for a total of T pieces of sixth feature information corresponding to the T target video frames. Further, the T pieces of sixth feature information can be combined to obtain sixth feature information with dimension T × C/16 × H × W.
- In an example, the sixth feature information with dimension T × C/16 × H × W can be subjected to feature extraction processing; for example, a 2D convolution layer with a 1 × 1 convolution kernel can be used to extract features and raise the dimensionality of the sixth feature information, upgrading the number of channels from C/16 back to C to obtain the motion feature information. The dimension of the motion feature information is consistent with that of the spatio-temporal feature information, namely T × C × H × W.
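A hedged sketch of the motion branch follows, with the channel reduction factor 16, the 3 × 3 depthwise third convolution, the adjacent-frame subtraction, the all-zero features for the T-th frame, and the 1 × 1 channel-restoring convolution taken from the description above; the layer and class names are illustrative.

```python
import torch
import torch.nn as nn

class MotionBranch(nn.Module):
    """Motion feature extraction from motion difference information."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        mid = max(channels // reduction, 1)
        self.reduce = nn.Conv2d(channels, mid, kernel_size=1)   # -> fifth feature info
        self.third_conv = nn.Conv2d(mid, mid, kernel_size=3,
                                    padding=1, groups=mid)      # depthwise 3x3
        self.restore = nn.Conv2d(mid, channels, kernel_size=1)  # back to C channels

    def forward(self, x):
        # x: first feature information, shape (T, C, H, W)
        fifth = self.reduce(x)                           # (T, C/16, H, W)
        # sixth_k = third_conv(fifth_{k+1}) - fifth_k, for k = 1..T-1
        sixth = self.third_conv(fifth[1:]) - fifth[:-1]
        # the T-th frame has no successor: use all-zero difference features
        sixth = torch.cat([sixth, torch.zeros_like(fifth[:1])], dim=0)
        return self.restore(sixth)                       # motion feature information
```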
- The i-th level action recognition features may be obtained according to the spatio-temporal feature information, the motion feature information, and the (i-1)-th level action recognition features. This step may include: summing the spatio-temporal feature information and the motion feature information to obtain seventh feature information; performing fourth convolution processing on the seventh feature information; and summing the result with the (i-1)-th level action recognition features to obtain the i-th level action recognition features.
- In an example, the dimensions of the spatio-temporal feature information and the motion feature information are the same, both T × C × H × W. The corresponding pieces of feature information (for example, each feature map or feature matrix) can be summed respectively to obtain the seventh feature information, whose dimension is T × C × H × W.
- The seventh feature information can be subjected to the fourth convolution processing, for example through a 2D convolution layer with a 1 × 1 convolution kernel, which raises its dimension to match that of the (i-1)-th level action recognition features; for example, the number of channels can be increased by a factor of 16. Further, the result of the fourth convolution processing can be summed with the (i-1)-th level action recognition features to obtain the i-th level action recognition features.
- The first-level action recognition network can sum the feature maps of the target video frames with the result of the fourth convolution processing to obtain the first-level action recognition features, which can be used as the input information of the second-level action recognition network.
- In this way, the motion feature information can be obtained by performing the third convolution processing on the fifth feature information and subtracting the preceding fifth feature information, which simplifies calculation and improves processing efficiency.
- The action recognition features can be obtained level by level in the above manner, and the (M-1)-th level action recognition features can be processed through the M-th level action recognition network in the above manner to obtain the action recognition features of the multiple target video frames; that is, the M-th level action recognition features serve as the action recognition features of the target video frames.
- In step S13, the classification result of the video to be processed may be obtained according to the action recognition features of the multiple target video frames. Step S13 may include: performing fully connected processing on the action recognition features of each target video frame to obtain classification information of each target video frame; and averaging the classification information of the target video frames to obtain the classification result of the video to be processed.
- the action recognition feature of each target video frame can be processed through the fully connected layer of the neural network to obtain the classification information of each target video frame.
- the classification information of each target video frame can be a feature vector; that is, the fully connected layer can output T feature vectors. Further, the T feature vectors may be averaged to obtain the classification result of the video to be processed.
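- a minimal sketch of this classification step, assuming the per-frame action recognition features have already been pooled to vectors (the sizes below are illustrative, not mandated):

```python
import torch
import torch.nn as nn

T, feat_dim, num_classes = 8, 2048, 400      # hypothetical sizes

fc = nn.Linear(feat_dim, num_classes)        # the fully connected layer
frame_features = torch.randn(T, feat_dim)    # per-frame action recognition features
frame_info = fc(frame_features)              # classification info: T feature vectors
video_result = frame_info.mean(dim=0)        # average over T -> classification result
probs = video_result.softmax(dim=0)          # probabilities over the categories
print(probs.shape)                           # torch.Size([400])
```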
- the classification result may also be a feature vector, which may represent the probabilities of the categories to which the video to be processed belongs.
- the classification result may be a 400-dimensional vector including 400 parameters that respectively represent the probabilities that the video to be processed belongs to each of 400 categories.
- the category may be the category of the target object's actions in the video to be processed, for example, actions such as walking, raising a glass, eating, etc.
- if the value of the second parameter is the largest, the probability that the video to be processed belongs to the second category is the largest, and it can be determined that the video to be processed belongs to the second category; for example, it can be determined that the target object in the video to be processed is walking.
- the present disclosure does not limit the types and dimensions of the classification results.
- the target video frame can be determined from multiple video frames of the video to be processed, and then the target video frame can be processed, which can save computing resources and improve processing efficiency.
- each level of the action recognition network can obtain the spatiotemporal information of each channel, so that the spatiotemporal information is complete, and the dimension of the first feature information can be changed through reconstruction processing.
- convolution processing can thus be performed in a less computationally intensive manner, and the fifth feature information of each frame is subtracted from the result of the third convolution processing on the fifth feature information of the following frame to obtain the motion feature information, which can simplify the calculation.
- the action recognition results of each level of action recognition network can be obtained, and then the classification results of the video to be processed can be obtained.
- obtaining spatiotemporal feature information and motion feature information in this way reduces the number of input parameters and the amount of calculation, improves processing efficiency, enables online real-time classification of the video to be processed, and improves the practicability of the video processing method.
- the video processing method may be implemented by a neural network, and the neural network includes at least the feature extraction network and the M-level action recognition network.
- the neural network may further include the fully connected layer to perform fully connected processing on the action recognition feature.
- Fig. 6 shows a flowchart of a video processing method according to an embodiment of the present disclosure. As shown in Fig. 6, the method further includes:
- in step S15, the neural network is trained through the sample video and the category label of the sample video.
- step S15 may include: determining a plurality of sample video frames from the sample video; processing the sample video frames through the neural network to determine the classification result of the sample video; determining the network loss of the neural network according to the classification result and the category label of the sample video; and adjusting the network parameters of the neural network according to the network loss.
- the sample video may include multiple video frames, and the sample video frames may be determined from them; for example, random sampling may be performed, or the sample video may be divided into multiple video segments and a frame sampled from each segment to obtain the sample video frames.
- the sample video frames can be input into the neural network, where the feature extraction network performs feature extraction processing and the M-level action recognition network performs action recognition processing; after processing by the fully connected layer, the classification information of each sample video frame can be obtained, and the classification information of each sample video frame is averaged to obtain the classification result of the sample video.
- the classification result may be a multi-dimensional vector (which may have errors) representing the classification of the sample video.
- the sample video may have a category label, which may represent the actual category of the sample video (no error).
- the network loss of the neural network can be determined according to the classification result and the category label; for example, the cosine distance or the Euclidean distance between the classification result and the category label can be determined as the network loss. The present disclosure does not limit the method of determining the network loss.
- the network parameters of the neural network can be adjusted according to the network loss.
- the gradient of the network loss with respect to the parameters of the neural network can be determined, and each network parameter can be adjusted by the gradient descent method in the direction that minimizes the network loss.
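- a rough sketch of one such training cycle, assuming a one-hot category label and the Euclidean-distance loss mentioned above (all names are hypothetical, and the real loss and optimizer may differ):

```python
import torch
import torch.nn as nn

def training_step(network: nn.Module, frames: torch.Tensor,
                  label: torch.Tensor, optimizer: torch.optim.Optimizer) -> float:
    """One illustrative training cycle with a Euclidean-distance loss."""
    optimizer.zero_grad()
    result = network(frames)               # classification result of the sample video
    loss = torch.dist(result, label, p=2)  # Euclidean distance as the network loss
    loss.backward()                        # gradient of the loss w.r.t. parameters
    optimizer.step()                       # gradient descent update
    return loss.item()

# e.g. optimizer = torch.optim.SGD(network.parameters(), lr=0.01)
```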
- the network parameters can be adjusted multiple times (that is, multiple training cycles are performed through multiple sample videos) in the above manner, and the trained neural network can be obtained when the training conditions are met.
- the training condition may include the number of training times (ie, the number of training cycles), for example, when the number of training times reaches a preset number, the training condition is satisfied.
- the training condition may include the magnitude or the convergence/divergence of the network loss; for example, when the network loss is less than or equal to a loss threshold or converges within a preset interval, the training condition is satisfied.
- the present disclosure does not limit the training conditions.
- Fig. 7 shows an application schematic diagram of a video processing method according to an embodiment of the present disclosure.
- the video to be processed may be any video that includes one or more target objects, and T target video frames can be determined from multiple video frames of the video to be processed through sampling or the like.
- the video to be processed can be divided into T (for example, T is 8 or 16) video segments, and a video frame is randomly sampled as the target video frame in each video segment.
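- this segment-wise sampling can be sketched as follows (an illustrative helper, not part of the disclosure; it assumes the frame count is at least T):

```python
import random

def sample_target_frames(num_frames: int, T: int = 8) -> list:
    """Split a video of num_frames frames into T equal segments and
    randomly pick one frame index from each segment."""
    seg_len = num_frames // T
    return [random.randrange(i * seg_len, (i + 1) * seg_len) for i in range(T)]

print(sample_target_frames(240, T=8))  # one random index per 30-frame segment
```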
- the feature extraction network of the neural network can be used to perform feature extraction on multiple target video frames.
- the feature extraction network can include one or more convolutional layers, which can perform convolution processing on the multiple target video frames to obtain their feature maps; for example, for T target video frames, each target video frame can be split into C channels (for example, the three channels R, G, and B) and input into the feature extraction network.
- the size of each target video frame is H×W (for example, 224×224); after the feature extraction processing, the values of C, H, and W can all change.
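- a shape-only sketch of this, using one hypothetical convolutional layer (the kernel size and channel counts are assumptions, not taken from the disclosure):

```python
import torch
import torch.nn as nn

frames = torch.randn(8, 3, 224, 224)   # T=8 RGB frames as a TxCxHxW tensor

# One stand-in convolutional layer of a feature extraction network.
conv = nn.Conv2d(in_channels=3, out_channels=64, kernel_size=7,
                 stride=2, padding=3)
feature_maps = conv(frames)
print(feature_maps.shape)  # torch.Size([8, 64, 112, 112]): C, H and W changed
```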
- the feature map can be processed by an M-level action recognition network.
- the M-level action recognition network can be M cascaded action recognition networks.
- the network structure of each action recognition network is the same, and all of them are part of the neural network.
- the M-level action recognition network can be composed of multiple groups; between the groups there can be neural network layers such as convolutional layers or activation layers, or there can be no layers between the groups, in which case the groups of action recognition networks are directly cascaded. The total number of action recognition networks across all groups is M.
- the first-level action recognition network can process the T groups of feature maps to obtain the first-level action recognition features, and the first-level action recognition features can be used as the input information of the second-level action recognition network;
- the second-level action recognition network can process the first-level action recognition features to obtain the second-level action recognition features, which can in turn be used as the input information of the third-level action recognition network, and so on.
- the i-th level action recognition network can take the action recognition features of the (i-1)-th level as input information and perform the first convolution processing on them through a 2D convolution layer with a 1×1 convolution kernel, which can reduce the dimension of the (i-1)-th level action recognition features to obtain the first feature information.
- the i-th level action recognition network can perform spatiotemporal feature extraction processing and motion feature extraction processing on the first feature information; for example, the processing can be divided into a spatiotemporal feature extraction branch and a motion feature extraction branch that operate separately.
- the spatiotemporal feature extraction branch may first reconstruct the first feature information.
- the feature matrix of the first feature information may be reconstructed into a row vector or a column vector to obtain the second feature information.
- the second convolution processing is performed on each channel of the second feature information through a 1D convolution layer with a 3×1 convolution kernel, and the third feature information is obtained with a small amount of calculation.
- the third feature information can be reconstructed to obtain the fourth feature information in matrix form, and the fourth feature information can be convolved through a 2D convolution layer with a 3×3 convolution kernel to obtain the spatiotemporal feature information.
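- a rough PyTorch sketch of this spatiotemporal branch; the shapes, the grouped 1D temporal convolution, and all names are assumptions inferred from the example values in the text:

```python
import torch
import torch.nn as nn

T, C, H, W = 8, 16, 56, 56
x = torch.randn(T, C, H, W)                    # first feature information

# Dimensional reconstruction: flatten each HxW feature matrix to a vector.
second = x.reshape(T, C, H * W)                # second feature information

# Second convolution: 1D, per-channel (grouped) convolution along time.
temporal_conv = nn.Conv1d(C, C, kernel_size=3, padding=1, groups=C)
third = temporal_conv(second.permute(2, 1, 0)) # (H*W, C, T): convolve over T
third = third.permute(2, 1, 0)                 # back to (T, C, H*W)

# Reconstruct to matrix form, then extract spatial features with a 3x3 conv.
fourth = third.reshape(T, C, H, W)             # fourth feature information
spatial_conv = nn.Conv2d(C, C, kernel_size=3, padding=1)
spatiotemporal = spatial_conv(fourth)          # spatiotemporal feature information
```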
- the motion feature extraction branch may first perform dimensionality reduction processing on the channels of the first feature information through a 2D convolution layer with a 1×1 convolution kernel.
- the number of channels C of the first feature information may be reduced to C/16 to obtain the fifth feature information corresponding to each target video frame.
- the fifth feature information corresponding to the (k+1)-th target video frame can be subjected to the third convolution processing through a 2D convolution layer with a 3×3 convolution kernel, and the fifth feature information corresponding to the k-th target video frame can be subtracted from the result to obtain the sixth feature information corresponding to the k-th target video frame.
- in this way, the sixth feature information corresponding to each of the first T-1 target video frames can be obtained; for the T-th target video frame, a matrix with all-zero parameters can be subjected to the third convolution processing and the fifth feature information corresponding to the T-th target video frame subtracted from the result, to obtain the sixth feature information corresponding to the T-th target video frame. That is, T pieces of sixth feature information can be obtained.
- the T pieces of sixth feature information can be combined, and the sixth feature information can be upscaled through a 2D convolution layer with a 1×1 convolution kernel to obtain the motion feature information.
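- a rough sketch of this motion branch, again with assumed shapes and names; the all-zero matrix stands in for a (T+1)-th frame, following the reading of the step above:

```python
import torch
import torch.nn as nn

T, C, H, W = 8, 64, 56, 56
x = torch.randn(T, C, H, W)                    # first feature information

reduce = nn.Conv2d(C, C // 16, kernel_size=1)  # dimensionality reduction: C -> C/16
conv3 = nn.Conv2d(C // 16, C // 16, kernel_size=3, padding=1)  # third convolution
expand = nn.Conv2d(C // 16, C, kernel_size=1)  # channel upgrade: C/16 -> C

fifth = reduce(x)                              # fifth feature information, per frame
# sixth_k = conv3(fifth_{k+1}) - fifth_k for k = 1 .. T-1
sixth = conv3(fifth[1:]) - fifth[:-1]
# T-th frame: convolve an all-zero matrix and subtract fifth_T.
last = conv3(torch.zeros_like(fifth[-1:])) - fifth[-1:]
sixth = torch.cat([sixth, last], dim=0)        # T pieces of sixth feature info
motion = expand(sixth)                         # motion feature information
print(motion.shape)                            # torch.Size([8, 64, 56, 56])
```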
- the spatiotemporal feature information and the motion feature information can be summed to obtain the seventh feature information, and the seventh feature information can be subjected to the fourth convolution processing through a 2D convolution layer with a 1×1 convolution kernel.
- the fourth convolution processing can increase the dimension of the seventh feature information to the same dimension as the (i-1)-th level action recognition features; the result is then summed with the (i-1)-th level action recognition features to obtain the i-th level action recognition features.
- the action recognition features output by the M-th level action recognition network can be determined as the action recognition features of the target video frames, and the action recognition features of the target video frames can be input into the fully connected layer of the neural network for processing to obtain the classification information corresponding to each target video frame, for example, classification information 1, classification information 2, and so on.
- the classification information may be a vector, and the classification information corresponding to the T target video frames can be averaged to obtain the classification result of the video to be processed.
- the classification result is also a vector, which can represent the probabilities of the categories to which the video to be processed belongs.
- the classification result may be a 400-dimensional vector including 400 parameters that respectively represent the probabilities that the video to be processed belongs to each of 400 categories.
- the category may be the category of the target object's actions in the video to be processed, for example, actions such as walking, raising a glass, eating, etc.
- if the value of the second parameter is the largest, the probability that the video to be processed belongs to the second category is the largest, and it can be determined that the video to be processed belongs to the second category.
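- reading off the category from the result vector can be sketched as follows (the 400-way vector is a stand-in, and the index is hypothetical):

```python
import torch

result = torch.softmax(torch.randn(400), dim=0)  # stand-in classification result
category_index = int(result.argmax())            # index with the largest probability
print(category_index, float(result[category_index]))
```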
- the video processing method can distinguish similar actions, such as closing versus opening or sunset versus sunrise, through the spatiotemporal feature information and the motion feature information, and its amount of calculation is small and its processing efficiency is high.
- it can be used for real-time classification of videos; for example, it can be used for prison monitoring to determine in real time whether a criminal suspect has escaped, for subway monitoring to determine in real time the operating status of subway vehicles or the status of passenger flow, and in the security field to determine in real time whether someone is performing dangerous actions in a monitored area.
- the present disclosure does not limit the application field of the video processing method.
- Fig. 8 shows a block diagram of a video processing device according to an embodiment of the present disclosure. As shown in Fig. 8, the video processing device includes:
- the feature extraction module 11 is configured to perform feature extraction on multiple target video frames of the video to be processed through a feature extraction network to obtain feature maps of the multiple target video frames;
- the action recognition module 12 is configured to perform action recognition processing on the feature maps of the multiple target video frames through an M-level action recognition network to obtain the action recognition features of the multiple target video frames, where M is greater than or equal to 1.
- the action recognition processing includes spatiotemporal feature extraction processing based on the feature maps of the multiple target video frames, and motion feature extraction processing based on motion difference information between the feature maps of the multiple target video frames,
- the action recognition feature includes spatiotemporal feature information and motion feature information;
- the classification module 13 is configured to determine the classification result of the to-be-processed video according to the action recognition features of the multiple target video frames.
- the action recognition module is further configured to: process the feature maps of the multiple target video frames through the first-level action recognition network to obtain the first-level action recognition features;
- the i-th level action recognition network processes the action recognition features of the (i-1)-th level to obtain the action recognition features of the i-th level, where i is an integer and 1<i<M, and where the action recognition features of each level respectively correspond to the feature maps of the multiple target video frames; the (M-1)-th level action recognition features are processed through the M-th level action recognition network to obtain the action recognition features of the multiple target video frames.
- the action recognition module is further configured to: perform first convolution processing on the (i-1)-th level action recognition features to obtain first feature information, where the first feature information respectively corresponds to the feature maps of the multiple target video frames; perform spatiotemporal feature extraction processing on the first feature information to obtain spatiotemporal feature information; perform motion feature extraction processing on the first feature information to obtain motion feature information; and obtain the i-th level action recognition features at least according to the spatiotemporal feature information and the motion feature information.
- the action recognition module is further configured to: obtain the i-th level action recognition features according to the spatiotemporal feature information, the motion feature information, and the (i-1)-th level action recognition features.
- the action recognition module is further configured to: perform dimensional reconstruction processing on the first feature information corresponding to the feature maps of the multiple target video frames to obtain second feature information, where the second feature information and the first feature information have different dimensions; perform second convolution processing on each channel of the second feature information to obtain third feature information, where the third feature information represents the temporal features of the feature maps of the multiple target video frames; perform dimensional reconstruction processing on the third feature information to obtain fourth feature information, where the fourth feature information has the same dimension as the first feature information; and perform spatial feature extraction processing on the fourth feature information to obtain the spatiotemporal feature information.
- the first feature information includes a plurality of row vectors or column vectors, and the action recognition module is further configured to perform splicing processing on the plurality of row vectors or column vectors of the first feature information to obtain the second feature information, where the second feature information includes one row vector or column vector.
- the action recognition module is further configured to: perform dimensionality reduction processing on the channels of the first feature information to obtain fifth feature information, where the fifth feature information respectively corresponds to each target video frame in the to-be-processed video; perform third convolution processing on the fifth feature information corresponding to the (k+1)-th target video frame and subtract the fifth feature information corresponding to the k-th target video frame to obtain sixth feature information corresponding to the k-th target video frame, where k is an integer and 1≤k<T, T is the number of target video frames and is an integer greater than 1, and the sixth feature information represents the motion difference information between the fifth feature information corresponding to the (k+1)-th target video frame and the fifth feature information corresponding to the k-th target video frame; and perform feature extraction processing on the sixth feature information corresponding to each target video frame to obtain the motion feature information.
- the action recognition module is further configured to: perform summation processing on the spatiotemporal feature information and the motion feature information to obtain seventh feature information; and perform the fourth convolution processing on the seventh feature information and sum the result with the (i-1)-th level action recognition features to obtain the i-th level action recognition features.
- the classification module is further configured to: perform full connection processing on the action recognition features of each target video frame to obtain the classification information of each target video frame; and perform averaging processing on the classification information of each target video frame to obtain the classification result of the to-be-processed video.
- Fig. 9 shows a block diagram of a video processing device according to an embodiment of the present disclosure. As shown in Fig. 9, the video processing device further includes:
- the determining module 14 is used to determine multiple target video frames from the video to be processed.
- the determining module is further configured to: divide the to-be-processed video into multiple video segments; and randomly determine at least one target video frame from each video segment to obtain multiple target video frames.
- the video processing method is implemented by a neural network, and the neural network includes at least the feature extraction network and the M-level action recognition network;
- the device further includes: a training module 15, configured to train the neural network through the sample video and the category label of the sample video.
- the training module is further configured to: determine multiple sample video frames from the sample video; process the sample video frames through the neural network to determine the sample The classification result of the video; the network loss of the neural network is determined according to the classification result and the category label of the sample video; the network parameter of the neural network is adjusted according to the network loss.
- the present disclosure also provides video processing devices, electronic equipment, computer-readable storage media, and programs, all of which can be used to implement any video processing method provided in the present disclosure.
- the writing order of the steps does not imply a strict execution order and does not constitute any limitation on the implementation process; the specific execution order of each step should be determined by its function and possible internal logic.
- the functions or modules contained in the device provided in the embodiments of the present disclosure can be used to execute the methods described in the above method embodiments; for brevity, the description of the above method embodiments is not repeated here.
- the embodiments of the present disclosure also provide a computer-readable storage medium on which computer program instructions are stored, and the computer program instructions implement the above-mentioned method when executed by a processor.
- the computer-readable storage medium may be a non-volatile computer-readable storage medium.
- An embodiment of the present disclosure also provides an electronic device, including: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to execute the above method.
- the electronic device can be provided as a terminal, server or other form of device.
- Fig. 10 is a block diagram showing an electronic device 800 according to an exemplary embodiment.
- the electronic device 800 may be a mobile phone, a computer, a digital broadcasting terminal, a messaging device, a game console, a tablet device, a medical device, a fitness device, a personal digital assistant, and other terminals.
- the electronic device 800 may include one or more of the following components: a processing component 802, a memory 804, a power component 806, a multimedia component 808, an audio component 810, an input/output (I/O) interface 812, a sensor component 814, and a communication component 816.
- the processing component 802 generally controls the overall operations of the electronic device 800, such as operations associated with display, telephone calls, data communications, camera operations, and recording operations.
- the processing component 802 may include one or more processors 820 to execute instructions to complete all or part of the steps of the foregoing method.
- the processing component 802 may include one or more modules to facilitate the interaction between the processing component 802 and other components.
- the processing component 802 may include a multimedia module to facilitate the interaction between the multimedia component 808 and the processing component 802.
- the memory 804 is configured to store various types of data to support operations in the electronic device 800. Examples of these data include instructions for any application or method operating on the electronic device 800, contact data, phone book data, messages, pictures, videos, etc.
- the memory 804 can be implemented by any type of volatile or non-volatile storage device or a combination thereof, such as static random access memory (SRAM), electrically erasable programmable read-only memory (EEPROM), erasable programmable read-only memory (EPROM), programmable read-only memory (PROM), read-only memory (ROM), magnetic memory, flash memory, magnetic disk, or optical disk.
- the power supply component 806 provides power for various components of the electronic device 800.
- the power supply component 806 may include a power management system, one or more power supplies, and other components associated with the generation, management, and distribution of power for the electronic device 800.
- the multimedia component 808 includes a screen that provides an output interface between the electronic device 800 and the user.
- the screen may include a liquid crystal display (LCD) and a touch panel (TP). If the screen includes a touch panel, the screen may be implemented as a touch screen to receive input signals from the user.
- the touch panel includes one or more touch sensors to sense touch, sliding, and gestures on the touch panel. The touch sensor may not only sense the boundary of a touch or slide action, but also detect the duration and pressure related to the touch or slide operation.
- the multimedia component 808 includes a front camera and/or a rear camera. When the electronic device 800 is in an operation mode, such as a shooting mode or a video mode, the front camera and/or the rear camera can receive external multimedia data. Each front camera and rear camera can be a fixed optical lens system or have focal length and optical zoom capabilities.
- the audio component 810 is configured to output and/or input audio signals.
- the audio component 810 includes a microphone (MIC).
- the microphone is configured to receive external audio signals.
- the received audio signal may be further stored in the memory 804 or transmitted via the communication component 816.
- the audio component 810 further includes a speaker for outputting audio signals.
- the I/O interface 812 provides an interface between the processing component 802 and a peripheral interface module.
- the peripheral interface module may be a keyboard, a click wheel, a button, and the like. These buttons may include but are not limited to: home button, volume button, start button, and lock button.
- the sensor component 814 includes one or more sensors for providing the electronic device 800 with various aspects of state evaluation.
- the sensor component 814 can detect the on/off status of the electronic device 800 and the relative positioning of components, for example, the display and the keypad of the electronic device 800; the sensor component 814 can also detect a position change of the electronic device 800 or of one of its components, the presence or absence of contact between the user and the electronic device 800, the orientation or acceleration/deceleration of the electronic device 800, and temperature changes of the electronic device 800.
- the sensor component 814 may include a proximity sensor configured to detect the presence of nearby objects when there is no physical contact.
- the sensor component 814 may also include a light sensor, such as a CMOS or CCD image sensor, for use in imaging applications.
- the sensor component 814 may also include an acceleration sensor, a gyroscope sensor, a magnetic sensor, a pressure sensor or a temperature sensor.
- the communication component 816 is configured to facilitate wired or wireless communication between the electronic device 800 and other devices.
- the electronic device 800 can access a wireless network based on a communication standard, such as WiFi, 2G, or 3G, or a combination thereof.
- the communication component 816 receives a broadcast signal or broadcast related information from an external broadcast management system via a broadcast channel.
- the communication component 816 further includes a near field communication (NFC) module to facilitate short-range communication.
- the NFC module can be implemented based on radio frequency identification (RFID) technology, infrared data association (IrDA) technology, ultra-wideband (UWB) technology, Bluetooth (BT) technology and other technologies.
- the electronic device 800 can be implemented by one or more application specific integrated circuits (ASICs), digital signal processors (DSPs), digital signal processing devices (DSPDs), programmable logic devices (PLDs), field programmable gate arrays (FPGAs), controllers, microcontrollers, microprocessors, or other electronic components, to perform the above methods.
- a non-volatile computer-readable storage medium such as a memory 804 including computer program instructions, which can be executed by the processor 820 of the electronic device 800 to complete the foregoing method.
- the embodiments of the present disclosure also provide a computer program product, including computer readable code, and when the computer readable code runs on the device, the processor in the device executes instructions for implementing the method provided in any of the above embodiments.
- the computer program product can be specifically implemented by hardware, software, or a combination thereof. In an optional embodiment, the computer program product is embodied as a computer storage medium; in another optional embodiment, it is embodied as a software product, such as a software development kit (SDK).
- Fig. 11 is a block diagram showing an electronic device 1900 according to an exemplary embodiment.
- the electronic device 1900 may be provided as a server.
- the electronic device 1900 includes a processing component 1922, which further includes one or more processors, and a memory resource represented by a memory 1932, for storing instructions that can be executed by the processing component 1922, such as application programs.
- the application program stored in the memory 1932 may include one or more modules each corresponding to a set of instructions.
- the processing component 1922 is configured to execute instructions to perform the above-described methods.
- the electronic device 1900 may also include a power supply component 1926 configured to perform power management of the electronic device 1900, a wired or wireless network interface 1950 configured to connect the electronic device 1900 to a network, and an input/output (I/O) interface 1958.
- the electronic device 1900 can operate based on an operating system stored in the memory 1932, such as Windows ServerTM, Mac OS XTM, UnixTM, LinuxTM, FreeBSDTM or the like.
- a non-volatile computer-readable storage medium such as the memory 1932 including computer program instructions, which can be executed by the processing component 1922 of the electronic device 1900 to complete the foregoing method.
- the present disclosure may be a system, method, and/or computer program product.
- the computer program product may include a computer-readable storage medium loaded with computer-readable program instructions for enabling a processor to implement various aspects of the present disclosure.
- the computer-readable storage medium may be a tangible device that can hold and store instructions used by the instruction execution device.
- the computer-readable storage medium may be, for example, but not limited to, an electrical storage device, a magnetic storage device, an optical storage device, an electromagnetic storage device, a semiconductor storage device, or any suitable combination of the foregoing.
- computer-readable storage media include: portable computer disks, hard disks, random access memory (RAM), read-only memory (ROM), erasable programmable read-only memory (EPROM or flash memory), static random access memory (SRAM), portable compact disk read-only memory (CD-ROM), digital versatile disks (DVD), memory sticks, floppy disks, and mechanical encoding devices, such as punched cards or raised structures in a groove on which instructions are stored, as well as any suitable combination of the foregoing.
- the computer-readable storage medium used here is not to be interpreted as a transient signal itself, such as a radio wave or other freely propagating electromagnetic wave, an electromagnetic wave propagating through a waveguide or other transmission medium (for example, a light pulse through a fiber-optic cable), or an electrical signal transmitted through a wire.
- the computer-readable program instructions described herein can be downloaded from a computer-readable storage medium to various computing/processing devices, or downloaded to an external computer or external storage device via a network, such as the Internet, a local area network, a wide area network, and/or a wireless network.
- the network may include copper transmission cables, optical fiber transmission, wireless transmission, routers, firewalls, switches, gateway computers, and/or edge servers.
- the network adapter card or network interface in each computing/processing device receives computer-readable program instructions from the network and forwards the computer-readable program instructions for storage in the computer-readable storage medium in each computing/processing device.
- the computer program instructions used to perform the operations of the present disclosure may be assembly instructions, instruction set architecture (ISA) instructions, machine instructions, machine-related instructions, microcode, firmware instructions, status setting data, or source code or object code written in any combination of one or more programming languages, including object-oriented programming languages such as Smalltalk and C++, and conventional procedural programming languages such as the "C" language or similar programming languages.
- computer-readable program instructions can be executed entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server.
- the remote computer can be connected to the user's computer through any kind of network, including a local area network (LAN) or a wide area network (WAN), or it can be connected to an external computer (for example, through the Internet using an Internet service provider).
- in some embodiments, an electronic circuit, such as a programmable logic circuit, a field programmable gate array (FPGA), or a programmable logic array (PLA), can be personalized by using the status information of the computer-readable program instructions, and the electronic circuit can execute the computer-readable program instructions to realize various aspects of the present disclosure.
- these computer-readable program instructions can be provided to the processor of a general-purpose computer, a special-purpose computer, or another programmable data processing device to produce a machine, such that when the instructions are executed by the processor of the computer or other programmable data processing device, a device that implements the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams is produced. These computer-readable program instructions can also be stored in a computer-readable storage medium; these instructions make computers, programmable data processing apparatuses, and/or other devices work in a specific manner, so that the computer-readable medium storing the instructions constitutes an article of manufacture that includes instructions for implementing various aspects of the functions/actions specified in one or more blocks of the flowcharts and/or block diagrams.
- each block in the flowcharts or block diagrams may represent a module, program segment, or part of an instruction that contains one or more executable instructions for implementing the specified logical function; in some alternative implementations, the functions marked in the blocks may also occur in a different order from that marked in the drawings. For example, two consecutive blocks can actually be executed substantially in parallel, or sometimes in the reverse order, depending on the functions involved.
- each block in the block diagrams and/or flowcharts, and combinations of blocks in the block diagrams and/or flowcharts, can be implemented by a dedicated hardware-based system that performs the specified functions or actions, or by a combination of dedicated hardware and computer instructions.
Landscapes
- Engineering & Computer Science (AREA)
- Theoretical Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Evolutionary Computation (AREA)
- Multimedia (AREA)
- Software Systems (AREA)
- Artificial Intelligence (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Data Mining & Analysis (AREA)
- Computing Systems (AREA)
- General Health & Medical Sciences (AREA)
- Health & Medical Sciences (AREA)
- Life Sciences & Earth Sciences (AREA)
- Computational Linguistics (AREA)
- General Engineering & Computer Science (AREA)
- Medical Informatics (AREA)
- Databases & Information Systems (AREA)
- Biomedical Technology (AREA)
- Biophysics (AREA)
- Molecular Biology (AREA)
- Mathematical Physics (AREA)
- Bioinformatics & Cheminformatics (AREA)
- Bioinformatics & Computational Biology (AREA)
- Evolutionary Biology (AREA)
- Image Analysis (AREA)
Abstract
Description
Claims (29)
- A video processing method, characterized by comprising: performing feature extraction on multiple target video frames of a video to be processed through a feature extraction network to obtain feature maps of the multiple target video frames; performing action recognition processing on the feature maps of the multiple target video frames through an M-level action recognition network to obtain action recognition features of the multiple target video frames, wherein M is an integer greater than or equal to 1, the action recognition processing includes spatiotemporal feature extraction processing based on the feature maps of the multiple target video frames and motion feature extraction processing based on motion difference information between the feature maps of the multiple target video frames, and the action recognition features include spatiotemporal feature information and motion feature information; and determining a classification result of the video to be processed according to the action recognition features of the multiple target video frames.
- The method according to claim 1, wherein performing action recognition on the feature maps of the multiple target video frames through the M-level action recognition network to obtain the action recognition features of the multiple target video frames comprises: processing the feature maps of the multiple target video frames through a first-level action recognition network to obtain first-level action recognition features; processing the action recognition features of the (i-1)-th level through an i-th level action recognition network to obtain action recognition features of the i-th level, where i is an integer and 1<i<M, and the action recognition features of each level respectively correspond to the feature maps of the multiple target video frames; and processing the action recognition features of the (M-1)-th level through an M-th level action recognition network to obtain the action recognition features of the multiple target video frames.
- The method according to claim 2, wherein processing the action recognition features of the (i-1)-th level through the i-th level action recognition network to obtain the action recognition features of the i-th level comprises: performing first convolution processing on the action recognition features of the (i-1)-th level to obtain first feature information, where the first feature information respectively corresponds to the feature maps of the multiple target video frames; performing spatiotemporal feature extraction processing on the first feature information to obtain spatiotemporal feature information; performing motion feature extraction processing on the first feature information to obtain motion feature information; and obtaining the action recognition features of the i-th level at least according to the spatiotemporal feature information and the motion feature information.
- The method according to claim 3, wherein obtaining the action recognition features of the i-th level at least according to the spatiotemporal feature information and the motion feature information comprises: obtaining the action recognition features of the i-th level according to the spatiotemporal feature information, the motion feature information, and the action recognition features of the (i-1)-th level.
- The method according to claim 3, wherein performing spatiotemporal feature extraction processing on the first feature information to obtain spatiotemporal feature information comprises: performing dimensional reconstruction processing respectively on the first feature information corresponding to the feature maps of the multiple target video frames to obtain second feature information, the second feature information having a different dimension from the first feature information; performing second convolution processing on each channel of the second feature information to obtain third feature information, where the third feature information represents temporal features of the feature maps of the multiple target video frames; performing dimensional reconstruction processing on the third feature information to obtain fourth feature information, the fourth feature information having the same dimension as the first feature information; and performing spatial feature extraction processing on the fourth feature information to obtain the spatiotemporal feature information.
- The method according to claim 5, wherein the first feature information includes a plurality of row vectors or column vectors, and performing dimensional reconstruction processing respectively on the first feature information corresponding to the feature maps of the multiple target video frames comprises: performing splicing processing on the plurality of row vectors or column vectors of the first feature information to obtain the second feature information, where the second feature information includes one row vector or column vector.
- The method according to any one of claims 3 to 6, wherein performing motion feature extraction processing on the first feature information to obtain motion feature information comprises: performing dimensionality reduction processing on the channels of the first feature information to obtain fifth feature information, where the fifth feature information respectively corresponds to each target video frame in the video to be processed; performing third convolution processing on the fifth feature information corresponding to the (k+1)-th target video frame and subtracting the fifth feature information corresponding to the k-th target video frame to obtain sixth feature information corresponding to the k-th target video frame, where k is an integer and 1≤k<T, T is the number of target video frames and is an integer greater than 1, and the sixth feature information represents motion difference information between the fifth feature information corresponding to the (k+1)-th target video frame and the fifth feature information corresponding to the k-th target video frame; and performing feature extraction processing on the sixth feature information corresponding to each target video frame to obtain the motion feature information.
- The method according to any one of claims 4 to 7, wherein obtaining the action recognition features of the i-th level according to the spatiotemporal feature information, the motion feature information, and the action recognition features of the (i-1)-th level comprises: performing summation processing on the spatiotemporal feature information and the motion feature information to obtain seventh feature information; and performing fourth convolution processing on the seventh feature information and performing summation processing with the action recognition features of the (i-1)-th level to obtain the action recognition features of the i-th level.
- The method according to any one of claims 1 to 8, wherein determining the classification result of the video to be processed according to the action recognition features of the multiple target video frames comprises: performing full connection processing respectively on the action recognition features of each target video frame to obtain classification information of each target video frame; and performing averaging processing on the classification information of each target video frame to obtain the classification result of the video to be processed.
- The method according to any one of claims 1 to 9, wherein the method further comprises: determining multiple target video frames from the video to be processed.
- The method according to claim 10, wherein determining multiple target video frames from multiple video frames of the video to be processed comprises: dividing the video to be processed into multiple video segments; and randomly determining at least one target video frame from each video segment to obtain multiple target video frames.
- The method according to any one of claims 1 to 11, wherein the video processing method is implemented by a neural network, the neural network includes at least the feature extraction network and the M-level action recognition network, and the method further comprises: training the neural network through a sample video and a category label of the sample video.
- The method according to claim 12, wherein training the neural network through the sample video and the category label of the sample video comprises: determining multiple sample video frames from the sample video; processing the sample video frames through the neural network to determine a classification result of the sample video; determining a network loss of the neural network according to the classification result and the category label of the sample video; and adjusting network parameters of the neural network according to the network loss.
- A video processing device, characterized by comprising: a feature extraction module configured to perform feature extraction on multiple target video frames of a video to be processed through a feature extraction network to obtain feature maps of the multiple target video frames; an action recognition module configured to perform action recognition processing on the feature maps of the multiple target video frames through an M-level action recognition network to obtain action recognition features of the multiple target video frames, wherein M is an integer greater than or equal to 1, the action recognition processing includes spatiotemporal feature extraction processing based on the feature maps of the multiple target video frames and motion feature extraction processing based on motion difference information between the feature maps of the multiple target video frames, and the action recognition features include spatiotemporal feature information and motion feature information; and a classification module configured to determine a classification result of the video to be processed according to the action recognition features of the multiple target video frames.
- The device according to claim 14, wherein the action recognition module is further configured to: process the feature maps of the multiple target video frames through a first-level action recognition network to obtain first-level action recognition features; process the action recognition features of the (i-1)-th level through an i-th level action recognition network to obtain action recognition features of the i-th level, where i is an integer and 1<i<M, and the action recognition features of each level respectively correspond to the feature maps of the multiple target video frames; and process the action recognition features of the (M-1)-th level through an M-th level action recognition network to obtain the action recognition features of the multiple target video frames.
- The device according to claim 15, wherein the action recognition module is further configured to: perform first convolution processing on the action recognition features of the (i-1)-th level to obtain first feature information, where the first feature information respectively corresponds to the feature maps of the multiple target video frames; perform spatiotemporal feature extraction processing on the first feature information to obtain spatiotemporal feature information; perform motion feature extraction processing on the first feature information to obtain motion feature information; and obtain the action recognition features of the i-th level at least according to the spatiotemporal feature information and the motion feature information.
- The device according to claim 16, wherein the action recognition module is further configured to: obtain the action recognition features of the i-th level according to the spatiotemporal feature information, the motion feature information, and the action recognition features of the (i-1)-th level.
- The device according to claim 16, wherein the action recognition module is further configured to: perform dimensional reconstruction processing respectively on the first feature information corresponding to the feature maps of the multiple target video frames to obtain second feature information, the second feature information having a different dimension from the first feature information; perform second convolution processing on each channel of the second feature information to obtain third feature information, where the third feature information represents temporal features of the feature maps of the multiple target video frames; perform dimensional reconstruction processing on the third feature information to obtain fourth feature information, the fourth feature information having the same dimension as the first feature information; and perform spatial feature extraction processing on the fourth feature information to obtain the spatiotemporal feature information.
- The device according to claim 18, wherein the first feature information includes a plurality of row vectors or column vectors, and the action recognition module is further configured to: perform splicing processing on the plurality of row vectors or column vectors of the first feature information to obtain the second feature information, where the second feature information includes one row vector or column vector.
- The device according to any one of claims 16 to 19, wherein the action recognition module is further configured to: perform dimensionality reduction processing on the channels of the first feature information to obtain fifth feature information, where the fifth feature information respectively corresponds to each target video frame in the video to be processed; perform third convolution processing on the fifth feature information corresponding to the (k+1)-th target video frame and subtract the fifth feature information corresponding to the k-th target video frame to obtain sixth feature information corresponding to the k-th target video frame, where k is an integer and 1≤k<T, T is the number of target video frames and is an integer greater than 1, and the sixth feature information represents motion difference information between the fifth feature information corresponding to the (k+1)-th target video frame and the fifth feature information corresponding to the k-th target video frame; and perform feature extraction processing on the sixth feature information corresponding to each target video frame to obtain the motion feature information.
- The device according to any one of claims 17 to 20, wherein the action recognition module is further configured to: perform summation processing on the spatiotemporal feature information and the motion feature information to obtain seventh feature information; and perform fourth convolution processing on the seventh feature information and perform summation processing with the action recognition features of the (i-1)-th level to obtain the action recognition features of the i-th level.
- The device according to any one of claims 14 to 21, wherein the classification module is further configured to: perform full connection processing respectively on the action recognition features of each target video frame to obtain classification information of each target video frame; and perform averaging processing on the classification information of each target video frame to obtain the classification result of the video to be processed.
- The device according to any one of claims 14 to 22, wherein the device further comprises: a determining module configured to determine multiple target video frames from the video to be processed.
- The device according to claim 23, wherein the determining module is further configured to: divide the video to be processed into multiple video segments; and randomly determine at least one target video frame from each video segment to obtain multiple target video frames.
- The device according to any one of claims 14 to 24, wherein the video processing method is implemented by a neural network, the neural network includes at least the feature extraction network and the M-level action recognition network, and the device further comprises: a training module configured to train the neural network through a sample video and a category label of the sample video.
- The device according to claim 25, wherein the training module is further configured to: determine multiple sample video frames from the sample video; process the sample video frames through the neural network to determine a classification result of the sample video; determine a network loss of the neural network according to the classification result and the category label of the sample video; and adjust network parameters of the neural network according to the network loss.
- An electronic device, characterized by comprising: a processor; and a memory for storing instructions executable by the processor; wherein the processor is configured to invoke the instructions stored in the memory to execute the method according to any one of claims 1 to 13.
- A computer-readable storage medium on which computer program instructions are stored, characterized in that the computer program instructions, when executed by a processor, implement the method according to any one of claims 1 to 13.
- A computer program, comprising computer-readable code, wherein when the computer-readable code runs in an electronic device, a processor in the electronic device executes the method according to any one of claims 1 to 13.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
SG11202011781UA SG11202011781UA (en) | 2019-07-19 | 2019-11-29 | Video processing method, apparatus, electronic device and storage medium |
- KR1020217017839A KR20210090238A (ko) | 2019-07-19 | 2019-11-29 | Video processing method and apparatus, electronic device, and storage medium |
- JP2020571778A JP7090183B2 (ja) | 2019-07-19 | 2019-11-29 | Video processing method and apparatus, electronic device, and storage medium |
US17/126,633 US20210103733A1 (en) | 2019-07-19 | 2020-12-18 | Video processing method, apparatus, and non-transitory computer-readable storage medium |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910656059.9 | 2019-07-19 | ||
- CN201910656059.9A CN112241673B (zh) | 2019-07-19 | 2019-07-19 | Video processing method and apparatus, electronic device and storage medium |
Related Child Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
US17/126,633 Continuation US20210103733A1 (en) | 2019-07-19 | 2020-12-18 | Video processing method, apparatus, and non-transitory computer-readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2021012564A1 true WO2021012564A1 (zh) | 2021-01-28 |
Family
ID=74167666
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
- PCT/CN2019/121975 WO2021012564A1 (zh) | Video processing method and apparatus, electronic device and storage medium | 2019-07-19 | 2019-11-29 |
Country Status (7)
Country | Link |
---|---|
US (1) | US20210103733A1 (zh) |
JP (1) | JP7090183B2 (zh) |
KR (1) | KR20210090238A (zh) |
CN (1) | CN112241673B (zh) |
SG (1) | SG11202011781UA (zh) |
TW (1) | TWI738172B (zh) |
WO (1) | WO2021012564A1 (zh) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- WO2022259575A1 (ja) | 2021-06-08 | 2022-12-15 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | Learning device, inference device, learning method, inference method, and program |
- CN116824641A (zh) | 2023-08-29 | 2023-09-29 | 卡奥斯工业智能研究院(青岛)有限公司 | Posture classification method, apparatus, device and computer storage medium |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- CN112906484B (zh) * | 2021-01-25 | 2023-05-12 | 北京市商汤科技开发有限公司 | Video frame processing method and apparatus, electronic device and storage medium |
- CN112926436A (zh) * | 2021-02-22 | 2021-06-08 | 上海商汤智能科技有限公司 | Behavior recognition method and apparatus, electronic device and storage medium |
- CN113821675B (zh) * | 2021-06-30 | 2024-06-07 | 腾讯科技(北京)有限公司 | Video recognition method and apparatus, electronic device and computer-readable storage medium |
- CN113486763A (zh) * | 2021-06-30 | 2021-10-08 | 上海商汤临港智能科技有限公司 | Method, apparatus, device and medium for recognizing conflict behavior of occupants in a vehicle cabin |
- US11960576B2 (en) * | 2021-07-20 | 2024-04-16 | Inception Institute of Artificial Intelligence Ltd | Activity recognition in dark video based on both audio and video content |
- KR20230056366A (ko) * | 2021-10-20 | 2023-04-27 | 중앙대학교 산학협력단 | Action recognition method using deep learning and apparatus therefor |
- CN114743365A (zh) * | 2022-03-10 | 2022-07-12 | 慧之安信息技术股份有限公司 | Prison intelligent monitoring system and method based on edge computing |
- CN114926761B (zh) * | 2022-05-13 | 2023-09-05 | 浪潮卓数大数据产业发展有限公司 | Action recognition method based on a spatiotemporal smooth feature network |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120314064A1 (en) * | 2011-06-13 | 2012-12-13 | Sony Corporation | Abnormal behavior detecting apparatus and method thereof, and video monitoring system |
CN108681695A (zh) * | 2018-04-26 | 2018-10-19 | 北京市商汤科技开发有限公司 | 视频动作识别方法及装置、电子设备和存储介质 |
CN108875611A (zh) * | 2018-06-05 | 2018-11-23 | 北京字节跳动网络技术有限公司 | 视频动作识别方法和装置 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20070250898A1 (en) * | 2006-03-28 | 2007-10-25 | Object Video, Inc. | Automatic extraction of secondary video streams |
US9202144B2 (en) * | 2013-10-30 | 2015-12-01 | Nec Laboratories America, Inc. | Regionlets with shift invariant neural patterns for object detection |
US10181195B2 (en) * | 2015-12-28 | 2019-01-15 | Facebook, Inc. | Systems and methods for determining optical flow |
US10157309B2 (en) * | 2016-01-14 | 2018-12-18 | Nvidia Corporation | Online detection and classification of dynamic gestures with recurrent convolutional neural networks |
US10497143B2 (en) * | 2016-11-14 | 2019-12-03 | Nec Corporation | Advanced driver-assistance system using accurate object proposals by tracking detections |
CN106650674B (zh) * | 2016-12-27 | 2019-09-10 | 广东顺德中山大学卡内基梅隆大学国际联合研究院 | 一种基于混合池化策略的深度卷积特征的动作识别方法 |
CN107169415B (zh) * | 2017-04-13 | 2019-10-11 | 西安电子科技大学 | 基于卷积神经网络特征编码的人体动作识别方法 |
CN110622169A (zh) * | 2017-05-15 | 2019-12-27 | 渊慧科技有限公司 | 用于视频中的动作识别的神经网络系统 |
CN107273800B (zh) * | 2017-05-17 | 2020-08-14 | 大连理工大学 | 一种基于注意机制的卷积递归神经网络的动作识别方法 |
CN108876813B (zh) * | 2017-11-01 | 2021-01-26 | 北京旷视科技有限公司 | 用于视频中物体检测的图像处理方法、装置及设备 |
CN108960059A (zh) * | 2018-06-01 | 2018-12-07 | 众安信息技术服务有限公司 | 一种视频动作识别方法及装置 |
CN108961317A (zh) * | 2018-07-27 | 2018-12-07 | 阿依瓦(北京)技术有限公司 | 一种视频深度分析的方法与系统 |
CN109376603A (zh) * | 2018-09-25 | 2019-02-22 | 北京周同科技有限公司 | 一种视频识别方法、装置、计算机设备及存储介质 |
CN109446923B (zh) * | 2018-10-10 | 2021-09-24 | 北京理工大学 | 基于训练特征融合的深度监督卷积神经网络行为识别方法 |
CN109800807B (zh) * | 2019-01-18 | 2021-08-31 | 北京市商汤科技开发有限公司 | 分类网络的训练方法及分类方法和装置、电子设备 |
-
2019
- 2019-07-19 CN CN201910656059.9A patent/CN112241673B/zh active Active
- 2019-11-29 JP JP2020571778A patent/JP7090183B2/ja active Active
- 2019-11-29 KR KR1020217017839A patent/KR20210090238A/ko not_active Application Discontinuation
- 2019-11-29 WO PCT/CN2019/121975 patent/WO2021012564A1/zh active Application Filing
- 2019-11-29 SG SG11202011781UA patent/SG11202011781UA/en unknown
-
2020
- 2020-01-07 TW TW109100421A patent/TWI738172B/zh active
- 2020-12-18 US US17/126,633 patent/US20210103733A1/en not_active Abandoned
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20120314064A1 (en) * | 2011-06-13 | 2012-12-13 | Sony Corporation | Abnormal behavior detecting apparatus and method thereof, and video monitoring system |
CN108681695A (zh) * | 2018-04-26 | 2018-10-19 | 北京市商汤科技开发有限公司 | 视频动作识别方法及装置、电子设备和存储介质 |
CN108875611A (zh) * | 2018-06-05 | 2018-11-23 | 北京字节跳动网络技术有限公司 | 视频动作识别方法和装置 |
Non-Patent Citations (1)
Title |
---|
GONG, DINGXI: "Action Recognition Method Based on Sparse Auto-Combination Spatio-Temporal Convolutional Neural Network and Its MapReduce Implementation)", CHINA MASTER'S THESES FULL-TEXT DATABASE, INFORMATION SCIENCE AND TECHNOLOGY, no. 08, 15 August 2014 (2014-08-15), pages 1 - 93, XP055775095 * |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
- WO2022259575A1 (ja) | 2021-06-08 | 2022-12-15 | エヌ・ティ・ティ・コミュニケーションズ株式会社 | Learning device, inference device, learning method, inference method, and program |
- CN116824641A (zh) | 2023-08-29 | 2023-09-29 | 卡奥斯工业智能研究院(青岛)有限公司 | Posture classification method, apparatus, device and computer storage medium |
- CN116824641B (zh) | 2023-08-29 | 2024-01-09 | 卡奥斯工业智能研究院(青岛)有限公司 | Posture classification method, apparatus, device and computer storage medium |
Also Published As
Publication number | Publication date |
---|---|
JP2021536048A (ja) | 2021-12-23 |
JP7090183B2 (ja) | 2022-06-23 |
US20210103733A1 (en) | 2021-04-08 |
CN112241673B (zh) | 2022-11-22 |
TW202105202A (zh) | 2021-02-01 |
KR20210090238A (ko) | 2021-07-19 |
TWI738172B (zh) | 2021-09-01 |
SG11202011781UA (en) | 2021-02-25 |
CN112241673A (zh) | 2021-01-19 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
- WO2021012564A1 (zh) | Video processing method and apparatus, electronic device and storage medium | |
- WO2020224457A1 (zh) | Image processing method and apparatus, electronic device and storage medium | |
- CN110348537B (zh) | Image processing method and apparatus, electronic device and storage medium | |
- CN110659640B (zh) | Text sequence recognition method and apparatus, electronic device and storage medium | |
- WO2021196401A1 (zh) | Image reconstruction method and apparatus, electronic device and storage medium | |
- US20210019562A1 (en) | Image processing method and apparatus and storage medium | |
- TWI773945B (zh) | Anchor point determination method, electronic device and storage medium | |
- US20210248718A1 (en) | Image processing method and apparatus, electronic device and storage medium | |
- WO2020199704A1 (zh) | Text recognition | |
- WO2021017358A1 (zh) | Pose determination method and apparatus, electronic device and storage medium | |
- WO2021139120A1 (zh) | Network training method and apparatus, and image generation method and apparatus | |
- CN109934275B (zh) | Image processing method and apparatus, electronic device and storage medium | |
- CN110532956B (zh) | Image processing method and apparatus, electronic device and storage medium | |
- CN109977860B (zh) | Image processing method and apparatus, electronic device and storage medium | |
- CN109145970B (zh) | Image-based question answering processing method and apparatus, electronic device and storage medium | |
- CN111242303A (zh) | Network training method and apparatus, and image processing method and apparatus | |
- CN111582383A (zh) | Attribute recognition method and apparatus, electronic device and storage medium | |
- CN111259967A (zh) | Image classification and neural network training method, apparatus, device and storage medium | |
- WO2022247091A1 (zh) | Crowd positioning method and apparatus, electronic device and storage medium | |
- CN111988622B (zh) | Video prediction method and apparatus, electronic device and storage medium | |
- CN111311588A (zh) | Relocation method and apparatus, electronic device and storage medium | |
- CN115223018A (zh) | Camouflaged object collaborative detection method and apparatus, electronic device and storage medium | |
- CN114973359A (zh) | Expression recognition method and apparatus, electronic device and storage medium | |
- CN112801116A (zh) | Image feature extraction method and apparatus, electronic device and storage medium | |
- CN113297983A (zh) | Crowd positioning method and apparatus, electronic device and storage medium | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
ENP | Entry into the national phase |
Ref document number: 2020571778 Country of ref document: JP Kind code of ref document: A |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 19938122 Country of ref document: EP Kind code of ref document: A1 |
|
ENP | Entry into the national phase |
Ref document number: 20217017839 Country of ref document: KR Kind code of ref document: A |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19938122 Country of ref document: EP Kind code of ref document: A1 |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19938122 Country of ref document: EP Kind code of ref document: A1 |
|
32PN | Ep: public notification in the ep bulletin as address of the adressee cannot be established |
Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 112(1) EPC (EPO FORM 1205 DATED 24.10.2022) |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 19938122 Country of ref document: EP Kind code of ref document: A1 |