WO2023018423A1 - Learning semantic binary embedding for video representations - Google Patents


Info

Publication number
WO2023018423A1
Authority
WO
WIPO (PCT)
Prior art keywords
video
clip
embedding
descriptors
model
Prior art date
Application number
PCT/US2021/046010
Other languages
French (fr)
Inventor
Jenhao Hsiao
Original Assignee
Innopeak Technology, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Innopeak Technology, Inc. filed Critical Innopeak Technology, Inc.
Priority to PCT/US2021/046010 priority Critical patent/WO2023018423A1/en
Publication of WO2023018423A1 publication Critical patent/WO2023018423A1/en

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/503 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding involving temporal prediction
    • H04N19/51 Motion estimation or motion compensation
    • H04N19/577 Motion compensation with bidirectional frame interpolation, i.e. using B-pictures
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/044 Recurrent networks, e.g. Hopfield networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G06N3/0455 Auto-encoder networks; Encoder-decoder networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/0464 Convolutional networks [CNN, ConvNet]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/08 Learning methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/44 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components
    • G06V10/443 Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; Connectivity analysis, e.g. of connected components by matching or filtering
    • G06V10/449 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters
    • G06V10/451 Biologically inspired filters, e.g. difference of Gaussians [DoG] or Gabor filters with interaction between the filter responses, e.g. cortical complex cells
    • G06V10/454 Integrating the filters into a hierarchical structure, e.g. convolutional neural networks [CNN]
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/764 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using classification, e.g. of video objects
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/70 Arrangements for image or video recognition or understanding using pattern recognition or machine learning
    • G06V10/82 Arrangements for image or video recognition or understanding using pattern recognition or machine learning using neural networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Definitions

  • This application relates generally to data processing technology including, but not limited to, methods, systems, and non-transitory computer-readable media for generating video embeddings that represent video content.
  • Personal photo albums and online content sharing platforms contain a large number of multimedia content items that are often associated with content labels describing the content items.
  • The content items can be classified, retrieved, searched, sorted, or recommended efficiently using these content labels, thereby facilitating understanding, organization, and search of video content in many applications and making the video content more accessible to users.
  • Video embeddings that incorporate rich visual knowledge can be established for video content, thereby facilitating a variety of downstream tasks such as video classification and search.
  • Video embeddings are generated to have substantially small embedding sizes while keeping rich visual, semantic, and relational knowledge of the video content. Specifically, the video embeddings take into account bi-directional relationships among video segments of the video content and are established based on binary representations of the video content.
  • Such video embeddings result from a method for generating video embeddings that is more accurate and efficient than current practice, which relies on learning general-purpose video representations from large collections of training videos, often neglects structure information of videos (e.g., inter-clip relationships of videos), and requires a large amount of computation and storage resources for real-number-based operations.
  • The video embeddings, having substantially small embedding sizes, can be easily deployed in real-world products and mobile devices that have limited resources.
  • A method is implemented at an electronic device for generating video embeddings.
  • The method includes obtaining video content including a plurality of successive video segments, and generating (e.g., by one or more convolutional neural networks) a plurality of clip descriptors for the plurality of successive video segments of the video content using a clip feature extraction model.
  • Each of the plurality of video segments corresponds to a respective one of the plurality of clip descriptors.
  • The method further includes fusing the plurality of clip descriptors using a bi-directional attention network to generate a plurality of global descriptors.
  • Each of the plurality of clip descriptors corresponds to a respective one of the plurality of global descriptors.
  • The method further includes pooling the plurality of global descriptors to a video embedding (e.g., a floating vector, a continuous embedding vector) for the video content using an adaptive pooling model and converting the video embedding to a binary representation of the video content using an encoder.
  • The binary representation includes a plurality of elements, and each of the plurality of elements is an integer number in a predefined binary range.
  • The binary representation of the video content retains semantic information of the video embedding.
  • Some implementations include an electronic device that includes one or more processors and memory having instructions stored thereon, which when executed by the one or more processors cause the processors to perform any of the above methods.
  • Some implementations include a non-transitory computer-readable medium, having instructions stored thereon, which when executed by one or more processors cause the processors to perform any of the above methods.
  • Figure 1 is an example data processing environment having one or more servers communicatively coupled to one or more client devices, in accordance with some embodiments.
  • Figure 2 is a block diagram illustrating a data processing system, in accordance with some embodiments.
  • Figure 3 is an example data processing environment for training and applying a neural network based (NN-based) data processing model for processing visual and/or audio data, in accordance with some embodiments.
  • Figure 4A is an example neural network (NN) applied to process content data in an NN-based data processing model, in accordance with some embodiments.
  • Figure 4B is an example node in the neural network (NN), in accordance with some embodiments.
  • Figure 5 is a diagram of video content that includes a plurality of video frames that are grouped into video segments, in accordance with some embodiments.
  • Figure 6A is a block diagram of a deep learning model that includes neural networks for generating video embeddings and is modified for training, in accordance with some embodiments.
  • Figure 6B is a block diagram of a bi-directional self-attention neural network of the deep learning model shown in Figure 6A, in accordance with some embodiments.
  • Figure 6C is a block diagram of a deep learning model configured to output a plurality of losses during a training process, in accordance with some embodiments.
  • Figure 7 is a flowchart of a method for generating video embeddings, in accordance with some embodiments.
  • Learned video embeddings can be useful in facilitating a variety of downstream tasks, such as video search, classification, and organization.
  • Existing methods treat videos as an aggregation of independent frames that can be indexed and combined, thus neglecting information regarding the structure of the video, such as global temporal consistency and inter-clip relationships.
  • Moreover, current methods often assume that video embeddings are real-valued and continuous, which makes generating such video embeddings computationally expensive and requires a large storage or memory footprint to store the generated video embeddings. These disadvantages hinder the use of video embeddings in real-world products, such as deployment in mobile user devices (e.g., smart phones and tablets).
  • This application is directed to generating video embeddings for video content.
  • The generated video embeddings have substantially small embedding sizes while keeping rich visual, semantic, and relational knowledge of the video content.
  • The method described herein presents a technical solution to maintaining the visual, semantic, and relational knowledge of the video content by taking into account bi-directional relationships among video segments of the video content, and it provides a technical solution to generating substantially small embedding sizes by establishing the video embeddings based on binary representations of the video content.
  • FIG. 1 is an example data processing environment 100 having one or more servers 102 communicatively coupled to one or more client devices 104, in accordance with some embodiments.
  • The one or more client devices 104 may be, for example, desktop computers 104A, tablet computers 104B, mobile phones 104C, or intelligent, multi-sensing, network-connected home devices (e.g., a camera).
  • Each client device 104 can collect data or user inputs, execute user applications, and present outputs on its user interface.
  • The collected data or user inputs can be processed locally (e.g., for training and/or for prediction) at the client device 104 and/or remotely by the server(s) 102.
  • The one or more servers 102 provide system data (e.g., boot files, operating system images, and user applications) to the client devices 104, and, in some embodiments, process the data and user inputs received from the client device(s) 104 when the user applications are executed on the client devices 104.
  • The data processing environment 100 further includes a storage 106 for storing data related to the servers 102, client devices 104, and applications executed on the client devices 104.
  • Storage 106 may store video content for training a machine learning model (e.g., a deep learning network) and/or video content obtained by a user to which a trained machine learning model can be applied to determine one or more actions associated with the video content.
  • The one or more servers 102 can enable real-time data communication with the client devices 104 that are remote from each other or from the one or more servers 102. Further, in some embodiments, the one or more servers 102 can implement data processing tasks that cannot be or are preferably not completed locally by the client devices 104.
  • In some embodiments, the client devices 104 include a game console that executes an interactive online gaming application. The game console receives a user instruction and sends it to a game server 102 with user data. The game server 102 generates a stream of video data based on the user instruction and user data and provides the stream of video data for display on the game console and other client devices that are engaged in the same game session with the game console.
  • In some embodiments, the client devices 104 include a networked surveillance camera and a mobile phone 104C.
  • The networked surveillance camera collects video data and streams the video data to a surveillance camera server 102 in real time. While the video data is optionally pre-processed on the surveillance camera, the surveillance camera server 102 processes the video data to identify motion or audio events in the video data and shares information of these events with the mobile phone 104C, thereby allowing a user of the mobile phone 104C to monitor the events occurring near the networked surveillance camera remotely and in real time.
  • The one or more servers 102, one or more client devices 104, and storage 106 are communicatively coupled to each other via one or more communication networks 108, which are the medium used to provide communications links between these devices and computers connected together within the data processing environment 100.
  • The one or more communication networks 108 may include connections, such as wire, wireless communication links, or fiber optic cables. Examples of the one or more communication networks 108 include local area networks (LAN), wide area networks (WAN) such as the Internet, or a combination thereof.
  • The one or more communication networks 108 are, optionally, implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Long Term Evolution (LTE), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol.
  • A connection to the one or more communication networks 108 may be established either directly (e.g., using 3G/4G connectivity to a wireless carrier), or through a network interface 110 (e.g., a router, switch, gateway, hub, or an intelligent, dedicated whole-home control node), or through any combination thereof.
  • The one or more communication networks 108 can represent the Internet of a worldwide collection of networks and gateways that use the Transmission Control Protocol/Internet Protocol (TCP/IP) suite of protocols to communicate with one another.
  • At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages.
  • Deep learning techniques are applied in the data processing environment 100 to process content data (e.g., video data, visual data, audio data) obtained by an application executed at a client device 104 to identify information contained in the content data, match the content data with other data, categorize the content data, or synthesize related content data.
  • Data processing models are created based on one or more neural networks to process the content data. These data processing models are trained with training data before they are applied to process the content data.
  • In some embodiments, both model training and data processing are implemented locally at each individual client device 104 (e.g., the client device 104C).
  • The client device 104C obtains the training data from the one or more servers 102 or storage 106 and applies the training data to train the data processing models.
  • The client device 104C obtains the content data (e.g., captures video data via an internal camera) and processes the content data using the trained data processing models locally.
  • In some embodiments, both model training and data processing are implemented remotely at a server 102 (e.g., the server 102A) associated with a client device 104 (e.g., the client device 104A).
  • The server 102A obtains the training data from itself, another server 102, or the storage 106 and applies the training data to train the data processing models.
  • The client device 104A obtains the content data, sends the content data to the server 102A (e.g., in an application) for data processing using the trained data processing models, receives data processing results from the server 102A, and presents the results on a user interface (e.g., associated with the application).
  • The client device 104A itself implements no or little data processing on the content data prior to sending them to the server 102A.
  • In some embodiments, data processing is implemented locally at a client device 104 (e.g., the client device 104B), while model training is implemented remotely at a server 102 (e.g., the server 102B) associated with the client device 104B.
  • The server 102B obtains the training data from itself, another server 102, or the storage 106 and applies the training data to train the data processing models.
  • The trained data processing models are optionally stored in the server 102B or storage 106.
  • The client device 104B imports the trained data processing models from the server 102B or storage 106, processes the content data using the data processing models, and generates data processing results to be presented on a user interface locally.
  • FIG. 2 is a block diagram illustrating a data processing system 200, in accordance with some embodiments.
  • The data processing system 200 includes a server 102, a client device 104, a storage 106, or a combination thereof.
  • The data processing system 200 typically includes one or more processing units (CPUs) 202, one or more network interfaces 204, memory 206, and one or more communication buses 208 for interconnecting these components (sometimes called a chipset).
  • The data processing system 200 includes one or more input devices 210 that facilitate user input, such as a keyboard, a mouse, a voice-command input unit or microphone, a touch screen display, a touch-sensitive input pad, a gesture capturing camera, or other input buttons or controls.
  • The client device 104 of the data processing system 200 uses a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard.
  • The client device 104 includes one or more cameras, scanners, or photo sensor units for capturing images, for example, of graphic serial codes printed on the electronic devices.
  • The data processing system 200 also includes one or more output devices 212 that enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays.
  • The client device 104 includes a location detection device, such as a GPS (global positioning satellite) or other geo-location receiver, for determining the location of the client device 104.
  • Memory 206 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 206, optionally, includes one or more storage devices remotely located from one or more processing units 202. Memory 206, or alternatively the non-volatile memory within memory 206, includes a non-transitory computer readable storage medium. In some embodiments, memory 206, or the non-transitory computer readable storage medium of memory 206, stores the following programs, modules, and data structures, or a subset or superset thereof:
  • Operating system 214 including procedures for handling various basic system services and for performing hardware dependent tasks;
  • Network communication module 216 for connecting each server 102 or client device 104 to other devices (e.g., server 102, client device 104, or storage 106) via one or more network interfaces 204 (wired or wireless) and one or more communication networks 108, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on;
  • User interface module 218 for enabling presentation of information (e.g., a graphical user interface for application(s) 224, widgets, websites and web pages thereof, and/or games, audio and/or video content, text, etc.) at each client device 104 via one or more output devices 212 (e.g., displays, speakers, etc.);
  • Input processing module 220 for detecting one or more user inputs or interactions from one of the one or more input devices 210 and interpreting the detected input or interaction;
  • Web browser module 222 for navigating, requesting (e.g., via HTTP), and displaying websites and web pages thereof, including a web interface for logging into a user account associated with a client device 104 or another electronic device, controlling the client or electronic device if associated with the user account, and editing and reviewing settings and data that are associated with the user account;
  • One or more user applications 224 for execution by the data processing system 200 (e.g., games, social network applications, smart home applications, and/or other web or non-web based applications for controlling another electronic device and reviewing data captured by such devices);
  • Model training module 226 for receiving training data (e.g., training data 238) and establishing a data processing model (e.g., data processing module 228) for processing content data (e.g., video data, visual data, audio data) to be collected or obtained by a client device 104;
  • Data processing module 228 for processing content data using data processing models 240, thereby identifying information contained in the content data, matching the content data with other data, categorizing the content data, or synthesizing related content data, where in some embodiments, the data processing module 228 is associated with one of the user applications 224 to process the content data in response to a user instruction received from the user application 224;
  • One or more databases 230 for storing at least data including one or more of:
    o Device settings 232 including common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.) of the one or more servers 102 or client devices 104;
    o User account information 234 for the one or more user applications 224, e.g., user names, security questions, account history data, user preferences, and predefined account settings;
    o Network parameters 236 for the one or more communication networks 108, e.g., IP address, subnet mask, default gateway, DNS server, and host name;
    o Training data 238 for training one or more data processing models 240;
    o Data processing model(s) 240 for processing content data (e.g., video data, visual data, audio data) using deep learning techniques; and
    o Content data and results 242 that are obtained by and outputted to the client device 104 of the data processing system 200, respectively, where the content data is processed by the data processing models 240 locally at the client device 104 or remotely at the server 102.
  • In some embodiments, the one or more databases 230 are stored in one of the server 102, client device 104, and storage 106 of the data processing system 200.
  • In some embodiments, the one or more databases 230 are distributed in more than one of the server 102, client device 104, and storage 106 of the data processing system 200.
  • More than one copy of the above data may be stored at distinct devices, e.g., two copies of the data processing models 240 are stored at the server 102 and storage 106, respectively.
  • Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above.
  • Memory 206 optionally stores a subset of the modules and data structures identified above.
  • Memory 206 optionally stores additional modules and data structures not described above.
  • FIG. 3 is another example data processing system 300 for training and applying a neural network based (NN-based) data processing model 240 for processing content data (e.g., video data, visual data, audio data), in accordance with some embodiments.
  • The data processing system 300 includes a model training module 226 for establishing the data processing model 240 and a data processing module 228 for processing the content data using the data processing model 240.
  • In some embodiments, both of the model training module 226 and the data processing module 228 are located on a client device 104 of the data processing system 300, while a training data source 304 distinct from the client device 104 provides training data 306 to the client device 104.
  • The training data source 304 is optionally a server 102 or storage 106.
  • In some embodiments, both of the model training module 226 and the data processing module 228 are located on a server 102 of the data processing system 300.
  • The training data source 304 providing the training data 306 is optionally the server 102 itself, another server 102, or the storage 106.
  • In some embodiments, the model training module 226 and the data processing module 228 are separately located on a server 102 and a client device 104, and the server 102 provides the trained data processing model 240 to the client device 104.
  • The model training module 226 includes one or more data pre-processing modules 308, a model training engine 310, and a loss control module 312.
  • The data processing model 240 is trained according to a type of the content data to be processed.
  • The training data 306 is consistent with the type of the content data, and a data pre-processing module 308 is applied to process the training data 306 consistently with the type of the content data.
  • A video pre-processing module 308 is configured to process video training data 306 to a predefined image format, e.g., group frames (e.g., video frames, visual frames) of the video content into video segments.
  • The video pre-processing module 308 may also extract a region of interest (ROI) in each frame or separate a frame into foreground and background components, and crop each frame to a predefined image size.
  • The model training engine 310 receives pre-processed training data provided by the data pre-processing module(s) 308, further processes the pre-processed training data using an existing data processing model 240, and generates an output from each training data item.
  • The loss control module 312 can monitor a loss function comparing the output associated with the respective training data item and a ground truth of the respective training data item.
  • The model training engine 310 modifies the data processing model 240 to reduce the loss function, until the loss function satisfies a loss criterion (e.g., a comparison result of the loss function is minimized or reduced below a loss threshold).
  • The modified data processing model 240 is provided to the data processing module 228 to process the content data.
  • In some embodiments, the model training module 226 offers supervised learning in which the training data is entirely labelled and includes a desired output for each training data item (also called the ground truth in some situations). Conversely, in some embodiments, the model training module 226 offers unsupervised learning in which the training data are not labelled. The model training module 226 is configured to identify previously undetected patterns in the training data without pre-existing labels and with no or little human supervision. Additionally, in some embodiments, the model training module 226 offers partially supervised learning in which the training data are partially labelled.
  • The data processing module 228 includes one or more data pre-processing modules 314, a model-based processing module 316, and a data post-processing module 318.
  • The data pre-processing modules 314 pre-process the content data based on the type of the content data. Functions of the data pre-processing modules 314 are consistent with those of the pre-processing modules 308 and convert the content data to a predefined content format that is acceptable by inputs of the model-based processing module 316.
  • Examples of the content data include one or more of: video data, visual data (e.g., image data), audio data, textual data, and other types of data. For example, each video is pre-processed to group frames in the video into video segments.
  • The model-based processing module 316 applies the trained data processing model 240 provided by the model training module 226 to process the pre-processed content data.
  • The model-based processing module 316 can also monitor an error indicator to determine whether the content data has been properly processed in the data processing model 240.
  • The processed content data is further processed by the data post-processing module 318 to present the processed content data in a preferred format or to provide other related information that can be derived from the processed content data.
  • Figure 4A is an example neural network (NN) 400 applied to process content data in an NN-based data processing model 240, in accordance with some embodiments.
  • Figure 4B is an example node 420 in the neural network 400, in accordance with some embodiments.
  • The data processing model 240 is established based on the neural network 400.
  • A corresponding model-based processing module 316 applies the data processing model 240 including the neural network 400 to process content data that has been converted to a predefined content format.
  • The neural network 400 includes a collection of nodes 420 that are connected by links 412. Each node 420 receives one or more node inputs and applies a propagation function to generate a node output from the one or more node inputs.
  • The node output is provided via one or more links 412 to one or more other nodes 420.
  • A weight w associated with each link 412 is applied to the node output.
  • The one or more node inputs are combined based on corresponding weights w1, w2, w3, and w4 according to the propagation function.
  • The propagation function is a product of a non-linear activation function and a linear weighted combination of the one or more node inputs.
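  • As an illustrative, non-limiting sketch of the propagation described above, the output of a single node can be computed as an activation function applied to the weighted combination of its inputs plus a bias; the sigmoid activation, the function name, and the numeric values below are assumptions chosen purely for illustration.

```python
import math

def node_output(inputs, weights, bias):
    """Compute one node's output: activation(sum_i w_i * x_i + b).

    A sigmoid activation is assumed here; as noted below, the activation may
    instead be linear, a rectified linear unit, hyperbolic tangent, etc.
    """
    z = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-z))  # sigmoid activation

# Example: four node inputs combined with weights w1..w4 and a bias b.
print(node_output([0.2, -0.5, 1.0, 0.3], [0.4, 0.1, -0.7, 0.9], bias=0.05))
```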
  • The collection of nodes 420 is organized into one or more layers in the neural network 400.
  • In some embodiments, the one or more layers include a single layer acting as both an input layer and an output layer.
  • In some embodiments, the one or more layers include an input layer 402 for receiving inputs, an output layer 406 for providing outputs, and zero or more hidden layers 404 (e.g., 404A and 404B) between the input layer 402 and the output layer 406.
  • A deep neural network has more than one hidden layer 404 between the input layer 402 and the output layer 406.
  • Each layer may only be connected with its immediately preceding and/or immediately following layer.
  • A layer 402 or 404B is a fully connected neural network layer because each node 420 in the layer 402 or 404B is connected to every node 420 in its immediately following layer.
  • One of the one or more hidden layers 404 includes two or more nodes that are connected to the same node in its immediately following layer for down-sampling or pooling the nodes 420 between these two layers.
  • Max pooling uses a maximum value of the two or more nodes in the layer 404B for generating the node of the immediately following layer 406 connected to the two or more nodes.
  • A convolutional neural network (CNN) is applied in a data processing model 240 to process content data (particularly, video and image data).
  • The CNN employs convolution operations and belongs to a class of deep neural networks 400, i.e., a feedforward neural network that moves data forward from the input layer 402 through the hidden layers to the output layer 406.
  • The one or more hidden layers of the CNN are convolutional layers convolving with a multiplication or dot product.
  • Each node in a convolutional layer receives inputs from a receptive area associated with a previous layer (e.g., five nodes), and the receptive area is smaller than the entire previous layer and may vary based on a location of the convolution layer in the convolutional neural network.
  • Video or image data is pre-processed to a predefined video/image format corresponding to the inputs of the CNN.
  • The pre-processed video or image data is abstracted by each layer of the CNN to a respective feature map.
  • A recurrent neural network (RNN) is applied in the data processing model 240 to process content data (particularly, visual data and audio data).
  • Nodes in successive layers of the RNN follow a temporal sequence, such that the RNN exhibits a temporal dynamic behavior.
  • Each node 420 of the RNN has a time-varying real-valued activation.
  • Examples of the RNN include, but are not limited to, a long short-term memory (LSTM) network, a fully recurrent network, an Elman network, a Jordan network, a Hopfield network, a bidirectional associative memory (BAM) network, an echo state network, an independently RNN (IndRNN), a recursive neural network, and a neural history compressor.
  • The RNN can be used, for example, for handwriting recognition.
  • The training process is a process for calibrating all of the weights w for each layer of the learning model using a training data set that is provided in the input layer 402.
  • The training process typically includes two steps, forward propagation and backward propagation, which are repeated multiple times until a predefined convergence condition is satisfied.
  • In forward propagation, the set of weights for different layers are applied to the input data and intermediate results from the previous layers.
  • In backward propagation, a margin of error of the output (e.g., a loss function) is measured, and the weights are adjusted accordingly to decrease the error.
  • The activation function is optionally linear, rectified linear unit, sigmoid, hyperbolic tangent, or of other types.
  • A network bias term b is added to the sum of the weighted outputs from the previous layer before the activation function is applied.
  • The network bias b provides a perturbation that helps the neural network 400 avoid overfitting the training data.
  • The result of the training includes the network bias parameter b for each layer.
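  • As an illustrative, non-limiting sketch of the forward-propagation/backward-propagation cycle described above, the following uses PyTorch (an assumption; any comparable framework works), with a toy fully connected network, random data, cross-entropy loss, and stochastic gradient descent chosen purely for illustration.

```python
import torch
from torch import nn

# A toy fully connected network standing in for neural network 400.
model = nn.Sequential(nn.Linear(8, 16), nn.ReLU(), nn.Linear(16, 4))
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
loss_fn = nn.CrossEntropyLoss()

x = torch.randn(32, 8)            # a batch of training inputs
y = torch.randint(0, 4, (32,))    # ground-truth labels

for _ in range(100):              # repeat until a convergence condition is met
    logits = model(x)             # forward propagation through all layers
    loss = loss_fn(logits, y)     # margin of error of the output
    optimizer.zero_grad()
    loss.backward()               # backward propagation of the error
    optimizer.step()              # adjust the weights (and biases) to reduce the loss
```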
  • FIG. 5 is a diagram of video content 500 that includes a plurality of successive video frames 512 grouped into video segments 502 (also called video clips 502), in accordance with some embodiments.
  • The video frames 512 of the video content 500 are grouped into n number of video segments 502 (e.g., video segments 502-1 to 502-n).
  • The video frames 512 are grouped such that each video segment 502 includes a respective number of successive video frames 512.
  • The respective numbers are optionally constant or distinct from each other among the video segments 502.
  • In some embodiments, two consecutive video segments 502 overlap and share one or more video frames 512.
  • In some embodiments, two consecutive video segments 502 are separated by no video frames 512 or by a limited number of frames 512.
  • For example, each video segment 502 has 16 successive video frames 512 (e.g., video frames 512-1 through 512-16).
  • The video content 500 includes n number of video segments 502, i.e., at least 16 times n video frames 512.
  • Each video segment 502 corresponds to a predetermined time duration that is equal to 16 times of a time duration of each video frame 512. For example, when the video content 500 has a video frame rate of 60 frames per second (fps), the frame time of each video frame 512 is equal to 16.7 milliseconds, and the predetermined time duration corresponding to each video segment 502 is approximately equal to 267 milliseconds.
  • The video frames 512 of the video content 500 are grouped into the video segments 502, and the video segments 502 are provided to a trained deep learning model as input data to determine one or more video embeddings for the video content 500.
  • A video segment is also called a "video clip."
  • Clip-based feature elements are generated for each of the video segments 502.
  • The clip-based feature elements are fused to generate global descriptors, and the global descriptors are further pooled to determine a video embedding of the video content 500.
  • The video embedding is compressed into a binary representation that retains semantic information of the video embedding while having a smaller size than the video embedding.
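  • As an illustrative, non-limiting sketch, the frame grouping and timing arithmetic described above can be expressed as follows; the non-overlapping 16-frame segments, the 60 fps frame rate, and the function name are example choices, not requirements.

```python
def group_frames_into_segments(frames, frames_per_segment=16):
    """Group successive video frames into non-overlapping segments (clips)."""
    return [frames[i:i + frames_per_segment]
            for i in range(0, len(frames) - frames_per_segment + 1, frames_per_segment)]

frames = list(range(96))                       # stand-in for 96 decoded video frames
segments = group_frames_into_segments(frames)  # 6 segments of 16 frames each

fps = 60.0
frame_time_ms = 1000.0 / fps                   # ~16.7 ms per frame at 60 fps
segment_duration_ms = 16 * frame_time_ms       # ~267 ms per 16-frame segment
print(len(segments), frame_time_ms, segment_duration_ms)
```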
  • FIG. 6A is a block diagram of a deep learning model 600 that includes neural networks for generating video embeddings, in accordance with some embodiments.
  • The deep learning model 600 includes at least a clip feature extraction model 650, a bidirectional self-attention model 654, an adaptive pooling model 656, and an encoder 658.
  • Each of the models 650 and 654 includes one or more respective neural networks.
  • The adaptive pooling model 656 optionally includes a fully connected neural network layer.
  • The encoder 658 is an encoder network that includes one or more neural networks and a fully connected encoding layer that is also part of an encoding-decoding model.
  • Figure 6B is a block diagram of an example bidirectional self-attention model 654 of the deep learning model 600 shown in Figure 6A, in accordance with some embodiments.
  • Figure 6C is a block diagram of a deep learning model 600 configured to output a plurality of losses during a training process, in accordance with some embodiments.
  • The deep learning model 600 receives video content 500 that includes a plurality of video segments 502 (e.g., video segments 502-1 through 502-m).
  • The video segments are grouped from video frames of the video content 500 and provided to the deep learning model 600 (e.g., to the clip feature extraction model 650) as inputs.
  • The video frames of the video content 500 are grouped into m number of video segments 502.
  • The clip feature extraction model 650 receives the video segments 502 and generates (e.g., outputs, determines, provides) a respective clip descriptor 612 for each video segment 502.
  • The respective clip descriptor 612 includes a feature vector that includes a plurality of feature elements.
  • In response to receiving m number of video segments 502, the clip feature extraction model 650 outputs m number of clip descriptors 612.
  • The clip feature extraction model 650 includes a plurality of three-dimensional (3D) convolutional neural networks (3D CNNs). For example, a first 3D CNN of the clip feature extraction model 650 receives a video segment 502-1 as an input and generates a clip descriptor 612-1 as an output corresponding to the video segment 502-1, and a second 3D CNN of the clip feature extraction model 650 receives a video segment 502-2 as an input and generates a clip descriptor 612-2 as an output corresponding to the video segment 502-2.
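  • As an illustrative, non-limiting sketch, the per-clip feature extraction can be realized with an off-the-shelf 3D CNN backbone; the use of torchvision's r3d_18, the 512-dimensional descriptor, the input resolution, and the class name are assumptions, not details mandated by this application.

```python
import torch
from torch import nn
from torchvision.models.video import r3d_18  # an off-the-shelf 3D CNN backbone

class ClipFeatureExtractor(nn.Module):
    """Maps each video segment (clip) to a clip descriptor (a feature vector)."""

    def __init__(self):
        super().__init__()
        backbone = r3d_18(weights=None)
        # Drop the final classification layer so the model emits features.
        self.backbone = nn.Sequential(*list(backbone.children())[:-1])
        self.out_dim = backbone.fc.in_features  # descriptor dimension D (512 here)

    def forward(self, clips):
        # clips: (num_clips, channels, frames, height, width), e.g. (m, 3, 16, 112, 112)
        feats = self.backbone(clips)             # (m, D, 1, 1, 1) after global pooling
        return feats.flatten(1)                  # m clip descriptors of dimension D

extractor = ClipFeatureExtractor()
clips = torch.randn(4, 3, 16, 112, 112)          # m = 4 segments of 16 frames each
descriptors = extractor(clips)                   # shape (4, 512)
```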
  • Each clip descriptor 612 generated by the clip feature extraction model 650 is provided as an input to a positional encoder 652.
  • The positional encoder 652 encodes each clip descriptor 612 with information regarding its temporal position relative to the other clip descriptors 612 to form a position encoded clip descriptor 614 (e.g., position encoded clip descriptors 614-1 through 614-m).
  • A position encoded clip descriptor 614-2 is a combination of information from the clip descriptor 612-2 and positional information indicating a temporal position of the video segment 502-2 to which the encoded clip descriptor 614-2 corresponds.
  • The positional information indicates that the video segment 502-2 is temporally positioned between the video segments 502-1 and 502-3, and the clip descriptor 612-2 is also temporally positioned between the clip descriptors 612-1 and 612-3 to which the video segments 502-1 and 502-3 correspond, respectively.
  • The positional encoder 652 encodes each clip descriptor 612 with information regarding a relative or absolute temporal position of the corresponding video segment 502 in the video content 500.
  • A positional encoding (PE) term is defined as PE(pos, 2i) = sin(pos / 10000^(2i/D)) and PE(pos, 2i+1) = cos(pos / 10000^(2i/D)), where pos is a temporal position of a corresponding video segment 502, 2i denotes a 2i-th element in a pos-th clip descriptor 612, 2i+1 denotes a (2i+1)-th element in the pos-th clip descriptor 612, and D is a dimension of a clip descriptor 612.
  • Each clip descriptor 612 has the same dimension as the respective PE term, i.e., includes D feature elements.
  • The respective PE is superimposed with each feature element v of the respective clip descriptor 612 to form a respective position-encoded clip descriptor 614 (also called order-encoded vector o) as follows: o = v + PE. (3)
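  • As an illustrative, non-limiting sketch of the sinusoidal positional encoding and its superposition onto the clip descriptors described above; the descriptor dimension, clip count, and function name are example choices.

```python
import torch

def positional_encoding(num_positions, dim):
    """Sinusoidal PE: PE(pos, 2i) = sin(pos / 10000^(2i/D)),
    PE(pos, 2i+1) = cos(pos / 10000^(2i/D)). Assumes an even dimension D."""
    pos = torch.arange(num_positions, dtype=torch.float32).unsqueeze(1)  # (m, 1)
    i = torch.arange(0, dim, 2, dtype=torch.float32)                     # even indices 2i
    angle = pos / torch.pow(10000.0, i / dim)                            # (m, D/2)
    pe = torch.zeros(num_positions, dim)
    pe[:, 0::2] = torch.sin(angle)
    pe[:, 1::2] = torch.cos(angle)
    return pe

# Superimpose the PE on the clip descriptors to obtain order-encoded vectors o = v + PE.
descriptors = torch.randn(4, 512)            # m = 4 clip descriptors of dimension D = 512
order_encoded = descriptors + positional_encoding(4, 512)
```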
  • Each of the position encoded clip descriptors 614 is provided to the bidirectional self-attention model 654.
  • The bidirectional self-attention model 654 receives the position encoded clip descriptors 614 (e.g., position encoded clip descriptors 614-1 through 614-m) as inputs and bidirectionally fuses the position encoded clip descriptors 614 to form a plurality of global descriptors 616 (e.g., a plurality of global descriptors 616-1 through 616-m).
  • The global descriptors 616 form a vector in which each element corresponds to a particular global descriptor 616, e.g., g1 corresponds to global descriptor 616-1, g2 corresponds to global descriptor 616-2, and gm corresponds to global descriptor 616-m.
  • The bidirectional self-attention model 654 includes an input layer 640-1 that receives the position encoded clip descriptors 614, and an output layer 640-q that generates the global descriptors 616.
  • The bidirectional self-attention model 654 may include at least two layers, i.e., at least the input layer 640-1 and the output layer 640-q.
  • In some embodiments, the bidirectional self-attention model 654 does not include any intermediate layer, and the output layer 640-q is configured to fuse the encoded clip descriptors 614 bidirectionally.
  • In some embodiments, the bidirectional self-attention model 654 includes one or more intermediate layers (e.g., layers 640-2 through 640-(q-1)) configured to at least partially fuse the encoded clip descriptors 614 in a bidirectional manner.
  • The bidirectional self-attention model 654 includes q-2 number of intermediate layers (e.g., has 0, 1, or more intermediate layers).
  • Each of the output layer 640-q and the one or more intermediate layers (if any) includes a fully connected neural network layer.
  • The bidirectional self-attention model 654 includes a plurality of neural networks.
  • For example, a first layer 640-1 of a first neural network of the bidirectional self-attention model 654 receives a first subset (e.g., in some cases, a subset less than all) of the position encoded clip descriptors 614 as inputs and generates a global descriptor 616-1 as an output, and a second neural network of the bidirectional self-attention model 654 receives a second subset (e.g., in some cases, a subset less than all) of the position encoded clip descriptors 614 as inputs and generates a global descriptor 616-2 as an output.
  • The first subset of the position encoded clip descriptors 614 is different from the second subset of the position encoded clip descriptors 614.
  • The first subset of the position encoded clip descriptors 614 differs from the second subset of the position encoded clip descriptors 614 by at least one position encoded clip descriptor 614.
  • For example, the first subset of the position encoded clip descriptors 614 may include the position encoded clip descriptors 614-1 and 614-2, and the second subset of the position encoded clip descriptors 614 may include the position encoded clip descriptors 614-1, 614-2, and 614-3.
  • The bidirectional self-attention model 654 utilizes the positional information encoded in the position encoded clip descriptors 614 by fusing the position encoded clip descriptors 614 (e.g., order-encoded vectors o, defined in equation (3)) in both directions.
  • The bidirectional self-attention model 654 is formulated as g_i = PF((1 / N(o)) * sum over all j of s(o_i, o_j) f(o_j)), (4) where i indicates the index of the target output temporal position, j denotes temporal positions of all related clip descriptors, N(o) is a normalization term, and f(o_j) is a linear weight projection inside the self-attention mechanism.
  • The function s(o_i, o_j) denotes a similarity between two corresponding position encoded clip descriptors o_i and o_j and is defined as s(o_i, o_j) = exp(theta(o_i)^T phi(o_j)), where theta(.) and phi(.) are linear projections.
  • The functions f(.), theta(.), and phi(.) are learnable functions that are trained to project a position encoded clip descriptor 614 to a space where the attention mechanism works efficiently.
  • The outputs of the functions f(.), theta(.), and phi(.) are defined as the value, query, and key, respectively.
  • The function PF in equation (4) includes a position-wise feed-forward network that is applied to determine each global descriptor 616.
  • The position-wise feed-forward network (PF) is defined as PF(x) = W2 * sigma_GELU(W1 * x + b1) + b2, where sigma_GELU is a Gaussian Error Linear Unit (GELU) activation function, W1 and W2 are transformation matrices, and b1 and b2 are biases.
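  • As an illustrative, non-limiting sketch of the bidirectional fusion described above, the following implements single-head dot-product self-attention with learnable query/key/value projections and a position-wise feed-forward network; the softmax normalization, hidden size, and class name are assumptions.

```python
import torch
from torch import nn
import torch.nn.functional as F

class BidirectionalSelfAttention(nn.Module):
    """Fuses order-encoded clip descriptors in both temporal directions."""

    def __init__(self, dim, hidden=2048):
        super().__init__()
        self.query = nn.Linear(dim, dim)   # theta(.) -> query
        self.key = nn.Linear(dim, dim)     # phi(.)   -> key
        self.value = nn.Linear(dim, dim)   # f(.)     -> value
        # Position-wise feed-forward network PF with a GELU activation.
        self.pf = nn.Sequential(nn.Linear(dim, hidden), nn.GELU(), nn.Linear(hidden, dim))

    def forward(self, o):
        # o: (m, dim) order-encoded clip descriptors. No causal mask is applied,
        # so every position attends to both earlier and later positions.
        sim = self.query(o) @ self.key(o).t()      # similarity s(o_i, o_j)
        attn = F.softmax(sim, dim=-1)              # normalization over all positions j
        fused = attn @ self.value(o)               # sum_j s(o_i, o_j) f(o_j) / N(o)
        return self.pf(fused)                      # global descriptors g_1 ... g_m

attention = BidirectionalSelfAttention(dim=512)
global_descriptors = attention(torch.randn(4, 512))   # shape (4, 512)
```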
  • The adaptive pooling model 656 receives the global descriptors 616 (e.g., global descriptors 616-1 through 616-m) from the output layer 640-q of the bi-directional self-attention model 654 and determines (e.g., generates) the video embedding 618 of the video content 500 based on the global descriptors 616.
  • The adaptive pooling model 656 adaptively pools the global descriptors 616 based on their significance to generate the video embedding 618 (e.g., a final video-level embedding).
  • The video embedding 618 includes a continuous embedding vector and is generated by the adaptive pooling model 656 as a gated sum of the global descriptors 616, F = sum over the global descriptors of delta(G) * G, where F is the video embedding 618, G is a corresponding global descriptor, and delta(G) is a gating module represented as a function of the global descriptor G.
  • The gating module is established based on a sigmoid function, delta(G) = sigma(W4 * sigma_GELU(W3 * G)), where sigma is the sigmoid function, W3 and W4 are transformation matrices, G is the corresponding global descriptor, and sigma_GELU is a GELU activation function.
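  • As an illustrative, non-limiting sketch of the gated adaptive pooling described above, each global descriptor is weighted by a learned significance gate and the weighted descriptors are summed into a single video-level embedding; the layer sizes and class name are assumptions.

```python
import torch
from torch import nn

class AdaptivePooling(nn.Module):
    """Pools m global descriptors into one continuous video embedding."""

    def __init__(self, dim):
        super().__init__()
        # Gating module delta(G): GELU-activated projection followed by a sigmoid.
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                  nn.Linear(dim, dim), nn.Sigmoid())

    def forward(self, g):
        # g: (m, dim) global descriptors; the gate scores their significance.
        return (self.gate(g) * g).sum(dim=0)   # video embedding F, shape (dim,)

pool = AdaptivePooling(dim=512)
video_embedding = pool(torch.randn(4, 512))    # continuous, real-valued embedding
```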
  • The video embedding 618 is a floating point vector in which each element of the video embedding is a respective floating-point number within the floating point vector.
  • The encoder 658 receives the video embedding 618 as an input and transforms the video embedding 618 (e.g., a continuous embedding vector) into a binary representation 620 (e.g., a binary latent vector, a binary video hash, or a hash vector).
  • The binary representation 620 is a hash vector in which each element of the binary representation 620 is a respective integer number in a predefined binary range.
  • The encoder 658 extracts important features from the video embedding 618 using a matrix operation followed by a binarization step, and the binary representation 620 is defined as B = floor(sigma(W_binary * F + b_binary) + a), (9) where B is the generated binary representation 620, F is the video embedding 618, W_binary is a weight matrix for the video embedding 618, b_binary is a bias term, sigma is a sigmoid function, and a is a bias value that must be within the predefined binary range of the binary representation 620. For example, when the predefined binary range is 0 and 1 (e.g., an element in the binary representation 620 has a value of 0 or 1), the value of a is a value between 0 and 1 (e.g., 0.5).
  • The sigmoid function is a binarization function.
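  • As an illustrative, non-limiting sketch of the matrix operation followed by sigmoid-based binarization described above, producing a 0/1 hash vector; the hash length, threshold offset, and class name are assumptions, and during training a differentiable relaxation (e.g., the raw sigmoid output) would typically stand in for the hard floor.

```python
import torch
from torch import nn

class BinaryEncoder(nn.Module):
    """Transforms the continuous video embedding into a binary representation."""

    def __init__(self, dim, hash_bits=64):
        super().__init__()
        self.proj = nn.Linear(dim, hash_bits)   # holds the weight matrix and bias term

    def forward(self, f, a=0.5):
        # The sigmoid squashes each element into (0, 1); adding the offset a and
        # flooring rounds it to 0 or 1 (the predefined binary range).
        return torch.floor(torch.sigmoid(self.proj(f)) + a)

encoder = BinaryEncoder(dim=512)
binary_representation = encoder(torch.randn(512))   # 64-element vector of 0s and 1s
```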
  • The binary representation 620 retains the semantic information of the video embedding 618 while having a reduced size compared to the video embedding 618, e.g., the binary representation 620 has a smaller storage requirement compared to a storage requirement of the video embedding 618.
  • The video embedding 618 is a floating point vector in which each element of the video embedding 618 is a floating-point number having a first number of bits.
  • The binary representation 620 is a hash vector in which each element of the binary representation 620 is an integer number in a predefined binary range, and the integer number in the binary representation 620 has a second number of bits that is fewer than the first number of bits.
  • The predefined binary range corresponds to N bits and is [0, 2^N - 1], where N is any integer (e.g., 2, 3, 4, etc.), and each element of the binary representation is equal to 0, 1, ..., or 2^N - 1.
  • N is determined based on a resolution requirement, e.g., a predefined resolution requirement or a resolution requirement of the deep learning model 600.
  • The binary representation 620 can be used in place of the video embedding 618 (while retaining the semantic information from the video embedding 618), such as on devices where space and computational resources are limited.
  • The binary representation 620 of the video embedding 618 is applied to label or classify video content 500 associated with a specific application. It is estimated that a number of different representations are required to fully cover all the video content 500 to be classified in the specific application. The number of different representations corresponds to a full range of the binary representation 620, and determines the predefined binary range [0, 2^N - 1] for each element of the binary representation 620. As such, the binary representation 620 is highly compact and much smaller than the video embedding 618 in size.
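  • As an illustrative, non-limiting worked example of choosing N from an estimated number of distinct values needed, as described above; the counts below are invented purely for illustration.

```python
import math

def bits_per_element(num_required_representations):
    """Smallest N such that the range [0, 2**N - 1] covers the required count."""
    return max(1, math.ceil(math.log2(num_required_representations)))

# E.g., if roughly 1,000 distinct values per element are estimated to be enough,
# each element of the binary representation needs N = 10 bits (range [0, 1023]).
print(bits_per_element(1000))   # -> 10
print(bits_per_element(2))      # -> 1, i.e. the 0/1 case
```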
  • The deep learning model 600 includes a decoder 660 and a video action classification layer 662 for the purposes of model training.
  • The decoder 660 includes one or more neural networks (e.g., a fully connected neural network) and is configured to decode the binary representation 620 to a reconstructed video embedding 622.
  • The video action classification layer 662 optionally includes a fully connected neural network layer, and is configured to classify the video content based on the reconstructed video embedding 622 to a video classification 624.
  • The video action classification layer 662 utilizes the reconstructed video embedding 622 to group video instances having a similar concept (e.g., in the same video category) to guide a learning process of the deep learning model 600.
  • Each training video content 630 for training the neural networks includes a plurality of successive video frames 632 that are grouped into video segments 634 (e.g., video segments 634-1 through 634-m) provided to the clip feature extraction model 650 of the deep learning model 600 as inputs.
  • The video frames of the video content 630 are grouped into m number of video segments 634 and provided to the clip feature extraction model 650 as training data.
  • The neural networks 699 shown in Figure 6C represent the clip feature extraction model 650, the bi-directional self-attention model 654, and the adaptive pooling model 656.
  • The decoder 660 receives the binary representation 620 outputted from the encoder 658 and generates a reconstructed video embedding 622 that can be compared to the original video embedding 618 output from the adaptive pooling model 656 to calculate a reconstruction loss 690 that is determined by a reconstruction loss function.
  • The reconstruction loss 690 is used to train (e.g., optimize, improve) the encoder 658 (and the decoder 660).
  • The reconstruction loss function is defined as L_rec = (1/D) * ||F - F'||^2, (10) where L_rec is the reconstruction loss 690, F is the video embedding 618 output from the adaptive pooling model 656, F' is the reconstructed video embedding 622, and D is a vector dimension of the video embedding 618 applied to normalize a difference between F and F'.
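  • As an illustrative, non-limiting sketch of the dimension-normalized reconstruction loss described above; the embedding size and function name are example choices.

```python
import torch

def reconstruction_loss(f, f_rec):
    """L_rec = ||f - f_rec||^2 / D, normalized by the embedding dimension D."""
    return torch.sum((f - f_rec) ** 2) / f.numel()

f = torch.randn(512)       # video embedding 618 from the adaptive pooling model
f_rec = torch.randn(512)   # reconstructed video embedding 622 from the decoder
loss = reconstruction_loss(f, f_rec)
```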
  • The encoder 658 and decoder 660 are part of an encoder-decoder model in which the encoder 658 and decoder 660 can be considered to be two different layers, e.g., two fully connected neural network layers.
  • The encoder 658 and decoder 660 are trained jointly based at least in part on the calculated reconstruction loss 690.
  • The encoder 658 and the decoder 660 of the encoder-decoder model are trained by minimizing the reconstruction loss function shown in equation (10). That is, the values of the weight W_binary and the bias b_binary in equation (9) are varied to minimize the reconstruction loss 690.
  • The reconstruction loss 690 is used as feedback to train both the encoder 658 and the decoder 660 to improve transformation, by the encoder 658, of the video embedding 618 into the binary representation 620 and to ensure that the binary representation 620 is filled with rich semantics.
  • Training the encoder 658 of the encoder-decoder model includes adjusting the values of the weight W_binary and the bias b_binary in equation (9) to minimize the reconstruction loss 690.
  • A semantic loss 692 is calculated using a semantic loss function that compares the binary representation 620 to the video embedding 618.
  • The video embedding 618 and the binary representation 620 are compared by comparing a cosine similarity between the video embeddings 618 in the continuous embedding space to a Hamming Distance between the binary representations 620 corresponding to the video embeddings 618 in the binary space.
  • For example, two video embeddings 618 are represented by F1 and F2, and the corresponding binary representations 620 are represented by B1 and B2, respectively. It is preferable for the video embeddings 618 to have a relationship in the continuous embedding space that is inversely related to a relationship between the binary representations 620 in the binary space.
  • a Hamming Distance between b_i and b_j in the binary space is smaller when a comparison of the distances in the continuous embedding space indicates that the corresponding video embeddings e_i and e_j are more similar.
  • an indicator term I_ij is defined such that I_ij is equal to 1 when d_cos(e_i, e_j), which depicts the cosine similarity in the continuous embedding space, indicates that the video embeddings e_i and e_j are similar (e.g., exceeds a similarity threshold), and the indicator term I_ij is equal to -1 in all other cases.
  • the semantic loss function is defined based on the indicator term I_ij and the Hamming Distance d_hash(b_i, b_j), such that L_sem penalizes pairs of binary representations whose Hamming Distance in the binary space disagrees with the indicator term, where L_sem is the semantic loss 692, and d_hash is a function that depicts the Hamming Distance in the binary space.
  • the calculated semantic loss 692 is used as a feedback to train the encoder 658 and the decoder 660 to generate a binary representation 620 that retains the semantic similarity from the original video embeddings 618.
  • at least one of the relationships between the video embeddings 618 in the continuous embedding space is inversely related to the corresponding relationships between the binary representations 620 in the binary space.
  • the semantic loss 692 is zero.
  • none of the relationships between the video embeddings 618 in the continuous embedding space is inversely related to the corresponding relationships between the binary representations 620 in the binary space.
  • the semantic loss 692 is a positive number.
  • the encoder 658 and decoder 660 are retrained, and the values of the weight W_binary and the bias b_binary in equation (9) are changed.
  • training the encoder 658 and the decoder 660 includes adjusting values of the elements of the weight W_binary or the bias b_binary in equation (9) to minimize the semantic loss 692.
  • a video classification loss 694 can be calculated using a video classification loss function that compares a video classification 624 to a ground truth video classification 602.
  • the ground truth video classification 602 optionally includes a ground truth label associated with the video content 500.
  • a video action classification layer 662 (e.g., a fully connected neural network layer) receives the reconstructed video embedding 622 provided by the decoder 660, and determines the video classification 624 based on the received reconstructed video embedding 622.
  • the video classification 624 is defined as: y' = F_class(E'), where y' is the predicted video classification 624, E' is the reconstructed video embedding 622, and F_class is a linear layer that receives the reconstructed video embedding 622.
  • the video classification loss 694 is calculated using a cross-entropy function, CE, that compares the predicted video classification 624 (y') to the ground truth video classification 602 (y).
  • the video classification loss function is defined as: L_CE = CE(y', y), where L_CE is the video classification loss 694.
  • a global loss 696 is calculated using a global loss function (also called global objective function) that is based on a weighted combination of the reconstruction loss 690, the semantic loss 692, and the video classification loss 694.
  • the global loss function is defined as: L = w_1 L_rec + w_2 L_sem + w_3 L_CE (15), where L is the global loss 696, L_rec is the reconstruction loss 690, L_sem is the semantic loss 692, L_CE is the video classification loss 694, and w_1, w_2, and w_3 are weighting parameters for the reconstruction loss 690, the semantic loss 692, and the video classification loss 694, respectively (an illustrative sketch of these loss terms follows this list).
  • the global loss 696 balances the deep learning model's ability to retain the information stored in the continuous real-valued video embedding 618 against its ability to preserve semantic similarity in the binary space and to classify the video content correctly.
  • the deep learning model 600 can be trained by minimizing the global loss 696 as shown in equation (15). For example, the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 of the deep learning model 600 are modified until the global loss 696 satisfies a predefined training criterion.
  • the deep learning model 600 is trained end-to end such that the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 are trained jointly using any of the reconstruction loss 690, semantic loss 692, video classification loss 694, and global loss 696.
  • each of the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 is trained individually.
  • a subset and less than all of the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 is trained jointly.
  • the deep learning model 600 is trained in a server 102, and provided to and implemented on a client device 104 (e.g., a mobile phone).
  • training the bi-directional self-attention model 654 includes adjusting the values of any of the parameters in equation (6) to minimize any of the video classification loss 694 and the global loss 696.
  • training the adaptive pooling model 656 includes adjusting the values of any of the parameters in equation (8) to minimize any of the video classification loss 694 and the global loss 696.
  • training the encoder 658 includes adjusting the value of any of the weight W_binary and the bias b_binary in equation (9) to minimize any of the reconstruction loss 690, the semantic loss 692, the video classification loss 694, and the global loss 696.
  • FIG. 7 is a flowchart of a method 700 for generating video embeddings, in accordance with some embodiments.
  • the method 700 is implemented by a computer system (e.g., a client device 104, a server 102, or a combination thereof).
  • An example of the client device 104 is a mobile phone.
  • the method 700 is applied to generate a binary representation 620 for the video content 500 based on video segments 502 of the video content 500.
  • the video content item 500 may be, for example, captured by a first electronic device (e.g., a surveillance camera or a personal device), and streamed to a server 102 (e.g., for storage at storage 106 or a database associated with the server 102) to be labeled and classified.
  • the server associates the video content item 500 with a binary representation and provides the video content item 500 with the binary representation 620 to one or more second electronic devices that are distinct from or include the first electronic device.
  • a deep learning model 600 used to implement the method 700 is trained at the server 102 and provided to a client device 104 that applies the deep learning model locally to determine the binary representation 620 for one or more video content items 500 obtained or captured by the client device 104.
  • Method 700 is, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of the computer system.
  • Each of the operations shown in Figure 7 may correspond to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., memory 206 of the system 200 in Figure 2).
  • the computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices.
  • the instructions stored on the computer readable storage medium may include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted by one or more processors.
  • the computer system obtains (710) video content item 500 that includes a plurality of successive video segments 502, and generates (720) a plurality of clip descriptors 612 for the plurality of successive video segments 502 of the video content 500 using a clip feature extraction model 650.
  • Each of the plurality of successive video segments 502 corresponds to a respective one of the clip descriptors 612.
  • the computer system fuses (730) the plurality of clip descriptors 612 using a bidirectional attention network (e.g., bidirectional self-attention model 654) to generate a plurality of global descriptors 616.
  • the computer system pools (740) the plurality of global descriptors 616 to form a video embedding 618 (e.g., a floating vector, a continuous floating vector) for the video content 500 using an adaptive pooling model 656.
  • the computer system converts (750) the video embedding 618 to a binary representation 620 of the video content 500 using an encoder 658.
  • the binary representation 620 includes a plurality of elements, and each of the plurality of elements is an integer number in a predefined binary range.
  • the binary representation 620 of the video content 500 retains semantic information from the video embedding 618.
  • the video embedding 618 is a floating point vector in which each element of the video embedding 618 is a floating-point number having a first number of bits, and the integer number of each of the plurality of elements of the binary representation 620 has a second number of bits that is fewer than the first number of bits (e.g., the binary representation is a vector that has a smaller storage requirement relative to a storage requirement of the video embedding 618).
  • for example, an element of the floating point vector has 32 bits, while a corresponding element of the binary representation 620 has 2 bits, e.g., is equal to 0, 1, 2, or 3.
  • the encoder 658 includes a fully connected layer.
  • the computer system further encodes the video embedding 618 to a plurality of video features using the fully connected layer, and the computer system converts each of the plurality of video features to a respective element of the binary representation 620.
  • the predefined binary range corresponds to N bits and is [0, 2^N - 1], with N being an integer, and each element of the binary representation 620 is equal to 0, 1, ..., or 2^N - 1.
  • N can be any integer number greater than 1.
  • N is determined based on a resolution requirement of the binary representation 620 of the video content 500.
  • the bi-directional attention network 654 combines each intermediate clip descriptor bidirectionally with a first subset of the plurality of clip descriptors that precede the intermediate clip descriptor and a second subset of the plurality of clip descriptors that follow the intermediate clip descriptor to generate an intermediate global descriptor that is one of the plurality of global descriptors.
  • the intermediate global descriptor is not a first global descriptor or a last global descriptor of the entire video content 500.
  • the bi-directional attention network 654 bidirectionally combines a second clip descriptor 612-2 with the clip descriptor 612-1 (e.g., the first subset of clip descriptors) and 612-3 (e.g., the second subset of clip descriptors).
  • the second subset of clip descriptors also includes other clip descriptors (such as any of clip descriptors 612-4, 612-5, ..., 612-m).
  • the clip feature extraction model includes a plurality of three-dimensional (3D) convolutional neural networks (3D-CNNs), and each of the plurality of 3D convolutional neural networks is configured to generate one of the plurality of clip descriptors 612 for a corresponding one of the plurality of video segments 502 of the video content 500.
  • the computer system encodes each of the plurality of clip descriptors 614 with positional information, shown in equations (1) and (2).
  • the positional information indicates a temporal position of one of the plurality of video segments 502 in the video content 500 corresponding to the one of the plurality of clip descriptors 614.
  • the computer system generates a positional encoding term (PE) for each clip descriptor (e.g., each feature element). The positional encoding term is described above with respect to equations (1) and (2).
  • the computer system reconstructs a reconstructed video embedding 622 from the binary representation 620 using a decoder 660.
  • the computer system also trains at least a subset of the clip feature extraction model 650, the bi-directional attention network 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 based on a reconstruction loss function that compares the reconstructed video embedding 622 to the video embedding 618.
  • at least a subset of the deep learning model 600 are trained to minimize the reconstruction loss function.
  • the computer system reconstructs a reconstructed video embedding 622 from the binary representation 620 of the video content 500 using a decoder 660.
  • the computer system trains at least a subset of the clip feature extraction model, the bi-directional attention network 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 using a semantic loss function that compares the reconstructed video embedding 622 to the video embedding.
  • the semantic loss has: a first value (e.g., zero) indicating that the reconstructed video embedding 622 and the binary representation 620 of the video content 500 are similar, or a second value (e.g., a nonzero value) indicating that the reconstructed video embedding 622 and the binary representation 620 of the video content 500 are not similar. Similarity between the reconstructed video embedding 622 and the binary representation 620 is determined based on a comparison of a cosine similarity between the video embeddings 618 in the continuous embedding space to a Hamming Distance between the binary representations 620 corresponding to the video embeddings 618 in the binary space as described above with respect to equation (11).
  • the semantic loss 692 has the second value and the computer system introduces a correction term to at least one of the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658.
  • the computer system determines a video classification 624 of the video content 500 from the reconstructed video embedding 622 using a video classification layer 662 (e.g., a fully connected neural network layer).
  • the computer system trains at least a subset of the clip feature extraction model 650, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 using a classification loss function that compares the video classification 624 of the video content 500 with a ground truth video classification 602 of the video content 500.
  • the clip feature extraction model 650, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 are trained in an end-to-end manner.
  • a global loss function is a weighted combination of a reconstruction loss 690 determined by a reconstruction loss function, a semantic loss 692 determined by a semantic loss function, and a video classification loss 694 determined by a classification loss function.
  • At least a subset of the clip feature extraction model 650, the bi- directional attention network 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 are trained using the global loss function (e.g., based on the global loss 696).
  • the clip feature extraction model 650, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 are trained on a server 102 and provided to an electronic device 104 that is distinct from the server 102.
  • the clip feature extraction model, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 are trained on a server 102.
  • the server 102 receives the video content 500 from an electronic device 104 that is distinct from the server 102, and the server 102 provides the binary representation 620 of the video content 500 to the electronic device 104.
  • the computer system trains the clip feature extraction model, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 on an electronic device 104, and the binary representation 620 of the video content 500 is determined by the electronic device 104.
  • the plurality of successive video segments 502 includes a first video segment 502-1 and a second video segment 502-2, and a portion, less than all, of the second video segment 502-2 overlaps with the first video segment 502-1.
  • the first video segment 502-1 and the second video segment 502-2 share one or more video frames 512 (e.g., include at least one video frame 512 in common with each other).
  • each video segment of the plurality of successive video segments 502 is distinct from and non-overlapping with any other segments of the plurality of successive video segments.
  • the first video segment 502-1 and a second video segment 502-2 do not share a common video frame 512.
  • the video content 500 includes a plurality of frames 512, and each video segment of the plurality of successive video segments 502 includes at least a subset of the plurality of frames 512.
  • the video content 500 includes a first number of video frames 512, and every second number of video frames is grouped into a respective video segment 502. For example, a video content 500 includes a total of 300 video frames 512 for a 10-second video, and each video segment 502 includes 16 video frames 512, such that the video content 500 is grouped into approximately 19 video segments 502.
  • Each global descriptor (e.g., global descriptor 616) corresponds to approximately 19-20 video segments, or roughly 10 seconds.
  • the term “if” is, optionally, construed to mean “when” or “upon” or “in response to determining” or “in response to detecting” or “in accordance with a determination that,” depending on the context.
  • the phrase “if it is determined” or “if [a stated condition or event] is detected” is, optionally, construed to mean “upon determining” or “in response to determining” or “upon detecting [the stated condition or event]” or “in response to detecting [the stated condition or event]” or “in accordance with a determination that [a stated condition or event] is detected,” depending on the context.
  • stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages can be implemented in hardware, firmware, software or any combination thereof.
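
The loss terms enumerated above (reconstruction, semantic, classification, and their weighted global combination) can be illustrated with a short Python sketch. This is a minimal, non-authoritative reading of the description: the exact loss formulas, the similarity threshold tau, the hinge margin, and the weighting values w_rec, w_sem, and w_cls are assumptions for illustration rather than the claimed equations (9)-(15).

    import torch
    import torch.nn.functional as F

    def reconstruction_loss(video_emb, recon_emb):
        # Squared difference between the video embedding E and its reconstruction E',
        # normalized by the embedding dimension (cf. the reconstruction loss 690).
        return ((video_emb - recon_emb) ** 2).sum(dim=-1).mean() / video_emb.shape[-1]

    def semantic_loss(video_emb, binary_rep, tau=0.5):
        # Indicator term: +1 for pairs whose cosine similarity exceeds a threshold
        # (tau is an assumption), -1 in all other cases.
        cos = F.cosine_similarity(video_emb.unsqueeze(1), video_emb.unsqueeze(0), dim=-1)
        indicator = torch.where(cos > tau, torch.ones_like(cos), -torch.ones_like(cos))
        # Generalized Hamming distance between the integer-valued binary codes.
        hamming = (binary_rep.unsqueeze(1) != binary_rep.unsqueeze(0)).float().sum(dim=-1)
        # Hinge-style penalty: similar pairs should have a small Hamming distance,
        # dissimilar pairs a large one (the margin is an assumed hyperparameter).
        margin = binary_rep.shape[-1] / 2.0
        return torch.clamp(indicator * (hamming - margin), min=0.0).mean()

    def classification_loss(logits, labels):
        # Cross-entropy CE between the predicted classification y' and the ground truth y.
        return F.cross_entropy(logits, labels)

    def global_loss(l_rec, l_sem, l_ce, w_rec=1.0, w_sem=1.0, w_cls=1.0):
        # Weighted combination of the three losses, mirroring the global objective.
        return w_rec * l_rec + w_sem * l_sem + w_cls * l_ce

In practice the three weights would be tuned so that no single term dominates the global objective.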

Abstract

This application is directed to generation of a video embedding for video content that includes a plurality of successive video segments. An electronic device generates a plurality of clip descriptors for the plurality of successive video segments of the video content using a clip feature extraction model. Each of the plurality of successive video segments corresponds to a respective clip descriptor. The electronic device fuses the clip descriptors using a bi-directional attention network to generate a plurality of global descriptors. Each of the plurality of clip descriptors corresponds to a respective one of the plurality of global descriptors. The electronic device pools the global descriptors to the video embedding for the video content using an adaptive pooling model and converts the video embedding to a binary representation of the video content using an encoder. The binary representation includes a plurality of elements, and each of the plurality of elements is an integer number in a predefined binary range.

Description

Learning Semantic Binary Embedding for Video Representations
TECHNICAL FIELD
[0001] This application relates generally to data processing technology including, but not limited to, methods, systems, and non-transitory computer-readable media for generating video embeddings that represent video content.
BACKGROUND
[0002] Personal photo albums and online content sharing platforms contain a large number of multimedia content items that are often associated with content labels describing the content items. The content items can be classified, retrieved, searched, sorted, or recommended efficiently using these content labels, which thereby facilitate understanding, organization, and search of video content in many applications and make the video content more accessible to users. Thus, it is important that video embeddings that incorporate rich visual knowledge can be established for video content, thereby facilitating a variety of downstream tasks such as video classification and search.
SUMMARY
[0003] This application is directed to generating video embeddings for video content. Video embeddings are generated to have substantially small embedding sizes while keeping rich visual, semantic, and relational knowledge of the video content. Specifically, the video embeddings take into account bi-directional relationships among video segments of the video content and are established based on binary representations of the video content. Such video embeddings are the result of a more accurate and efficient method for generating video embeddings than current practice, which relies on learning general-purpose video representations from large sets of training videos, often neglects structure information of videos (e.g., inter-clip relationships of videos), and requires a large amount of computation and storage resources for real-number-based operations. As such, the video embeddings having the substantially small embedding sizes can be easily deployed in real-world products and mobile devices that have limited resources.
[0004] In one aspect, a method is implemented at an electronic device for generating video embeddings. The method includes obtaining video content including a plurality of successive video segments, and generating (e.g., by one or more convolutional neural networks) a plurality of clip descriptors for the plurality of successive video segments of the video content using a clip feature extraction model. Each of the plurality of video segments corresponds to a respective one of the plurality of clip descriptors. The method further includes fusing the plurality of clip descriptors using a bi-directional attention network to generate a plurality of global descriptors. Each of the plurality of clip descriptors corresponds to a respective one of the plurality of global descriptors. The method further includes pooling the plurality of global descriptors to a video embedding (e.g., a floating vector, a continuous embedding vector) for the video content using an adaptive pooling model and converting the video embedding to a binary representation of the video content using an encoder. The binary representation includes a plurality of elements, and each of the plurality of elements is an integer number in a predefined binary range. The binary representation of the video content retains semantic information of the video embedding.
[0005] In another aspect, some implementations include an electronic device that includes one or more processors and memory having instructions stored thereon, which when executed by the one or more processors cause the processors to perform any of the above methods.
[0006] In yet another aspect, some implementations include a non-transitory computer-readable medium having instructions stored thereon, which when executed by one or more processors cause the processors to perform any of the above methods.
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] For a better understanding of the various described implementations, reference should be made to the Detailed Description below, in conjunction with the following drawings in which like reference numerals refer to corresponding parts throughout the figures.
[0008] Figure 1 is an example data processing environment having one or more servers communicatively coupled to one or more client devices, in accordance with some embodiments.
[0009] Figure 2 is a block diagram illustrating a data processing system, in accordance with some embodiments.
[0010] Figure 3 is an example data processing environment for training and applying a neural network based (NN-based) data processing model for processing visual and/or audio data, in accordance with some embodiments.
[0011] Figure 4A is an example neural network (NN) applied to process content data in an NN-based data processing model, in accordance with some embodiments.
[0012] Figure 4B is an example node in the neural network (NN), in accordance with some embodiments.
[0013] Figure 5 is a diagram of video content that includes a plurality of video frames that are grouped into video segments, in accordance with some embodiments.
[0014] Figure 6A is a block diagram of a deep learning model that includes neural networks for generating video embeddings and is modified for training, in accordance with some embodiments.
[0015] Figure 6B is a block diagram of a bi-directional self-attention neural network of the deep learning model shown in Figure 6A, in accordance with some embodiments.
[0016] Figure 6C is a block diagram of a deep learning model configured to output a plurality of losses during a training process, in accordance with some embodiments.
[0017] Figure 7 is a flowchart of a method for generating video embeddings, in accordance with some embodiments.
[0018] Like reference numerals refer to corresponding parts throughout the several views of the drawings.
DETAILED DESCRIPTION
[0019] Reference will now be made in detail to specific embodiments, examples of which are illustrated in the accompanying drawings. In the following detailed description, numerous non-limiting specific details are set forth in order to assist in understanding the subject matter presented herein. But it will be apparent to one of ordinary skill in the art that various alternatives may be used without departing from the scope of claims and the subject matter may be practiced without these specific details. For example, it will be apparent to one of ordinary skill in the art that the subject matter presented herein can be implemented on many types of electronic devices with digital video capabilities.
[0020] Learned video embeddings can be useful in facilitating a variety of downstream tasks, such as video search, classification, and organization. However, existing methods treat videos as an aggregation of independent frames that can be indexed and combined, thus neglecting information regarding the structure of the video, such as global temporal consistency and inter-clip relationships. Additionally, current methods often assume that video embeddings are real-valued and continuous, which makes generating such video embeddings computationally expensive and requires a large storage or memory footprint to store the generated video embeddings. These disadvantages hinder the use of video embeddings in real-world products, such as deployment in mobile user devices such as smart phones and tablets.
[0021] Thus, it is desirable to have an improved method of generating video embeddings. This application is directed to generating video embeddings for video content. The generated video embeddings have substantially small embedding sizes while keeping rich visual, semantic, and relational knowledge of the video content. The method described herein presents a technical solution to maintaining the visual, semantic, and relational knowledge of the video content by taking into account bi-directional relationships among video segments of the video content, and the method provides a technical solution in generating substantially small embedding sizes by generating video embeddings that are established based on binary representations of the video content.
[0022] Figure 1 is an example data processing environment 100 having one or more servers 102 communicatively coupled to one or more client devices 104, in accordance with some embodiments. The one or more client devices 104 may be, for example, desktop computers 104 A, tablet computers 104B, mobile phones 104C, or intelligent, multi-sensing, network-connected home devices (e.g., a camera). Each client device 104 can collect data or user inputs, executes user applications, and present outputs on its user interface. The collected data or user inputs can be processed locally (e.g., for training and/or for prediction) at the client device 104 and/or remotely by the server(s) 102. The one or more servers 102 provides system data (e.g., boot files, operating system images, and user applications) to the client devices 104, and in some embodiments, processes the data and user inputs received from the client device(s) 104 when the user applications are executed on the client devices 104. In some embodiments, the data processing environment 100 further includes a storage 106 for storing data related to the servers 102, client devices 104, and applications executed on the client devices 104. For example, storage 106 may store video content for training a machine learning model (e.g., deep learning network) and/or video content obtained by a user to which a trained machine learning model can be applied to determine one or more actions associated with the video content.
[0023] The one or more servers 102 can enable real-time data communication with the client devices 104 that are remote from each other or from the one or more servers 102. Further, in some embodiments, the one or more servers 102 can implement data processing tasks that cannot be or are preferably not completed locally by the client devices 104. For example, the client devices 104 include a game console that executes an interactive online gaming application. The game console receives a user instruction and sends it to a game server 102 with user data. The game server 102 generates a stream of video data based on the user instruction and user data and providing the stream of video data for display on the game console and other client devices that are engaged in the same game session with the game console. In another example, the client devices 104 include a networked surveillance camera and a mobile phone 104C. The networked surveillance camera collects video data and streams the video data to a surveillance camera server 102 in real time. While the video data is optionally pre-processed on the surveillance camera, the surveillance camera server 102 processes the video data to identify motion or audio events in the video data and share information of these events with the mobile phone 104C, thereby allowing a user of the mobile phone 104 to monitor the events occurring near the networked surveillance camera in the real time and remotely.
[0024] The one or more servers 102, one or more client devices 104, and storage 106 are communicatively coupled to each other via one or more communication networks 108, which are the medium used to provide communications links between these devices and computers connected together within the data processing environment 100. The one or more communication networks 108 may include connections, such as wire, wireless communication links, or fiber optic cables. Examples of the one or more communication networks 108 include local area networks (LAN), wide area networks (WAN) such as the Internet, or a combination thereof. The one or more communication networks 108 are, optionally, implemented using any known network protocol, including various wired or wireless protocols, such as Ethernet, Universal Serial Bus (USB), FIREWIRE, Long Term Evolution (LTE), Global System for Mobile Communications (GSM), Enhanced Data GSM Environment (EDGE), code division multiple access (CDMA), time division multiple access (TDMA), Bluetooth, Wi-Fi, voice over Internet Protocol (VoIP), Wi-MAX, or any other suitable communication protocol. A connection to the one or more communication networks 108 may be established either directly (e.g., using 3G/4G connectivity to a wireless carrier), or through a network interface 110 (e.g., a router, switch, gateway, hub, or an intelligent, dedicated whole-home control node), or through any combination thereof. As such, the one or more communication networks 108 can represent the Internet of a worldwide collection of networks and gateways that use the Transmission Control Protocol/Intemet Protocol (TCP/IP) suite of protocols to communicate with one another. At the heart of the Internet is a backbone of high-speed data communication lines between major nodes or host computers, consisting of thousands of commercial, governmental, educational and other computer systems that route data and messages.
[0025] Deep learning techniques are applied in the data processing environment 100 to process content data (e.g., video data, visual data, audio data) obtained by an application executed at a client device 104 to identify information contained in the content data, match the content data with other data, categorize the content data, or synthesize related content data. In these deep learning techniques, data processing models are created based on one or more neural networks to process the content data. These data processing models are trained with training data before they are applied to process the content data. In some embodiments, both model training and data processing are implemented locally at each individual client device 104 (e.g., the client device 104C). The client device 104C obtains the training data from the one or more servers 102 or storage 106 and applies the training data to train the data processing models. Subsequently to model training, the client device 104C obtains the content data (e.g., captures video data via an internal camera) and processes the content data using the training data processing models locally. Alternatively, in some embodiments, both model training and data processing are implemented remotely at a server 102 (e.g., the server 102A) associated with a client device 104 (e.g. the client device 104A). The server 102A obtains the training data from itself, another server 102 or the storage 106 and applies the training data to train the data processing models. The client device 104A obtains the content data, sends the content data to the server 102A (e.g., in an application) for data processing using the trained data processing models, receives data processing results from the server 102A, and presents the results on a user interface (e.g., associated with the application). The client device 104 A itself implements no or little data processing on the content data prior to sending them to the server 102A. Additionally, in some embodiments, data processing is implemented locally at a client device 104 (e.g., the client device 104B), while model training is implemented remotely at a server 102 (e.g., the server 102B) associated with the client device 104B. The server 102B obtains the training data from itself, another server 102 or the storage 106 and applies the training data to train the data processing models. The trained data processing models are optionally stored in the server 102B or storage 106. The client device 104B imports the trained data processing models from the server 102B or storage 106, processes the content data using the data processing models, and generates data processing results to be presented on a user interface locally.
[0026] Figure 2 is a block diagram illustrating a data processing system 200, in accordance with some embodiments. The data processing system 200 includes a server 102, a client device 104, a storage 106, or a combination thereof. The data processing system 200, typically, includes one or more processing units (CPUs) 202, one or more network interfaces 204, memory 206, and one or more communication buses 208 for interconnecting these components (sometimes called a chipset). The data processing system 200 includes one or more input devices 210 that facilitate user input, such as a keyboard, a mouse, a voicecommand input unit or microphone, a touch screen display, a touch- sensitive input pad, a gesture capturing camera, or other input buttons or controls. Furthermore, in some embodiments, the client device 104 of the data processing system 200 uses a microphone and voice recognition or a camera and gesture recognition to supplement or replace the keyboard. In some embodiments, the client device 104 includes one or more cameras, scanners, or photo sensor units for capturing images, for example, of graphic serial codes printed on the electronic devices. The data processing system 200 also includes one or more output devices 212 that enable presentation of user interfaces and display content, including one or more speakers and/or one or more visual displays. Optionally, the client device 104 includes a location detection device, such as a GPS (global positioning satellite) or other geo-location receiver, for determining the location of the client device 104.
[0027] Memory 206 includes high-speed random access memory, such as DRAM, SRAM, DDR RAM, or other random access solid state memory devices; and, optionally, includes non-volatile memory, such as one or more magnetic disk storage devices, one or more optical disk storage devices, one or more flash memory devices, or one or more other non-volatile solid state storage devices. Memory 206, optionally, includes one or more storage devices remotely located from one or more processing units 202. Memory 206, or alternatively the non-volatile memory within memory 206, includes a non-transitory computer readable storage medium. In some embodiments, memory 206, or the non- transitory computer readable storage medium of memory 206, stores the following programs, modules, and data structures, or a subset or superset thereof:
• Operating system 214 including procedures for handling various basic system services and for performing hardware dependent tasks;
• Network communication module 216 for connecting each server 102 or client device 104 to other devices (e.g., server 102, client device 104, or storage 106) via one or more network interfaces 204 (wired or wireless) and one or more communication networks 108, such as the Internet, other wide area networks, local area networks, metropolitan area networks, and so on; • User interface module 218 for enabling presentation of information (e.g., a graphical user interface for application(s) 224, widgets, websites and web pages thereof, and/or games, audio and/or video content, text, etc.) at each client device 104 via one or more output devices 212 (e.g., displays, speakers, etc.);
• Input processing module 220 for detecting one or more user inputs or interactions from one of the one or more input devices 210 and interpreting the detected input or interaction;
• Web browser module 222 for navigating, requesting (e.g., via HTTP), and displaying websites and web pages thereof, including a web interface for logging into a user account associated with a client device 104 or another electronic device, controlling the client or electronic device if associated with the user account, and editing and reviewing settings and data that are associated with the user account;
• One or more user applications 224 for execution by the data processing system 200 (e.g., games, social network applications, smart home applications, and/or other web or non-web based applications for controlling another electronic device and reviewing data captured by such devices);
• Model training module 226 for receiving training data (.g., training data 238) and establishing a data processing model (e.g., data processing module 228) for processing content data (e.g., video data, visual data, audio data) to be collected or obtained by a client device 104;
• Data processing module 228 for processing content data using data processing models 240, thereby identifying information contained in the content data, matching the content data with other data, categorizing the content data, or synthesizing related content data, where in some embodiments, the data processing module 228 is associated with one of the user applications 224 to process the content data in response to a user instruction received from the user application 224;
• One or more databases 230 for storing at least data including one or more of o Device settings 232 including common device settings (e.g., service tier, device model, storage capacity, processing capabilities, communication capabilities, etc.) of the one or more servers 102 or client devices 104; o User account information 234 for the one or more user applications 224, e.g., user names, security questions, account history data, user preferences, and predefined account settings; o Network parameters 236 for the one or more communication networks 108, e.g., IP address, subnet mask, default gateway, DNS server and host name; o Training data 238 for training one or more data processing models 240; o Data processing model(s) 240 for processing content data (e.g., video data, visual data, audio data) using deep learning techniques; and o Content data and results 242 that are obtained by and outputted to the client device 104 of the data processing system 200, respectively, where the content data is processed by the data processing models 240 locally at the client device 104 or remotely at the server 102 to provide the associated results 242 to be presented on client device 104.
[0028] Optionally, the one or more databases 230 are stored in one of the server 102, client device 104, and storage 106 of the data processing system 200. Optionally, the one or more databases 230 are distributed in more than one of the server 102, client device 104, and storage 106 of the data processing system 200. In some embodiments, more than one copy of the above data is stored at distinct devices, e.g., two copies of the data processing models 240 are stored at the server 102 and storage 106, respectively.
[0029] Each of the above identified elements may be stored in one or more of the previously mentioned memory devices, and corresponds to a set of instructions for performing a function described above. The above identified modules or programs (i.e., sets of instructions) need not be implemented as separate software programs, procedures, modules or data structures, and thus various subsets of these modules may be combined or otherwise re-arranged in various embodiments. In some embodiments, memory 206, optionally, stores a subset of the modules and data structures identified above. Furthermore, memory 206, optionally, stores additional modules and data structures not described above.
[0030] Figure 3 is another example data processing system 300 for training and applying a neural network based (NN-based) data processing model 240 for processing content data (e.g., video data, visual data, audio data), in accordance with some embodiments. The data processing system 300 includes a model training module 226 for establishing the data processing model 240 and a data processing module 228 for processing the content data using the data processing model 240. In some embodiments, both of the model training module 226 and the data processing module 228 are located on a client device 104 of the data processing system 300, while a training data source 304 distinct form the client device 104 provides training data 306 to the client device 104. The training data source 304 is optionally a server 102 or storage 106. Alternatively, in some embodiments, both of the model training module 226 and the data processing module 228 are located on a server 102 of the data processing system 300. The training data source 304 providing the training data 306 is optionally the server 102 itself, another server 102, or the storage 106. Additionally, in some embodiments, the model training module 226 and the data processing module 228 are separately located on a server 102 and client device 104, and the server 102 provides the trained data processing model 240 to the client device 104.
[0031] The model training module 226 includes one or more data pre-processing modules 308, a model training engine 310, and a loss control module 312. The data processing model 240 is trained according to a type of the content data to be processed. The training data 306 is consistent with the type of the content data, and a data pre-processing module 308 applied to process the training data 306 consistent with the type of the content data. For example, a video pre-processing module 308 is configured to process video training data 306 to a predefined image format, e.g., group frames (e.g., video frames, visual frames) of the video content into video segments. In another example, the video pre-processing module 308 may also extract a region of interest (ROI) in each frame or separate a frame into foreground and background components, and crop each frame to a predefined image size. The model training engine 310 receives pre-processed training data provided by the data preprocessing module(s) 308, further processes the pre-processed training data using an existing data processing model 240, and generates an output from each training data item. During this course, the loss control module 312 can monitor a loss function comparing the output associated with the respective training data item and a ground truth of the respective training data item. The model training engine 310 modifies the data processing model 240 to reduce the loss function, until the loss function satisfies a loss criteria (e.g., a comparison result of the loss function is minimized or reduced below a loss threshold). The modified data processing model 240 is provided to the data processing module 228 to process the content data.
[0032] In some embodiments, the model training module 226 offers supervised learning in which the training data is entirely labelled and includes a desired output for each training data item (also called the ground truth in some situations). Conversely, in some embodiments, the model training module 226 offers unsupervised learning in which the training data are not labelled. The model training module 226 is configured to identify previously undetected patterns in the training data without pre-existing labels and with no or little human supervision. Additionally, in some embodiments, the model training module 226 offers partially supervised learning in which the training data are partially labelled. [0033] The data processing module 228 includes a data pre-processing modules 314, a model -based processing module 316, and a data post-processing module 318. The data preprocessing modules 314 pre-processes the content data based on the type of the content data. Functions of the data pre-processing modules 314 are consistent with those of the preprocessing modules 308 and covert the content data to a predefined content format that is acceptable by inputs of the model -based processing module 316. Examples of the content data include one or more of: video data, visual data (e.g., image data), audio data, textual data, and other types of data. For example, each video is pre-processed to group frames in the video into video segments. The model-based processing module 316 applies the trained data processing model 240 provided by the model training module 226 to process the pre- processed content data. The model-based processing module 316 can also monitor an error indicator to determine whether the content data has been properly processed in the data processing model 240. In some embodiments, the processed content data is further processed by the data post-processing module 318 to present the processed content data in a preferred format or to provide other related information that can be derived from the processed content data.
[0034] Figure 4A is an example neural network (NN) 400 applied to process content data in an NN-based data processing model 240, in accordance with some embodiments, and Figure 4B is an example node 420 in the neural network 400, in accordance with some embodiments. The data processing model 240 is established based on the neural network 400. A corresponding model-based processing module 316 applies the data processing model 240 including the neural network 400 to process content data that has been converted to a predefined content format. The neural network 400 includes a collection of nodes 420 that are connected by links 412. Each node 420 receives one or more node inputs and applies a propagation function to generate a node output from the one or more node inputs. As the node output is provided via one or more links 412 to one or more other nodes 420, a weight w associated with each link 412 is applied to the node output. Likewise, the one or more node inputs are combined based on corresponding weights wi, W2, W3, and W4 according to the propagation function. In an example, the propagation function is a product of a non-linear activation function and a linear weighted combination of the one or more node inputs.
[0035] The collection of nodes 420 is organized into one or more layers in the neural network 400. Optionally, the one or more layers includes a single layer acting as both an input layer and an output layer. Optionally, the one or more layers includes an input layer 402 for receiving inputs, an output layer 406 for providing outputs, and zero or more hidden layers 404 (e.g., 404A and 404B) between the input layer 402 and the output layer 406. A deep neural network has more than one hidden layer 404 between the input layer 402 and the output layer 406. In the neural network 400, each layer may be only connected with its immediately preceding and/or immediately following layer. In some embodiments, a layer 402 or 404B is a fully connected neural network layer because each node 420 in the layer 402 or 404B is connected to every node 420 in its immediately following layer. In some embodiments, one of the one or more hidden layers 404 includes two or more nodes that are connected to the same node in its immediately following layer for down sampling or pooling the nodes 420 between these two layers. Particularly, max pooling uses a maximum value of the two or more nodes in the layer 404B for generating the node of the immediately following layer 406 connected to the two or more nodes.
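As a minimal sketch of the node and layer behavior described in the preceding paragraphs (a weighted combination of node inputs plus a bias, a non-linear activation, and max pooling over groups of nodes), the following Python snippet is illustrative only; the layer sizes and the choice of ReLU are assumptions.

    import torch
    import torch.nn as nn

    # A fully connected layer whose nodes each compute a weighted combination of the
    # inputs plus a bias b, followed by a non-linear activation, then max pooling
    # that keeps the maximum of every pair of nodes.
    layer = nn.Sequential(
        nn.Linear(in_features=8, out_features=6),  # weights w and bias b per node
        nn.ReLU(),                                 # non-linear activation
    )
    pool = nn.MaxPool1d(kernel_size=2)

    x = torch.randn(1, 8)                 # one input sample with 8 features
    h = layer(x)                          # layer output, shape (1, 6)
    p = pool(h.unsqueeze(1)).squeeze(1)   # pooled output, shape (1, 3)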
[0036] In some embodiments, a convolutional neural network (CNN) is applied in a data processing model 240 to process content data (particularly, video and image data). The CNN employs convolution operations and belongs to a class of deep neural networks 400, i.e., a feedforward neural network that moves data forward from the input layer 402 through the hidden layers to the output layer 406. The one or more hidden layers of the CNN are convolutional layers convolving with a multiplication or dot product. Each node in a convolutional layer receives inputs from a receptive area associated with a previous layer (e.g., five nodes), and the receptive area is smaller than the entire previous layer and may vary based on a location of the convolution layer in the convolutional neural network. Video or image data is pre-processed to a predefined video/image format corresponding to the inputs of the CNN. The pre-processed video or image data is abstracted by each layer of the CNN to a respective feature map. By these means, video and image data can be processed by the CNN for video and image recognition, classification, analysis, imprinting, or synthesis. [0037] Alternatively and additionally, in some embodiments, a recurrent neural network (RNN) is applied in the data processing model 240 to process content data (particularly, visual data and audio data). Nodes in successive layers of the RNN follow a temporal sequence, such that the RNN exhibits a temporal dynamic behavior. In an example, each node 420 of the RNN has a time-varying real-valued activation. Examples of the RNN include, but are not limited to, a long short-term memory (LSTM) network, a fully recurrent network, an Elman network, a Jordan network, a Hopfield network, a bidirectional associative memory (BAM network), an echo state network, an independently RNN (IndRNN), a recursive neural network, and a neural history compressor. In some embodiments, the RNN can be used for handwriting or speech recognition. It is noted that in some embodiments, two or more types of content data are processed by the data processing module 228, and two or more types of neural networks (e.g., both CNN and RNN) are applied to process the content data jointly.
[0038] The training process is a process for calibrating all of the weights w for each layer of the learning model using a training data set which is provided in the input layer 402. The training process typically includes two steps, forward propagation and backward propagation, which are repeated multiple times until a predefined convergence condition is satisfied. In the forward propagation, the set of weights for different layers are applied to the input data and intermediate results from the previous layers. In the backward propagation, a margin of error of the output (e.g., a loss function) is measured, and the weights are adjusted accordingly to decrease the error. The activation function is optionally linear, rectified linear unit, sigmoid, hyperbolic tangent, or of other types. In some embodiments, a network bias term b is added to the sum of the weighted outputs from the previous layer before the activation function is applied. The network bias b provides a perturbation that helps the neural network 400 avoid overfitting the training data. The result of the training includes the network bias parameter b for each layer.
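The forward and backward propagation loop described above can be sketched as follows; the toy model, optimizer, learning rate, and loss threshold below are placeholders chosen for illustration and are not part of the described system.

    import torch
    import torch.nn as nn

    model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
    criterion = nn.CrossEntropyLoss()
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    loss_threshold = 0.05                    # assumed convergence condition

    inputs = torch.randn(64, 16)             # stand-in training batch
    targets = torch.randint(0, 4, (64,))     # stand-in ground-truth labels

    for epoch in range(100):
        optimizer.zero_grad()
        outputs = model(inputs)              # forward propagation
        loss = criterion(outputs, targets)   # margin of error of the output
        loss.backward()                      # backward propagation
        optimizer.step()                     # adjust weights to decrease the error
        if loss.item() < loss_threshold:     # predefined convergence condition
            break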
[0039] Figure 5 is a diagram of video content 500 that includes a plurality of successive video frames 512 grouped into video segments 502 (also called video clips 502), in accordance with some embodiments. The video frames 512 of the video content 500 are grouped into n number of video segments 502 (e.g., video segments 502-1 to 502-n). For example, the video frames 512 are grouped such that each video segment 502 includes a respective number of successive video frames 512. The respective numbers are optionally constant or distinct from each other among the video segments 502. Optionally, two consecutive video segments 502 overlap and share one or more video frames 512. Optionally, two consecutive video segments 502 are separated by no video frames 512 or by a limited number of frames 512. In the example shown in Figure 5, each video segment 502 has 16 successive video frames 512 (e.g., video frames 512-1 through 512-6). The video content 500 includes n number of video segments 502, i.e., at least 16 times n number of video frames 512. Each video segment 502 corresponds to a predetermined time duration that is equal to 16 times of a time duration of each video frame 512. For example, when the video content 500 has a video frame rate of 60 frames per second (fps), the frame time of each video frame 512 is equal to 16.7 milliseconds, and the predetermined time duration corresponding to each video segment 502 is approximately equal to 267 milliseconds. [0040] When deep learning techniques are used to identify one or more actions associated with the video content 500, the video frames 512 of the video content 500 are grouped into the video segments 502, and the video segments 502 are provided as inputs to a trained deep learning model as input data to determine one or more video embeddings for the video content 500. In this application, “video segment” is also called “video clip”. In some embodiments, clip-based feature elements are generated for each of the video segments 502. The clip-based feature elements are fused to generate global descriptors, and the global descriptors are further pooled to determine a video embedding of the video content 500. The video embedding is compressed into a binary representation that retains semantic information of the video embedding while having a smaller size than the video embedding.
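A short sketch of the frame-grouping step described above; the helper below is a simplifying assumption (non-overlapping 16-frame segments by default), with a stride parameter included only to show how overlapping segments could be produced.

    from typing import List, Sequence

    def group_into_segments(frames: Sequence, frames_per_segment: int = 16,
                            stride: int = 16) -> List[Sequence]:
        # With stride == frames_per_segment the segments do not overlap;
        # a smaller stride makes consecutive segments share video frames.
        return [frames[start:start + frames_per_segment]
                for start in range(0, len(frames) - frames_per_segment + 1, stride)]

    # At 60 fps, each 16-frame segment spans 16 * (1000 / 60), or approximately 267 milliseconds.
    frames = list(range(600))                  # a 10-second video at 60 fps
    segments = group_into_segments(frames)     # 37 full 16-frame segments
    print(len(segments), len(segments[0]))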
[0041] Figure 6A is a block diagram of a deep learning model 600 that includes neural networks for generating video embeddings, in accordance with some embodiments. The deep learning model 600 includes at least a clip feature extraction model 650, a bidirectional self-attention model 654, an adaptive pooling model 656, and an encoder 658. Each of the models 650 and 654 includes one or more respective neural networks. The adaptive pooling model 656 optionally includes a fully connected neural network layer. The encoder 658 is an encoder network that includes one or more neural networks and a fully connected encoding layer that is also part of an encoding-decoding model. Figure 6B is a block diagram of an example bidirectional self-attention model 654 of the deep learning model 600 shown in Figure 6A, in accordance with some embodiments, and Figure 6C is a block diagram of a deep learning model 600 configured to output a plurality of losses during a training process, in accordance with some embodiments.
[0042] The deep learning model 600 receives video content 500 that includes a plurality of video segments 502 (e.g., video segments 502-1 through 502-m). The video segments are grouped from video frames of the video content 500 and provided to the deep learning model 600 (e.g., to the clip feature extraction model 650) as inputs. In this example, the video frames of the video content 500 are grouped into m number of video segments 502. The clip feature extraction model 650 receives the video segments 502 and generates (e.g., outputs, determines, provides) a respective clip descriptor 612 for each video segment 502. The respective clip descriptor 612 includes a feature vector that includes a plurality of feature elements. In response to receiving m number of video segments 502, the clip feature extraction model 650 outputs m number of clip descriptors 612. In some embodiments, the clip feature extraction model 650 includes a plurality of three-dimensional (3D) convolutional neural networks (3D CNNs). For example, a first 3D CNN of the clip feature extraction model 650 receives a video segment 502-1 as an input and generates a clip descriptor 612-1 as an output corresponding to the video segment 502-1, and a second 3D CNN of the clip feature extraction model 650 receives a video segment 502-2 as an input and generates a clip descriptor 612-2 as an output corresponding to the video segment 502-2.
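As one hypothetical illustration of a per-clip 3D CNN, the PyTorch-style sketch below maps a 16-frame RGB clip to a fixed-length clip descriptor; the layer sizes, pooling choices, and descriptor dimension are assumptions for illustration and are not the specific architecture of the clip feature extraction model 650.

```python
import torch
import torch.nn as nn

class ClipFeatureExtractor(nn.Module):
    """Maps a clip of shape (B, 3, T, H, W) to a D-dimensional clip descriptor."""
    def __init__(self, descriptor_dim: int = 512):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv3d(3, 64, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.MaxPool3d(kernel_size=2),
            nn.Conv3d(64, 128, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.AdaptiveAvgPool3d(1),          # global spatio-temporal pooling
        )
        self.proj = nn.Linear(128, descriptor_dim)

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        x = self.backbone(clip).flatten(1)    # (B, 128)
        return self.proj(x)                   # (B, D) clip descriptor

# Example: one 16-frame RGB clip at 112x112 resolution (illustrative resolution).
clip = torch.randn(1, 3, 16, 112, 112)
descriptor = ClipFeatureExtractor()(clip)     # shape (1, 512)
```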
[0043] Each clip descriptor 612 generated by the clip feature extraction model 650 is provided as an input to a positional encoder 652. The positional encoder 652 encodes each clip descriptor 612 with information regarding its temporal position relative to the other clip descriptors 612 to form a position encoded clip descriptor 614 (e.g., position encoded clip descriptors 614-1 through 614-m). For example, a position encoded clip descriptor 614-2 is a combination of information from the clip descriptor 612-2 and positional information indicating a temporal position of the video segment 502-2 to which the encoded clip descriptor 614-2 corresponds. The positional information indicates that the video segment 502-2 is temporally positioned between the video segments 502-1 and 502-3, and the clip descriptor 612-2 is also temporally positioned between the clip descriptors 612-1 and 612-3 to which the video segments 502-1 and 502-3 correspond, respectively.
[0044] In some implementations, the positional encoder 652 encodes each clip descriptor 612 with information regarding a relative or absolute temporal position of the corresponding video segment 502 in the video content 500. For each clip descriptor 612, a positional encoding (PE) term is defined as:
PE(pos, 2i) = sin( pos / 10000^(2i/D) )    (1)

PE(pos, 2i+1) = cos( pos / 10000^(2i/D) )    (2)

where pos is a temporal position of a corresponding video segment 502, 2i denotes a 2i-th element in a pos-th clip descriptor 612, 2i+1 denotes a (2i+1)-th element in a pos-th clip descriptor 612, and D is a dimension of a clip descriptor 612. Each clip descriptor 612 has the same dimension as the respective PE term, i.e., includes D feature elements. For each clip descriptor 612, the respective PE term is superimposed with each feature element v of the respective clip descriptor 612 to form a respective position-encoded clip descriptor 614 (also called order-encoded vector o) as follows:

o = v + PE    (3)
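A minimal Python sketch of this sinusoidal positional encoding and of the superposition in equation (3) follows; the clip count and descriptor dimension in the example are illustrative assumptions.

```python
import torch

def positional_encoding(num_clips: int, dim: int) -> torch.Tensor:
    """Sinusoidal positional-encoding table of shape (num_clips, dim), per equations (1)-(2)."""
    pos = torch.arange(num_clips, dtype=torch.float32).unsqueeze(1)   # (m, 1)
    i = torch.arange(0, dim, 2, dtype=torch.float32)                  # even element indices 2i
    angles = pos / torch.pow(10000.0, i / dim)                        # (m, dim/2)
    pe = torch.zeros(num_clips, dim)
    pe[:, 0::2] = torch.sin(angles)
    pe[:, 1::2] = torch.cos(angles)
    return pe

# Equation (3): superimpose the PE term onto the clip descriptors v to obtain order-encoded vectors o.
clip_descriptors = torch.randn(8, 512)   # m = 8 clips, D = 512 (illustrative)
order_encoded = clip_descriptors + positional_encoding(8, 512)
```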
[0045] The position encoded clip descriptors 614 (e.g., the position encoded clip descriptors 614-1 through 614-m) are provided to a bidirectional self-attention model 654. The bidirectional self-attention model 654 receives the position encoded clip descriptors 614 (e.g., position encoded clip descriptors 614-1 through 614-m) as inputs and bidirectionally fuses the position encoded clip descriptors 614 to form a plurality of global descriptors 616 (e.g., a plurality of global descriptors 616-1 through 616-m). The global descriptors 616 can be considered an enhanced clip representation represented by the vector G = {g_1, g_2, ..., g_m}, where each element of the vector corresponds to a particular global descriptor 616. For example, g_1 corresponds to global descriptor 616-1, g_2 corresponds to global descriptor 616-2, and g_m corresponds to global descriptor 616-m.
[0046] Referring to Figure 6B, in some situations, the bidirectional self-attention model 654 includes an input layer 640-1 that receives the position encoded clip descriptors 614, and an output layer 640-q that generates the global descriptors 616. The bidirectional self-attention model 654 may include at least two layers, i.e., at least the input layer 640-1 and the output layer 640-q. In some embodiments, the bidirectional self-attention model 654 does not include any intermediate layer, and the output layer 640-q is configured to fuse the encoded clip descriptors 614 bidirectionally. In some embodiments, the bidirectional self-attention model 654 includes one or more intermediate layers (e.g., layers 640-2 through 640-(q-1)) configured to at least partially fuse the encoded clip descriptors 614 in a bidirectional manner. The bidirectional self-attention model 654 includes q-2 intermediate layers (e.g., has 0, 1, or more intermediate layers). In some embodiments, each of the output layer 640-q and the one or more intermediate layers (if any) includes a fully connected neural network layer. In some embodiments, the bidirectional self-attention model 654 includes a plurality of neural networks. For example, a first layer 640-1 of a first neural network of the bidirectional self-attention model 654 receives a first subset (e.g., in some cases, a subset, less than all) of the position encoded clip descriptors 614 as inputs and generates a global descriptor 616-1 as an output, and a first layer 640-1 of a second neural network of the bidirectional self-attention model 654 receives a second subset (e.g., in some cases, a subset, less than all) of the position encoded clip descriptors 614 as inputs and generates a global descriptor 616-2 as an output. The first subset of the position encoded clip descriptors 614 is different from the second subset of the position encoded clip descriptors 614. In some embodiments, the first subset of the position encoded clip descriptors 614 differs from the second subset of the position encoded clip descriptors 614 by at least one position encoded clip descriptor 614. For example, the first subset of the position encoded clip descriptors 614 may include the position encoded clip descriptors 614-1 and 614-2, and the second subset of the position encoded clip descriptors 614 may include the position encoded clip descriptors 614-1, 614-2, and 614-3.
[0047] The bidirectional self-attention model 654 utilizes the positional information encoded in the position encoded clip descriptors 614 by fusing the position encoded clip descriptors 614 (e.g., order-encoded vectors (o), defined in equation (3)) in both directions. The bidirectional self-attention model 654 is formulated as:
g_i = PF( (1/N(o)) · Σ_j s(o_i, o_j) · f(o_j) )    (4)

where i indicates the index of the target output temporal position, and j denotes the temporal positions of all related clip descriptors. N(o) is a normalization term, and f(o_j) is a linear weight projection inside the self-attention mechanism. The function s(o_i, o_j) denotes a similarity between two corresponding position encoded clip descriptors o_i and o_j and is defined as:

s(o_i, o_j) = θ(o_i)^T · φ(o_j)    (5)

where θ and φ are linear projections. The functions f, θ, and φ are learnable functions that are trained to project a position encoded clip descriptor 614 to a space where the attention mechanism works efficiently. The outputs of the functions f(o_j), θ(o_i), and φ(o_j) are defined as value, query, and key, respectively. The function PF in equation (4) includes a position-wise feed-forward network that is applied to determine each global descriptor 616. The position-wise feed-forward network (PF) is defined as:

PF(x) = W_2 · σ_GELU(W_1 · x + b_1) + b_2    (6)

where σ_GELU is a Gaussian Error Linear Unit (GELU) activation function, W_1 and W_2 are transformation matrices, and b_1 and b_2 are biases.
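The single-head PyTorch-style sketch below illustrates one way equations (4)-(6) can be realized; using a scaled softmax as the normalization term N(o) and the hidden width of the feed-forward network are assumptions made for this illustration rather than details taken from the disclosure.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class BidirectionalSelfAttention(nn.Module):
    """Fuses order-encoded clip descriptors o into global descriptors G (single attention head)."""
    def __init__(self, dim: int):
        super().__init__()
        self.value = nn.Linear(dim, dim)   # f(o_j)     -> value
        self.query = nn.Linear(dim, dim)   # theta(o_i) -> query
        self.key = nn.Linear(dim, dim)     # phi(o_j)   -> key
        self.ff = nn.Sequential(           # position-wise feed-forward PF, equation (6)
            nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim),
        )

    def forward(self, o: torch.Tensor) -> torch.Tensor:
        # o: (m, dim); every temporal position attends to all positions, i.e., both directions.
        sim = self.query(o) @ self.key(o).transpose(0, 1)     # s(o_i, o_j), equation (5)
        attn = F.softmax(sim / o.shape[-1] ** 0.5, dim=-1)    # normalization N(o) (softmax assumed)
        fused = attn @ self.value(o)                          # weighted sum of values, equation (4)
        return self.ff(fused)                                 # global descriptors G

order_encoded = torch.randn(8, 512)                           # m = 8 clips, D = 512 (illustrative)
global_descriptors = BidirectionalSelfAttention(512)(order_encoded)   # (8, 512)
```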
[0048] Referring to Figure 6A, the adaptive pooling model 656 receives the global descriptors 616 (e.g., global descriptors 616-1 through 616-m) from the output layer 640-q of the bi-directional self-attention model 654 and determines (e.g., generates) the video embedding 618 of the video content 500 based on the global descriptors 616. The adaptive pooling model 656 adaptively pools the global descriptors 616 based on their significance to generate the video embedding 618 (e.g., final video-level embedding). In some embodiments, the video embedding 618 includes a continuous embedding vector and is generated from the adaptive pooling module 656 as follows:
F = Σ_G v(G) · G    (7)

where F is the video embedding 618, G is a corresponding global descriptor 616, and v(G) is a gating module represented as a function of the global descriptor G. In an example, the gating module is established based on a sigmoid function as follows:

v(G) = σ( W_b · σ_GELU(W_a · G) )    (8)

wherein W_a and W_b are transformation matrices, G is the corresponding global descriptor, σ is a sigmoid function, and σ_GELU is a GELU activation function.

[0049] The video embedding 618 is a floating point vector in which each element of the video embedding is a respective floating number within the floating point vector. The encoder 658 receives the video embedding 618 as an input and transforms the video embedding 618 (e.g., a continuous embedding vector) into a binary representation 620 (e.g., a binary latent vector, a binary video hash, or a hash vector). The binary representation 620 is a hash vector in which each element of the binary representation 620 is a respective integer number in a predefined binary range. The encoder 658 extracts important features from the video embedding 618 using a matrix operation followed by a binarization step, and the binary representation 620 is defined as:

B = ⌊ σ( α(W_binary · F + b_binary) ) + a ⌋    (9)

where B is the generated binary representation 620, F is the video embedding 618, W_binary is a weight matrix for the video embedding 618, b_binary is a bias, α is a factor, and a is a bias value that must be within the predefined binary range of the binary representation 620. For example, when the predefined binary range is 0 and 1 (e.g., an element in the binary representation 620 has a value of 0 or 1), the value of a is a value between 0 and 1 (e.g., 0.5). The sigmoid function σ serves as a binarization function. By these means, each floating number element of the video embedding F (e.g., having 32 bits) is converted to a single bit "0" or "1" in the binary representation 620, thereby conserving computational and storage resources for using the video embedding 618.
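Below is a compact PyTorch-style sketch of gated adaptive pooling followed by sigmoid binarization, in the spirit of equations (7)-(9); the gating architecture, the 64-bit hash length, and the 0.5 threshold are assumptions made only for illustration.

```python
import torch
import torch.nn as nn

class AdaptivePoolAndBinarize(nn.Module):
    """Pools global descriptors G into a video embedding F, then binarizes F into a hash B."""
    def __init__(self, dim: int, hash_bits: int = 64):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(dim, dim), nn.GELU(),
                                  nn.Linear(dim, 1), nn.Sigmoid())   # gating module v(G)
        self.encoder = nn.Linear(dim, hash_bits)                     # weight W_binary and bias b_binary
        self.threshold = 0.5                                         # bias value a within the binary range

    def forward(self, g: torch.Tensor):
        weights = self.gate(g)                        # (m, 1): significance of each global descriptor
        video_embedding = (weights * g).sum(dim=0)    # continuous video embedding F, equation (7)
        soft_bits = torch.sigmoid(self.encoder(video_embedding))   # sigmoid binarization function
        binary_rep = (soft_bits + self.threshold).floor()          # each element becomes 0 or 1
        return video_embedding, binary_rep

global_descriptors = torch.randn(8, 512)              # illustrative shapes
video_embedding, binary_rep = AdaptivePoolAndBinarize(512)(global_descriptors)
```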
[0050] More specifically, the binary representation 620 retains the semantic information of the video embedding 618 while having a reduced size compared to the video embedding 618, e.g., the binary representation 620 has a smaller storage requirement compared to a storage requirement of the video embedding 618. The video embedding 618 is a floating point vector in which each element of the video embedding 618 is a floating number having a first number of bits. The binary representation 620 is a hash vector in which each element of the binary representation 620 is an integer number in a predefined binary range, and the integer number in the binary representation 620 has a second number of bits that is fewer than the first number of bits. This is because a respective floating number of the video embedding 618 is stored using more bits than a number of bits used to store a respective integer number in the binary representation 620. In some embodiments, the predefined binary range corresponds to N bits and is [0, 2^N - 1], where N is any integer (e.g., 2, 3, 4, etc.), and each element of the binary representation is equal to 0, 1, ..., or 2^N - 1. In some embodiments, N is determined based on a resolution requirement, e.g., a predefined resolution requirement or a resolution requirement of the deep learning model 600.
[0051] When the deep learning model 600 is used to generate a video embedding 618 for video content 500, the video embedding 618 can be used directly to represent the video content 500. Additionally, the binary representation 620 can be used in place of the video embedding 618 (while retaining the semantic information from the video embedding 618), such as on devices where space and computational resources are limited. In some implementations, the binary representation 620 of the video embedding 618 is applied to label or classify video content 500 associated with a specific application. It is estimated that a number of different representations are required to fully cover all the video content 500 to be classified in the specific application. The number of different representations corresponds to a full range of the binary representation 620, and determines the predefined binary range [0, 2^N - 1] for each element of the binary representation 620. As such, the binary representation 620 is highly compact and much smaller than the video embedding 618 in size.
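As a quick worked example of the storage saving (with an assumed embedding size, since the disclosure does not fix one): a 1024-element float32 embedding occupies 32,768 bits, while a 1024-element single-bit hash occupies 1,024 bits, a 32x reduction. The Python snippet below reproduces this arithmetic.

```python
# Illustrative storage comparison; the embedding dimension is an assumption, not a value from this disclosure.
embedding_dim = 1024            # number of elements in the continuous video embedding
float_bits = 32                 # bits per floating-point element (first number of bits)
n_bits_per_element = 1          # N = 1, predefined binary range [0, 1] (second number of bits)

embedding_size_bits = embedding_dim * float_bits        # 32768 bits
hash_size_bits = embedding_dim * n_bits_per_element     # 1024 bits
print(embedding_size_bits // hash_size_bits)            # 32x smaller
```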
[0052] In some embodiments, the deep learning model 600 includes a decoder 660 and a video action classification layer 662 for the purposes of model training. The decoder 660 includes one or more neural networks (e.g., a fully connected neural network) and is configured to decode the binary representation 620 to a reconstructed video embedding 622. The video action classification layer 662 optionally includes a fully connected neural network layer, and is configured to classify the video content based on the reconstructed video embedding 622 into a video classification 624. Particularly, the video action classification layer 662 utilizes the reconstructed video embedding 622 to group video instances having a similar concept (e.g., in the same video category) to guide a learning process of the deep learning model 600.
[0053] Referring to Figure 6C, during training of the deep learning model 600, each training data content 630 for training the neural networks (e.g., the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, and the encoder 658) includes a plurality of successive video frames 632 that are grouped into video segments 634 (e.g., video segments 634-1 through 634-m) provided to the clip feature extraction model 650 of the deep learning model 600 as inputs. In this example, the video frames of the video content 630 are grouped into m number of video segments 634 and provided to the clip feature extraction model 650 as training data. The neural networks 699 shown in Figure 6C represent the clip feature extraction model 650, the bi-directional self-attention model 654, and the adaptive pooling model 656.
[0054] During training, the decoder 660 receives the binary representation 620 outputted from the encoder 658 and generates a reconstructed video embedding 622 that can be compared to the original video embedding 618 output from the adaptive pooling model 656 to calculate a reconstruction loss 690 that is determined by a reconstruction loss function. The reconstruction loss 690 is used to train (e.g., optimize, improve) the encoder 658 (and the decoder 660). The reconstruction loss function is defined as:
L_rec = (1/D_F) · ||F - F'||^2    (10)

where L_rec is the reconstruction loss 690, F is the video embedding 618 output from the adaptive pooling model 656, F' is the reconstructed video embedding 622, and D_F is a vector dimension of the video embedding 618 applied to normalize the difference between F and F'.
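A minimal sketch of this dimension-normalized reconstruction loss is shown below; the 512-element embedding size is an illustrative assumption.

```python
import torch

def reconstruction_loss(video_embedding: torch.Tensor, reconstructed: torch.Tensor) -> torch.Tensor:
    """Equation (10): squared error between F and F', normalized by the embedding dimension D_F."""
    d = video_embedding.numel()
    return ((video_embedding - reconstructed) ** 2).sum() / d

F_embed = torch.randn(512)       # video embedding F (illustrative size)
F_recon = torch.randn(512)       # reconstructed video embedding F'
loss_rec = reconstruction_loss(F_embed, F_recon)
```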
[0055] In some embodiments, the encoder 658 and decoder 660 are part of an encoder-decoder model in which the encoder 658 and decoder 660 can be considered to be two different layers, e.g., two fully connected neural network layers. The encoder 658 and decoder 660 are trained jointly based at least in part on the calculated reconstruction loss 690. The encoder 658 and the decoder 660 of the encoder-decoder model are trained by minimizing the reconstruction loss function L_rec shown in equation (10). That said, the values of the weight W_binary and the bias b_binary in equation (9) are varied to minimize the reconstruction loss 690.
[0056] The reconstruction loss 690 is used as feedback to train both the encoder 658 and the decoder 660 to improve transformation, by the encoder 658, of the video embedding 618 into the binary representation 620 and to ensure that the binary representation 620 is filled with rich semantics. In some embodiments, training the encoder 658 of the encoder-decoder model includes adjusting the value of any of the weight W_binary and the factor α in equation (9) to minimize the reconstruction loss 690.
[0057] Further, in some embodiments, a semantic loss 692 is calculated using a semantic loss function that compares the binary representation 620 to the video embedding 618. In an example, the video embedding 618 and the binary representation 620 are compared by comparing a cosine similarity between the video embeddings 618 in the continuous embedding space to a Hamming distance between the binary representations 620 corresponding to the video embeddings 618 in the binary space. For a plurality of videos v_1, v_2, and v_3, the video embeddings 618 are represented by F_1, F_2, and F_3, and the binary representations 620 are represented by B_1, B_2, and B_3, respectively. It is preferable for the video embeddings 618 to have a relationship in the continuous embedding space that is inversely related to a relationship between the binary representations 620 in the binary space. For example, if a cosine similarity between F_1 and F_2 is larger than a cosine similarity between F_1 and F_3 in the continuous embedding space, then it is preferable for a Hamming distance between B_1 and B_2 to be smaller than a Hamming distance between B_1 and B_3. A comparison of the distances in the continuous embedding space can be represented by an indicator term, Θ, defined as:

Θ = 1 if d_cos(F_1, F_2) > d_cos(F_1, F_3), and Θ = -1 in all other cases    (11)

where d_cos depicts the cosine similarity in the continuous embedding space. As such, the semantic loss function is defined as:

L_sem = max( 0, Θ · ( d_hash(B_1, B_2) - d_hash(B_1, B_3) ) )    (12)

where L_sem is the semantic loss 692, and d_hash is a function that depicts the Hamming distance in the binary space.
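The sketch below illustrates one way to compute this ordering-based semantic loss for a triplet of videos, per equations (11)-(12); the use of a hard Hamming distance (a differentiable relaxation would typically be substituted during training) and the vector sizes are assumptions for illustration.

```python
import torch
import torch.nn.functional as F

def semantic_loss(F1, F2, F3, B1, B2, B3):
    """Equations (11)-(12): penalize hash distances whose ordering disagrees with the
    cosine-similarity ordering of the continuous embeddings."""
    # Indicator term: +1 if F1 is more similar to F2 than to F3, else -1.
    theta = 1.0 if F.cosine_similarity(F1, F2, dim=0) > F.cosine_similarity(F1, F3, dim=0) else -1.0
    d12 = (B1 != B2).float().sum()      # Hamming distance in the binary space
    d13 = (B1 != B3).float().sum()
    return torch.clamp(theta * (d12 - d13), min=0.0)

F1, F2, F3 = torch.randn(512), torch.randn(512), torch.randn(512)      # continuous embeddings
B1 = torch.randint(0, 2, (64,)); B2 = torch.randint(0, 2, (64,)); B3 = torch.randint(0, 2, (64,))
loss_sem = semantic_loss(F1, F2, F3, B1, B2, B3)
```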
[0058] The calculated semantic loss 692 is used as feedback to train the encoder 658 and the decoder 660 to generate a binary representation 620 that retains the semantic similarity from the original video embeddings 618. In some situations, at least one of the relationships between the video embeddings 618 in the continuous embedding space is inversely related to the corresponding relationships between the binary representations 620 in the binary space, and the semantic loss 692 is zero. In some situations, none of the relationships between the video embeddings 618 in the continuous embedding space is inversely related to the corresponding relationships between the binary representations 620 in the binary space, and the semantic loss 692 is a positive number. When the semantic loss 692 is non-zero (e.g., is a positive number), the encoder 658 and decoder 660 are retrained, and the values of the weight W_binary and the bias b_binary in equation (9) are changed. In some embodiments, training the encoder 658 and the decoder 660 includes adjusting values of the elements of the weight W_binary or the factor α in equation (9) to minimize the semantic loss 692.
[0059] In some embodiments, a video classification loss 694 can be calculated using a video classification loss function that compares a video classification 624 to a ground truth video classification 602. The ground truth video classification 602 optionally includes a ground truth label associated with the video content 500. A video action classification layer 662 (e.g., a fully connected neural network layer) receives the reconstructed video embedding 622 provided by the decoder 660, and determines the video classification 624 based on the received reconstructed video embedding 622. The video classification 624 is defined as:

y' = F_class(F')    (13)

where y' is the predicted video classification 624, F' is the reconstructed video embedding 622, and F_class is a linear layer that receives the reconstructed video embedding 622. The video classification loss 694 is calculated using a cross-entropy function, CE, that compares the predicted video classification 624 (y') to the ground truth video classification 602 (y). The video classification loss function is defined as:

L_CE = CE(y', y)    (14)

where L_CE is the video classification loss 694.
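A short sketch of the linear classification layer and cross-entropy loss of equations (13)-(14) follows; the class count and embedding dimension are assumptions for illustration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

num_classes, dim = 10, 512                      # illustrative class count and embedding size
classifier = nn.Linear(dim, num_classes)        # F_class, the video action classification layer

F_recon = torch.randn(1, dim)                   # reconstructed video embedding F'
logits = classifier(F_recon)                    # equation (13): y' = F_class(F')
y_true = torch.tensor([3])                      # ground-truth class index y
loss_ce = F.cross_entropy(logits, y_true)       # equation (14): L_CE = CE(y', y)
```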
[0060] In some implementations, a global loss 696 is calculated using a global loss function (also called global objective function) that is based on a weighted combination of the reconstruction loss 690, the semantic loss 692, and the video classification loss 694.
Specifically, the global loss function is defined as:

L = λ_rec · L_rec + λ_sem · L_sem + λ_CE · L_CE    (15)

where L is the global loss 696, L_rec is the reconstruction loss 690, L_sem is the semantic loss 692, L_CE is the video classification loss 694, and λ_rec, λ_sem, and λ_CE are weighting parameters for the reconstruction loss 690, the semantic loss 692, and the video classification loss 694, respectively.
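A one-line sketch of this weighted combination is given below; the weighting values and example loss values are arbitrary assumptions for illustration, not values specified by the disclosure.

```python
import torch

# Equation (15): weighted sum of the three training losses.
lambda_rec, lambda_sem, lambda_ce = 1.0, 0.5, 1.0                                         # assumed weights
loss_rec, loss_sem, loss_ce = torch.tensor(0.20), torch.tensor(0.10), torch.tensor(1.30)  # example values

global_loss = lambda_rec * loss_rec + lambda_sem * loss_sem + lambda_ce * loss_ce
```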
[0061] The global loss 696 balances the competing training objectives, including the deep learning model's ability to retain the information stored in the continuous, real-valued video embedding 618. The deep learning model 600 can be trained by minimizing the global loss 696 shown in equation (15). For example, the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 of the deep learning model 600 are modified until the global loss 696 satisfies a predefined training criterion.
[0062] In some implementations, the deep learning model 600 is trained end-to-end such that the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 are trained jointly using any of the reconstruction loss 690, semantic loss 692, video classification loss 694, and global loss 696. Alternatively, each of the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 is trained individually. Alternatively, a subset, less than all, of the clip feature extraction model 650, the bi-directional self-attention model 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 is trained jointly. In some embodiments, the deep learning model 600 is trained in a server 102, and provided to and implemented on a client device 104 (e.g., a mobile phone).
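The training-step sketch below shows how the losses could drive a joint (end-to-end) update; the `model` object is a hypothetical wrapper that bundles the sub-models and returns the three losses for a batch, and is not an API defined by the disclosure.

```python
import torch

def train_step(model, optimizer, batch, lambdas=(1.0, 0.5, 1.0)):
    """One joint update of all sub-models using the weighted global loss (equation (15))."""
    loss_rec, loss_sem, loss_ce = model(batch)          # forward pass returns the three losses
    global_loss = sum(w * l for w, l in zip(lambdas, (loss_rec, loss_sem, loss_ce)))
    optimizer.zero_grad()
    global_loss.backward()                              # backward propagation through all sub-models
    optimizer.step()                                    # jointly adjust the weights
    return global_loss.item()
```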
[0063] In some embodiments, training the bi-directional self-attention model 654 includes adjusting the value of any of the transformation matrices W_1 and W_2 and the biases b_1 and b_2 in equation (6) to minimize any of the video classification loss 694 and the global loss 696. In some embodiments, training the adaptive pooling model 656 includes adjusting the value of any of W_a and W_b in equation (8) to minimize any of the video classification loss 694 and the global loss 696. In some embodiments, training the encoder 658 includes adjusting the value of any of the weight W_binary and the factor α in equation (9) to minimize any of the reconstruction loss 690, the semantic loss 692, the video classification loss 694, and the global loss 696.
[0064] Figure 7 is a flowchart of a method 700 for generating video embeddings, in accordance with some embodiments. The method 700 is implemented by a computer system (e.g., a client device 104, a server 102, or a combination thereof). An example of the client device 104 is a mobile phone. In an example, the method 700 is applied to generate a binary representation 620 for the video content 500 based on video segments 502 of the video content 500. The video content item 500 may be, for example, captured by a first electronic device (e.g., a surveillance camera or a personal device), and streamed to a server 102 (e.g., for storage at storage 106 or a database associated with the server 102) to be labeled and classified. The server associates the video content item 500 with a binary representation and provides the video content item 500 with the binary representation 620 to one or more second electronic devices that are distinct from or include the first electronic device. In some embodiments, a deep learning model 600 used to implement the method 700 is trained at the server 102 and provided to a client device 104 that applies the deep learning model locally to determine the binary representation 620 for one or more video content items 500 obtained or captured by the client device 104.
[0065] Method 700 is, optionally, governed by instructions that are stored in a non-transitory computer readable storage medium and that are executed by one or more processors of the computer system. Each of the operations shown in Figure 7 may correspond to instructions stored in a computer memory or non-transitory computer readable storage medium (e.g., memory 206 of the system 200 in Figure 2). The computer readable storage medium may include a magnetic or optical disk storage device, solid state storage devices such as Flash memory, or other non-volatile memory device or devices. The instructions stored on the computer readable storage medium may include one or more of: source code, assembly language code, object code, or other instruction format that is interpreted by one or more processors. Some operations in method 700 may be combined and/or the order of some operations may be changed.
[0066] The computer system obtains (710) a video content item 500 that includes a plurality of successive video segments 502, and generates (720) a plurality of clip descriptors 612 for the plurality of successive video segments 502 of the video content 500 using a clip feature extraction model 650. Each of the plurality of successive video segments 502 corresponds to a respective one of the clip descriptors 612. The computer system fuses (730) the plurality of clip descriptors 612 using a bidirectional attention network (e.g., bidirectional self-attention model 654) to generate a plurality of global descriptors 616. Each of the plurality of clip descriptors 612 corresponds to a respective one of the global descriptors 616. The computer system pools (740) the plurality of global descriptors 616 to form a video embedding 618 (e.g., a floating vector, a continuous floating vector) for the video content 500 using an adaptive pooling model 656. The computer system converts (750) the video embedding 618 to a binary representation 620 of the video content 500 using an encoder 658. The binary representation 620 includes a plurality of elements, and each of the plurality of elements is an integer number in a predefined binary range. The binary representation 620 of the video content 500 retains semantic information from the video embedding 618.
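For orientation, the sketch below strings operations 710-750 together as a single inference pipeline; the callable arguments (a clip model, a positional-encoding function, an attention module, and a pooling/binarization module, such as the hypothetical sketches above) and the tensor layout are assumptions, not components defined by the disclosure.

```python
import torch

def video_to_binary_representation(frames, clip_model, pos_encode, attention,
                                   pool_and_binarize, clip_len: int = 16):
    """Method 700 as a pipeline: group (710), describe (720), fuse (730), pool (740), convert (750)."""
    # 710: group successive frames (channels, T, H, W) into fixed-length video segments (clips).
    clips = [frames[:, i:i + clip_len]
             for i in range(0, frames.shape[1] - clip_len + 1, clip_len)]
    # 720: one clip descriptor per segment via the clip feature extraction model.
    descriptors = torch.stack([clip_model(c.unsqueeze(0)).squeeze(0) for c in clips])   # (m, D)
    # Encode temporal position, then 730: bidirectionally fuse into global descriptors.
    global_descriptors = attention(descriptors + pos_encode(*descriptors.shape))
    # 740 and 750: adaptively pool into the video embedding, then binarize it.
    _, binary_rep = pool_and_binarize(global_descriptors)
    return binary_rep
```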
[0067] In some embodiments, the video embedding 618 is a floating point vector in which each element of the video embedding 618 is a floating number having a first number of bits, and the integer number of the each of the plurality of elements of the binary representation 620 has a second number of bits that is fewer than the first number of bits (e.g., the binary representation is a vector that has a smaller storage requirement relative to a storage requirement of the video embedding 618). For example, an element of the floating point vector has 32 bits, and a corresponding element of the binary representation 620 has 2 bits, e.g., is equal to 0, 1, 2, or 3.
[0068] In some embodiments, the encoder 658 includes a fully connected layer. To convert the video embedding 618 to the binary representation 620, the computer system further encodes the video embedding 618 to a plurality of video features using the fully connected layer, and the computer system converts each of the plurality of video features to a respective element of the binary representation 620.
[0069] In some embodiments, the predefined binary range corresponds to N bits and is [0, 2^N - 1], with N being an integer, and each element of the binary representation 620 is equal to 0, 1, ..., or 2^N - 1. For example, if N is equal to 1, each of the plurality of elements of the binary representation is either 0 or 1. Also, N can be any integer number greater than 1. In some embodiments, N is determined based on a resolution requirement of the binary representation 620 of the video content 500.
[0070] In some embodiments, the bi-directional attention network 654 combines each intermediate clip descriptor bidirectionally with a first subset of the plurality of clip descriptors that precede the intermediate clip descriptor and a second subset of the plurality of clip descriptors that follow the intermediate clip descriptor to generate an intermediate global descriptor that is one of the plurality of global descriptors. The intermediate global descriptor is not a first global descriptor or a last global descriptor of the entire video content 500. For example, the bi-directional attention network 654 bidirectionally combines a second clip descriptor 612-2 with the clip descriptor 612-1 (e.g., the first subset of clip descriptors) and the clip descriptor 612-3 (e.g., the second subset of clip descriptors). In some embodiments, the second subset of clip descriptors also includes other clip descriptors, such as any of clip descriptors 612-4, 612-5, ..., 612-m.
[0071] In some embodiments, the clip feature extraction model includes a plurality of three-dimensional (3D) convolutional neural networks (3D-CNNs), and each of the plurality of 3D convolutional neural networks is configured to generate one of the plurality of clip descriptors 612 for a corresponding one of the plurality of video segments 502 of the video content 500.
[0072] In some embodiments, to generate the plurality of clip descriptors 612, the computer system encodes each of the plurality of clip descriptors 612 with positional information, as shown in equations (1) and (2). The positional information indicates a temporal position of one of the plurality of video segments 502 in the video content 500 corresponding to the one of the plurality of clip descriptors 612. In some embodiments, the computer system generates a positional encoding term (PE) for each clip descriptor (e.g., each feature element). The positional encoding term is described above with respect to equations (1) and (2).
[0073] In some embodiments, the computer system reconstructs a reconstructed video embedding 622 from the binary representation 620 using a decoder 660. In some embodiments, the computer system also trains at least a subset of the clip feature extraction model 650, the bi-directional attention network 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 based on a reconstruction loss function that compares the reconstructed video embedding 622 to the video embedding 618. In some embodiments, at least a subset of the deep learning model 600 is trained to minimize the reconstruction loss function.
[0074] In some embodiments, the computer system reconstructs a reconstructed video embedding 622 from the binary representation 620 of the video content 500 using a decoder 660. In some embodiments, the computer system trains at least a subset of the clip feature extraction model, the bi-directional attention network 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 using a semantic loss function that compares the reconstructed video embedding 622 to the video embedding. The semantic loss has: a first value (e.g., zero) indicating that the reconstructed video embedding 622 and the binary representation 620 of the video content 500 are similar, or a second value (e.g., a nonzero value) indicating that the reconstructed video embedding 622 and the binary representation 620 of the video content 500 are not similar. Similarity between the reconstructed video embedding 622 and the binary representation 620 is determined based on a comparison of a cosine similarity between the video embeddings 618 in the continuous embedding space to a Hamming Distance between the binary representations 620 corresponding to the video embeddings 618 in the binary space as described above with respect to equation (11). In accordance with the determination that the reconstructed video embedding 622 and the binary representation 620 of the video content 500 are not similar, the semantic loss 692 has the second value and the computer system introduces a correction term to at least one of the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658.
[0075] In some embodiments, the computer system determines a video classification 624 of the video content 500 from the reconstructed video embedding 622 using a video action classification layer 662 (e.g., a fully connected neural network layer). The computer system trains at least a subset of the clip feature extraction model 650, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 using a classification loss function that compares the video classification 624 of the video content 500 with a ground truth video classification 602 of the video content 500. In some embodiments, the clip feature extraction model 650, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 are trained in an end-to-end manner.
[0076] In some embodiments, a global loss function is a weighted combination of a reconstruction loss 690 determined by a reconstruction loss function, a semantic loss 692 determined by a semantic loss function, and a video classification loss 694 determined by a classification loss function. At least a subset of the clip feature extraction model 650, the bi- directional attention network 654, the adaptive pooling model 656, the encoder 658, and the decoder 660 are trained using the global loss function (e.g., based on the global loss 696).
[0077] In some embodiments, the clip feature extraction model 650, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 are trained on a server 102 and provided to an electronic device 104 that is distinct from the server 102.
[0078] In some embodiments, the clip feature extraction model, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 are trained on a server 102. The server 102 receives the video content 500 from an electronic device 104 that is distinct from the server 102, and the server 102 provides the binary representation 620 of the video content 500 to the electronic device 104.
[0079] In some embodiments, the computer system trains the clip feature extraction model, the bi-directional attention network 654, the adaptive pooling model 656, and the encoder 658 on an electronic device 104, and the binary representation 620 of the video content 500 is determined by the electronic device 104.
[0080] In some embodiments, the plurality of successive video segments 502 includes a first video segment 502-1 and a second video segment 502-2, and a portion, less than all, of the second video segment 502-2 overlaps with the first video segment 502-1. For example, the first video segment 502-1 and second video segment 502-2 shares one or more video frames 512 (e.g., includes at least one video frame 512 in common with each other).
[0081] In some embodiments, each video segment of the plurality of successive video segments 502 is distinct from and non-overlapping with any other segments of the plurality of successive video segments. For example, the first video segment 502-1 and a second video segment 502-2 do not share a common video frame 512.
[0082] In some embodiments, the video content 500 includes a plurality of frames 512, and each video segment of the plurality of successive video segments 502 includes at least a subset of the plurality of frames 512. In some embodiments, the video content 500 includes a first number of video frames 512, and every second number of video frames is grouped into a respective video segment 502. For example, if the video content 500 includes a total of 300 video frames 512 for a 10-second video and each video segment 502 includes 16 video frames 512, the video content 500 is grouped into approximately 19-20 video segments 502, and the video embedding 618 derived from the corresponding global descriptors 616 covers roughly 10 seconds of video.
[0083] It should be understood that the particular order in which the operations in Figure 7 have been described are merely exemplary and are not intended to indicate that the described order is the only order in which the operations could be performed. One of ordinary skill in the art would recognize various ways to label video content as described herein. Additionally, it should be noted that details of other processes described above with respect to Figures 6A - 6C are also applicable in an analogous manner to method 700 described above with respect to Figure 7. For brevity, these details are not repeated here.
[0084] The terminology used in the description of the various described implementations herein is for the purpose of describing particular implementations only and is not intended to be limiting. As used in the description of the various described implementations and the appended claims, the singular forms “a”, “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will also be understood that the term “and/or” as used herein refers to and encompasses any and all possible combinations of one or more of the associated listed items. It will be further understood that the terms “includes,” “including,” “comprises,” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof. Additionally, it will be understood that, although the terms “first,” “second,” etc. may be used herein to describe various elements, these elements should not be limited by these terms. These terms are only used to distinguish one element from another.
[0085] As used herein, the term "if" is, optionally, construed to mean "when" or "upon" or "in response to determining" or "in response to detecting" or "in accordance with a determination that," depending on the context. Similarly, the phrase "if it is determined" or "if [a stated condition or event] is detected" is, optionally, construed to mean "upon determining" or "in response to determining" or "upon detecting [the stated condition or event]" or "in response to detecting [the stated condition or event]" or "in accordance with a determination that [a stated condition or event] is detected," depending on the context.
[0086] The foregoing description, for purpose of explanation, has been described with reference to specific embodiments. However, the illustrative discussions above are not intended to be exhaustive or to limit the claims to the precise forms disclosed. Many modifications and variations are possible in view of the above teachings. The embodiments were chosen and described in order to best explain principles of operation and practical applications, to thereby enable others skilled in the art.
[0087] Although various drawings illustrate a number of logical stages in a particular order, stages that are not order dependent may be reordered and other stages may be combined or broken out. While some reordering or other groupings are specifically mentioned, others will be obvious to those of ordinary skill in the art, so the ordering and groupings presented herein are not an exhaustive list of alternatives. Moreover, it should be recognized that the stages can be implemented in hardware, firmware, software or any combination thereof.

Claims

What is claimed is:
1. A method for generating a video embedding, comprising: obtaining video content including a plurality of successive video segments; generating a plurality of clip descriptors for the plurality of successive video segments of the video content using a clip feature extraction model, wherein each of the plurality of successive video segments corresponds to a respective one of the plurality of clip descriptors; fusing the plurality of clip descriptors using a bi-directional attention network to generate a plurality of global descriptors, each of the plurality of clip descriptors corresponding to a respective one of the plurality of global descriptors; pooling the plurality of global descriptors to form the video embedding for the video content using an adaptive pooling model; and converting the video embedding to a binary representation of the video content using an encoder, wherein the binary representation includes a plurality of elements and each of the plurality of elements is an integer number in a predefined binary range.
2. The method of claim 1, wherein: the video embedding is a floating point vector in which each element of the video embedding is a floating number having a first number of bits; and the integer number of the each of the plurality of elements of the binary representation has a second number of bits, the second number being less than the first number.
3. The method of claim 1 or 2, wherein the encoder includes a fully connected layer, converting the video embedding to the binary representation further comprising: encoding the video embedding to a plurality of video features using the fully connected layer; and converting each of the plurality of video features to a respective element of the binary representation.
4. The method of any of the preceding claims, wherein the predefined binary range corresponds to N bits and is [0, 2^N - 1], N being an integer, and the each of the plurality of elements of the binary representation is equal to 0, 1, ..., or 2^N - 1.
5. The method of any of the preceding claims, wherein fusing the plurality of clip descriptors using the bi-directional attention network further comprises: combining an intermediate clip descriptor bidirectionally with a first subset of the plurality of clip descriptors that precede the intermediate clip descriptor and a second subset of the plurality of clip descriptors that follow the intermediate clip descriptor to generate one of the plurality of global descriptors.
6. The method of any of the preceding claims, wherein the clip feature extraction model includes a plurality of three-dimensional (3D) convolutional neural networks, and each of the plurality of 3D convolutional neural networks is configured to generate one of the plurality of clip descriptors for a corresponding one of the plurality of video segments of the video content.
7. The method of any of the preceding claims, wherein generating the plurality of clip descriptors comprises: encoding one of the plurality of clip descriptors with positional information, the positional information indicating a temporal position of one of the plurality of video segments in the video content corresponding to the one of the plurality of clip descriptors.
8. The method of any of the preceding claims, further comprising: reconstructing a reconstructed video embedding from the binary representation using a decoder; and training at least a subset of the clip feature extraction model, the bi-directional attention network, the adaptive pooling model, the encoder, and the decoder based on a reconstruction loss function that compares the reconstructed video embedding to the video embedding.
9. The method of any of claims 1-7, further comprising: reconstructing a reconstructed video embedding from the binary representation of the video content using a decoder; training at least a subset of the clip feature extraction model, the bi-directional attention network, the adaptive pooling model, and the encoder based on a semantic loss that compares the reconstructed video embedding to the video embedding, wherein the semantic loss has: a first value indicating that the reconstructed video embedding and the binary representation of the video content are similar; or a second value indicating that the reconstructed video embedding and the binary representation of the video content are not similar; and in accordance with the semantic loss having the second value, introducing a correction term to at least one of the bi-directional attention network, the adaptive pooling model, and the encoder.
10. The method of any of claims 8-9, further comprising: determining a video classification of the video content from the reconstructed video embedding using an action classification layer; and training at least a subset of the clip feature extraction model, the bi-directional attention network, the adaptive pooling model, and the encoder using a classification loss function that compares the video classification of the video content with a ground truth video classification of the video content.
11. The method of any of claims 1-7, further comprising: determining a global loss function that is a weighted combination of a reconstruction loss determined by a reconstruction loss function, a semantic loss determined by a semantic loss function, and a classification loss determined by a classification loss function; and training at least a subset of the clip feature extraction model, the bi-directional attention network, the adaptive pooling model, and the encoder using the global loss function.
12. The method of any of the preceding claims, wherein the clip feature extraction model, the bi-directional attention network, the adaptive pooling model, and the encoder are trained on a server and provided to an electronic device that is distinct from the server.
13. The method of any of claims 1-11, wherein the clip feature extraction model, the bidirectional attention network, the adaptive pooling model, and the encoder are trained on a server, further comprising: receiving by the server the video content from an electronic device that is distinct from the server; and providing by the server the binary representation of the video content to the electronic device.
14. The method of any of claims 1-11, further comprising: training the clip feature extraction model, the bi-directional attention network, the adaptive pooling model, and the encoder on an electronic device, wherein the binary representation of the video content is determined by the electronic device.
15. The method of any of the preceding claims, wherein: the plurality of successive video segments includes a first video segment and a second video segment; and a portion, less than all, of the second video segment overlaps with the first video segment.
16. The method of any of the preceding claims, wherein each video segment of the plurality of successive video segments is distinct from and non-overlapping with any other segments of the plurality of successive video segments.
17. The method of any of the preceding claims, wherein: the video content includes a plurality of frames; and each video segment of the plurality of successive video segments includes at least a subset of the plurality of frames.
18. An electronic device, comprising: one or more processors; and memory having instructions stored thereon, which when executed by the one or more processors cause the processors to perform a method of any of claims 1-17.
19. A non-transitory computer-readable medium, having instructions stored thereon, which when executed by one or more processors cause the processors to perform a method of any of claims 1-17.
PCT/US2021/046010 2021-08-13 2021-08-13 Learning semantic binary embedding for video representations WO2023018423A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/US2021/046010 WO2023018423A1 (en) 2021-08-13 2021-08-13 Learning semantic binary embedding for video representations


Publications (1)

Publication Number Publication Date
WO2023018423A1 true WO2023018423A1 (en) 2023-02-16

Family

ID=85200868

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2021/046010 WO2023018423A1 (en) 2021-08-13 2021-08-13 Learning semantic binary embedding for video representations

Country Status (1)

Country Link
WO (1) WO2023018423A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170300814A1 (en) * 2016-04-13 2017-10-19 Google Inc. Wide and deep machine learning models
US20190149834A1 (en) * 2017-11-15 2019-05-16 Salesforce.Com, Inc. Dense Video Captioning
WO2021092631A9 (en) * 2021-02-26 2021-07-29 Innopeak Technology, Inc. Weakly-supervised text-based video moment retrieval



Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 21953599

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE