CN113591570A - Video processing method and device, electronic equipment and storage medium


Info

Publication number
CN113591570A
CN113591570A
Authority
CN
China
Prior art keywords
video
feature
boundary
information
prediction model
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110721821.4A
Other languages
Chinese (zh)
Inventor
吴文灏
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202110721821.4A
Publication of CN113591570A

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06F - ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 - Pattern recognition
    • G06F 18/20 - Analysing
    • G06F 18/24 - Classification techniques
    • G06F 18/241 - Classification techniques relating to the classification model, e.g. parametric or non-parametric approaches
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 20/00 - Machine learning


Abstract

The present application discloses a video processing method and apparatus, an electronic device and a storage medium, relating to the field of artificial intelligence, in particular to computer vision and deep learning, and specifically applicable to video analysis scenarios. The specific implementation scheme is as follows: acquire a video; perform feature extraction on the video to obtain feature information of the video; call a boundary prediction model to perform temporal boundary prediction on the feature information, so as to generate temporal boundaries of the video; and segment the video according to the temporal boundaries to generate video segments of the video. In this way, the accuracy of temporal boundary prediction can be improved, and the recall rate of temporal proposals in unsegmented videos can be increased.

Description

Video Processing Method, Apparatus, Electronic Device and Storage Medium

Technical Field

The present application relates to the field of artificial intelligence technology, in particular to computer vision and deep learning, is specifically applicable to video analysis scenarios, and in particular relates to a video processing method, apparatus, electronic device and storage medium.

Background

Temporal action localization takes an unsegmented video as input and localizes action segments according to the video content, including their start times and end times; the produced action segments are called temporal proposals. Temporal action localization is one of the most important and challenging problems in video understanding within computer vision, owing to its great application potential in video highlight generation, video recommendation, retrieval, and so on.

An important metric for evaluating temporal action localization methods is the average recall rate. Most current methods focus on generating flexible and accurate temporal boundaries together with reliable proposal confidence scores.

In the related art, deep-learning-based methods mainly fall into two categories:

① Methods based on regression over predefined anchor boxes generate a large number of candidate temporal proposals that may contain actions, and then select the correct candidates through a classification task;

② Methods that model the temporal relationships among video frames use local details around boundaries to predict the boundaries, and then combine the predicted boundaries to generate temporal proposals.

SUMMARY OF THE INVENTION

The present application provides a video processing method, apparatus, electronic device and storage medium.

According to one aspect of the present application, a video processing method is provided, comprising:

acquiring a video;

performing feature extraction on the video to obtain feature information of the video;

calling a boundary prediction model to perform temporal boundary prediction on the feature information, so as to generate temporal boundaries of the video; and

segmenting the video according to the temporal boundaries to generate video segments of the video.

According to another aspect of the present application, a video processing apparatus is provided, comprising:

a first acquisition module configured to acquire a video;

a second acquisition module configured to perform feature extraction on the video to obtain feature information of the video;

a first generation module configured to call a boundary prediction model to perform temporal boundary prediction on the feature information, so as to generate temporal boundaries of the video; and

a second generation module configured to segment the video according to the temporal boundaries to generate video segments of the video.

According to another aspect of the present application, an electronic device is provided, comprising:

at least one processor; and

a memory communicatively connected to the at least one processor; wherein

the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the video processing method described in the above aspect.

According to another aspect of the present application, a non-transitory computer-readable storage medium storing computer instructions is provided, wherein the computer instructions are used to cause a computer to perform the video processing method described in the above aspect.

According to another aspect of the present application, a computer program product is provided, comprising a computer program that, when executed by a processor, implements the video processing method described in the above aspect.

It should be understood that the content described in this section is not intended to identify key or critical features of the embodiments of the present application, nor is it intended to limit the scope of the application. Other features of the present application will become readily understood from the following description.

Description of Drawings

The accompanying drawings are provided for a better understanding of the present solution and do not constitute a limitation on the present application. In the drawings:

FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present application;

FIG. 2 is a schematic flowchart of another video processing method provided by an embodiment of the present application;

FIG. 3 is a schematic flowchart of another video processing method provided by an embodiment of the present application;

FIG. 4 is a schematic diagram of generating the global information of a video provided by a specific embodiment of the present application;

FIG. 5 is a schematic flowchart of another video processing method provided by an embodiment of the present application;

FIG. 6 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application; and

FIG. 7 is a block diagram of an electronic device for implementing the video processing method according to an embodiment of the present application.

Detailed Description

Exemplary embodiments of the present application are described below with reference to the accompanying drawings, including various details of the embodiments to facilitate understanding; they should be regarded as merely exemplary. Accordingly, those of ordinary skill in the art will recognize that various changes and modifications can be made to the embodiments described herein without departing from the scope and spirit of the present application. Likewise, descriptions of well-known functions and structures are omitted from the following description for clarity and conciseness.

The following describes the video processing method, apparatus, electronic device and storage medium of the embodiments of the present application with reference to the accompanying drawings.

Artificial intelligence is a discipline that studies how to use computers to simulate certain human thinking processes and intelligent behaviors (such as learning, reasoning, thinking and planning); it spans technologies at both the hardware level and the software level. Artificial intelligence hardware technologies generally include sensors, dedicated artificial intelligence chips, cloud computing, distributed storage, big data processing and the like; artificial intelligence software technologies mainly include computer vision, speech recognition, natural language processing, deep learning, big data processing and knowledge graph technologies.

Deep learning is a new research direction in the field of machine learning. It learns the inherent laws and representation levels of sample data, and the information obtained in the process is of great help in interpreting data such as text, images and sound. Its ultimate goal is to enable machines to analyze and learn like humans, and to recognize data such as text, images and sound. Deep learning is a complex machine learning algorithm whose results in speech and image recognition far exceed those of earlier related technologies.

Computer vision is a science that studies how to make machines "see": it uses cameras and computers instead of human eyes to identify, track and measure targets, and further performs graphics processing so that the processed images are more suitable for human observation or for transmission to instruments for detection. As a scientific discipline, computer vision studies related theories and technologies in an attempt to build artificial intelligence systems that can obtain "information" from images or multi-dimensional data. The information referred to here is information in Shannon's sense, which can be used to help make a "decision". Since perception can be regarded as extracting information from sensory signals, computer vision can also be regarded as the science of how to make artificial systems "perceive" from images or multi-dimensional data.

The video processing method provided by the embodiments of the present application may be performed by an electronic device, which may be a PC (Personal Computer), a tablet computer, a palmtop computer or the like; no limitation is imposed here.

In this embodiment of the present application, the electronic device may be provided with a processing component, a storage component and a driving component. Optionally, the driving component and the processing component may be integrated. The storage component may store an operating system, application programs or other program modules, and the processing component implements the video processing method provided by the embodiments of the present application by executing the application programs stored in the storage component.

FIG. 1 is a schematic flowchart of a video processing method provided by an embodiment of the present application.

The video processing method of the embodiments of the present application may also be executed by the video processing apparatus provided by the embodiments of the present application. The apparatus may be configured in an electronic device to perform feature extraction on an acquired video to obtain the feature information of the video, call a boundary prediction model to perform temporal boundary prediction on the feature information to generate the temporal boundaries of the video, and segment the video according to the temporal boundaries to generate video segments of the video, thereby improving the accuracy of temporal boundary prediction.

As a possible case, the video processing method of the embodiments of the present application may also be executed on the server side; the server may be a cloud server, and the video processing method may be executed in the cloud.

As shown in FIG. 1, the video processing method may include the following steps.

Step 101: acquire a video.

In this embodiment of the present application, the electronic device may acquire the video in multiple ways. ① The electronic device may acquire the video from a video providing device; for example, the electronic device may download the video from the video providing device through the Uniform Resource Locator (URL) corresponding to the video, where the video providing device may include a DVD player, a VCD player, a server, a USB flash drive, a smart hard disk, a mobile phone, and the like. ② The electronic device may store videos itself, and may acquire the target video from the videos it stores. ③ The electronic device may shoot the video through a built-in camera. No limitation is imposed here.

As a possible case, the above video may also be a video downloaded by the user from a relevant video website.

It should be noted that the video described in this embodiment may be a target video on which the user wants to perform temporal action localization to produce action segments (i.e., video segments).

Step 102: perform feature extraction on the video to obtain feature information of the video.

In this embodiment of the present application, feature extraction may be performed on the video according to a preset feature extraction algorithm to obtain the feature information of the video, where the preset feature extraction algorithm may be calibrated according to the actual situation.

Specifically, after acquiring the video, the electronic device may perform feature extraction on the video according to the preset feature extraction algorithm to obtain the feature information of the video, where the feature information may be feature sequence information of the video.

As a possible case, after acquiring the video, the electronic device may also perform feature extraction on the video through a feature extraction tool (for example, a plug-in) to obtain the feature information of the video.
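The patent does not prescribe a particular extraction algorithm. The following is a minimal sketch of snippet-level feature extraction; all names here are illustrative assumptions, and `backbone` stands in for whatever pretrained model (for example, a two-stream or 3D CNN) is actually used to embed each snippet:

```python
import numpy as np

def extract_features(frames, backbone, snippet_len=16):
    """Split the video into fixed-length snippets and map each snippet to a
    feature vector with `backbone`, yielding the video's feature sequence
    of shape [num_snippets, feature_dim]."""
    num_snippets = len(frames) // snippet_len
    feats = [backbone(frames[i * snippet_len:(i + 1) * snippet_len])
             for i in range(num_snippets)]
    return np.stack(feats)
```

The resulting feature sequence is what the boundary prediction model consumes in step 103 below.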

Step 103: call a boundary prediction model to perform temporal boundary prediction on the feature information, so as to generate temporal boundaries of the video.

It should be noted that the boundary prediction model described in this embodiment may be trained in advance and pre-stored in the storage space of the electronic device to facilitate retrieval and use. The storage space is not limited to entity-based storage space such as a hard disk; it may also be the storage space of a network hard disk connected to the electronic device (cloud storage space).

The training and generation of the above boundary prediction model may both be performed by a related server. The server may be a cloud server or a computer host. A communication connection is established between the server and the electronic device capable of executing the video processing method provided by the embodiments of the present application, and the communication connection may be at least one of a wireless network connection and a wired network connection. The server may send the trained boundary prediction model to the electronic device so that the electronic device can call it when needed, thereby greatly reducing the computing pressure on the electronic device.

Specifically, after acquiring the feature information of the video, the electronic device may first retrieve the boundary prediction model from its own storage space, and then input the feature information into the boundary prediction model, so that the boundary prediction model performs temporal boundary prediction on the feature information to obtain the temporal boundaries of the video output (generated) by the boundary prediction model.

Step 104: segment the video according to the temporal boundaries to generate video segments of the video.

In this embodiment of the present application, there may be multiple temporal boundaries, and a video segment is determined by two boundaries, a start and an end, i.e., the temporal boundaries corresponding to the start time and the end time, respectively. That is, the temporal boundaries may be multiple in number, and may be an even number.

Specifically, after obtaining the temporal boundaries of the video, the electronic device may segment the video according to the temporal boundaries to generate the video segments of the video.

For example, suppose there are multiple temporal boundaries. After obtaining the multiple temporal boundaries of the video, the electronic device may first analyze them to determine multiple groups of start-time and end-time boundaries, and then segment the video according to each group's start-time and end-time boundaries to generate multiple video segments of the video.
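As a concrete illustration of this grouping, the sketch below pairs sorted boundary times into (start, end) groups and cuts out the corresponding frame ranges. The pairing scheme and the `fps` parameter are assumptions made for illustration, not a prescription from the patent:

```python
def segment_video(frames, boundaries, fps=25.0):
    """Pair sorted temporal boundaries into (start, end) groups and cut the
    corresponding frame ranges out of the video."""
    times = sorted(boundaries)                # e.g. [2.0, 5.5, 9.0, 12.25]
    pairs = zip(times[0::2], times[1::2])     # [(2.0, 5.5), (9.0, 12.25)]
    return [frames[int(start * fps):int(end * fps)] for start, end in pairs]
```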

Thus, relevant personnel can perform video highlight generation, video recommendation, retrieval and so on based on the video processing method of the embodiments of the present application.

In this embodiment of the present application, a video is first acquired, and feature extraction is performed on it to obtain the feature information of the video; a boundary prediction model is then called to perform temporal boundary prediction on the feature information to generate the temporal boundaries of the video; finally, the video is segmented according to the temporal boundaries to generate the video segments of the video. In this way, the accuracy of temporal boundary prediction can be improved, thereby increasing the recall rate of temporal proposals in unsegmented videos.

To clearly explain the previous embodiment, in one embodiment of the present application, as shown in FIG. 2, performing feature extraction on the video to obtain the feature information of the video may include the following steps.

Step 201: acquire a feature extraction model.

It should be noted that the feature extraction model described in this embodiment may be trained in advance and pre-stored in the storage space of the electronic device to facilitate retrieval and use.

Step 202: input the video into the feature extraction model.

Step 203: perform feature extraction on the video through the feature extraction model to obtain the feature information of the video.

Specifically, after acquiring the video, the electronic device may retrieve (acquire) the feature extraction model from its own storage space and input the video into the feature extraction model, which performs feature extraction on the video and thereby outputs the feature information of the video. Thus, assisting the extraction of video feature information with a feature extraction model can improve the accuracy of recognition.

Further, in one embodiment of the present application, the boundary prediction model may be a boundary prediction model based on the Transformer mechanism. As shown in FIG. 3, the boundary prediction model performs temporal boundary prediction on the feature information through the following steps to generate the temporal boundaries of the video.

Step 301: generate a plurality of feature vectors according to the feature information.

Specifically, after the electronic device inputs the feature information of the video into the boundary prediction model, the model can map the feature information into different vectors (i.e., a plurality of feature vectors) through the self-attention mechanism inside the Transformer.

Step 302: generate global information of the video according to the plurality of feature vectors.

In this embodiment of the present application, the plurality of feature vectors can be fused by computing similarities across different temporal positions, thereby obtaining the global information of the video.

To clearly explain the previous embodiment, in one embodiment of the present application, the plurality of feature vectors may include a first feature vector, a second feature vector and a third feature vector, and generating the global information of the video according to the plurality of feature vectors may include: acquiring the dimension of the first feature vector, and generating the global information of the video according to the first feature vector, the second feature vector, the third feature vector and the dimension.

It should be noted that the first, second and third feature vectors described in this embodiment may be the Query, Key and Value vectors, respectively.

Specifically, referring to FIG. 4, suppose the plurality of feature vectors includes a first feature vector (the Query vector), a second feature vector (the Key vector) and a third feature vector (the Value vector). After obtaining the plurality of feature vectors, the above boundary prediction model first acquires the dimension of the first feature vector (the Query vector), multiplies the first feature vector (the Query vector) by the second feature vector (the Key vector) to obtain a first intermediate value, then divides the first intermediate value by the above dimension to obtain a second intermediate value, and finally multiplies the second intermediate value by the third feature vector (the Value vector) to obtain the global information of the video.
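In code, the computation just described reads as follows. This is a sketch assuming single-head attention over a feature sequence of length T; note that the text divides by the Query dimension itself, whereas the standard Transformer formulation divides by its square root and applies a softmax to the scores:

```python
import numpy as np

def global_information(Q, K, V):
    """Compute the video's global information from the Query, Key and Value
    matrices (each of shape [T, d]) as described above."""
    d = Q.shape[-1]           # dimension of the Query vectors
    scores = Q @ K.T          # first intermediate value: pairwise similarities [T, T]
    scores = scores / d       # second intermediate value, scaled by the dimension
    return scores @ V         # multiply by Value: globally aggregated features [T, d]
```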

Step 303: generate the temporal boundaries of the video according to the global information.

Specifically, after computing the above global information through the self-attention mechanism inside the Transformer, the boundary prediction model can perform temporal boundary prediction according to the global information to generate the temporal boundaries of the video.

Thus, the boundary prediction model can effectively perform temporal boundary prediction on the feature information of the video, thereby improving the accuracy of temporal boundary prediction.
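The patent does not detail how the boundaries are read off the global information. One common approach, shown here purely as an assumed sketch, is to project each temporal position to start/end probabilities and keep local peaks above a threshold:

```python
import numpy as np

def pick_boundaries(global_feats, w_start, w_end, threshold=0.5):
    """Hypothetical boundary head: score every temporal position as a start
    or an end, then keep local maxima above `threshold`. w_start and w_end
    are assumed learned projection vectors of shape [d]."""
    def peaks(scores):
        return [t for t in range(1, len(scores) - 1)
                if scores[t] > threshold
                and scores[t] >= scores[t - 1]
                and scores[t] >= scores[t + 1]]

    start_scores = 1.0 / (1.0 + np.exp(-(global_feats @ w_start)))  # sigmoid
    end_scores = 1.0 / (1.0 + np.exp(-(global_feats @ w_end)))
    return peaks(start_scores), peaks(end_scores)
```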

Further, in one embodiment of the present application, as shown in FIG. 5, the boundary prediction model may be generated in the following manner.

Step 501: acquire a sample video, and acquire labels of a plurality of sample video segments in the sample video.

In this embodiment of the present application, there are multiple ways to acquire the sample video: a video may be downloaded from a relevant video website as the sample video, or the sample video may be created deliberately, for example, by shooting with a video camera. No limitation is imposed here.

In this embodiment of the present application, the above labels may be annotated on the sample video by relevant personnel and pre-stored in the storage space of the electronic device to facilitate retrieval and use.

Step 502: perform feature extraction on the sample video to obtain sample feature information of the sample video.

In this embodiment of the present application, feature extraction may be performed on the sample video according to the above preset feature extraction algorithm to obtain the sample feature information of the sample video.

Specifically, after the sample video is acquired, the labels of the plurality of sample video segments in the sample video may be acquired from the storage space according to the sample video. Feature extraction may then be performed on the sample video according to the above preset feature extraction algorithm to obtain the sample feature information of the sample video.

As a possible case, after the labels of the plurality of sample video segments in the sample video are acquired, feature extraction may also be performed on the sample video according to the above feature extraction model to obtain the sample feature information of the sample video.

As another possible case, after the labels of the plurality of sample video segments in the sample video are acquired, feature extraction may also be performed on the sample video through a feature extraction tool (for example, a plug-in) to obtain the sample feature information of the sample video.

Step 503: input the sample feature information into the boundary prediction model to generate predicted boundary scores.

Step 504: generate a loss value according to the predicted boundary scores and the labels, and train the boundary prediction model according to the loss value.

Specifically, after the sample feature information of the sample video is acquired, the sample feature information may be input into the boundary prediction model to generate predicted boundary scores; a loss value is then generated according to the predicted boundary scores and the above labels, and the boundary prediction model is trained according to the loss value (for example, stochastic gradient descent (SGD) is used to optimize the loss value, continuously updating the network weight layers and scaling parameters in the boundary prediction model until the loss value converges and training stops), thereby optimizing the boundary prediction model and improving the accuracy of recognition.

It should be noted that the loss value described in this embodiment can be computed via categorical cross-entropy.
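A minimal PyTorch-style sketch of this training procedure follows. Every name is illustrative: `model` is assumed to output per-position boundary scores in [0, 1], and `labels` are the corresponding 0/1 boundary indicators:

```python
import torch

def train_boundary_model(model, loader, epochs=10, lr=1e-3):
    """Optimize the cross-entropy between predicted boundary scores and
    labels with stochastic gradient descent (SGD), updating the model's
    weights until the loss converges."""
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.BCELoss()              # cross-entropy for 0/1 boundary labels
    for _ in range(epochs):
        for sample_features, labels in loader:
            scores = model(sample_features)   # predicted boundary scores
            loss = loss_fn(scores, labels)    # loss value from scores and labels
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()                  # update weights and scaling parameters
```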

According to the video processing method of the embodiments of the present application, a video is first acquired, and feature extraction is performed on it to obtain the feature information of the video; a boundary prediction model is then called to perform temporal boundary prediction on the feature information to generate the temporal boundaries of the video; finally, the video is segmented according to the temporal boundaries to generate the video segments of the video. In this way, the accuracy of temporal boundary prediction can be improved, thereby increasing the recall rate of temporal proposals in unsegmented videos.

FIG. 6 is a schematic structural diagram of a video processing apparatus provided by an embodiment of the present application.

The video processing apparatus of the embodiments of the present application may be configured in an electronic device to perform feature extraction on an acquired video to obtain the feature information of the video, call a boundary prediction model to perform temporal boundary prediction on the feature information to generate the temporal boundaries of the video, and segment the video according to the temporal boundaries to generate video segments of the video, thereby improving the accuracy of temporal boundary prediction.

As shown in FIG. 6, the video processing apparatus 600 may include: a first acquisition module 610, a second acquisition module 620, a first generation module 630 and a second generation module 640.

The first acquisition module 610 is configured to acquire a video.

In this embodiment of the present application, the first acquisition module 610 may acquire the video in multiple ways. ① The first acquisition module 610 may acquire the video from a video providing device; for example, the electronic device may download the video from the video providing device through the Uniform Resource Locator (URL) corresponding to the video, where the video providing device may include a DVD player, a VCD player, a server, a USB flash drive, a smart hard disk, a mobile phone, and the like. ② The electronic device may store videos, and the first acquisition module 610 may acquire the target video from the videos stored in the electronic device. ③ The first acquisition module 610 may shoot the video through a camera built into the electronic device. No limitation is imposed here.

As a possible case, the above video may also be a video downloaded by the user from a relevant video website.

It should be noted that the video described in this embodiment may be a target video on which the user wants to perform temporal action localization to produce action segments (i.e., video segments).

The second acquisition module 620 is configured to perform feature extraction on the video to obtain the feature information of the video.

In this embodiment of the present application, feature extraction may be performed on the video according to a preset feature extraction algorithm to obtain the feature information of the video, where the preset feature extraction algorithm may be calibrated according to the actual situation.

Specifically, after the first acquisition module 610 acquires the video, the second acquisition module 620 may perform feature extraction on the video according to the preset feature extraction algorithm to obtain the feature information of the video, where the feature information may be feature sequence information of the video.

As a possible case, after the video is acquired, the second acquisition module 620 may also perform feature extraction on the video through a feature extraction tool (for example, a plug-in) to obtain the feature information of the video.

The first generation module 630 is configured to call a boundary prediction model to perform temporal boundary prediction on the feature information, so as to generate the temporal boundaries of the video.

It should be noted that the boundary prediction model described in this embodiment may be trained in advance and pre-stored in the storage space of the electronic device to facilitate retrieval and use. The storage space is not limited to entity-based storage space such as a hard disk; it may also be the storage space of a network hard disk connected to the electronic device (cloud storage space).

The training and generation of the above boundary prediction model may both be performed by a related server. The server may be a cloud server or a computer host. A communication connection is established between the server and the electronic device in which the video processing apparatus provided by the embodiments of the present application can be configured, and the communication connection may be at least one of a wireless network connection and a wired network connection. The server may send the trained boundary prediction model to the electronic device so that the electronic device can call it when needed, thereby greatly reducing the computing pressure on the electronic device.

Specifically, after the second acquisition module 620 acquires the feature information of the video, the first generation module 630 may first retrieve the boundary prediction model from its own storage space, and then input the feature information into the boundary prediction model, so that the boundary prediction model performs temporal boundary prediction on the feature information to obtain the temporal boundaries of the video output (generated) by the boundary prediction model.

The second generation module 640 is configured to segment the video according to the temporal boundaries to generate the video segments of the video.

In this embodiment of the present application, there may be multiple temporal boundaries, and a video segment is determined by two boundaries, a start and an end, i.e., the temporal boundaries corresponding to the start time and the end time, respectively. That is, the temporal boundaries may be multiple in number, and may be an even number.

Specifically, after the first generation module 630 obtains the temporal boundaries of the video, the second generation module 640 may segment the video according to the temporal boundaries to generate the video segments of the video.

For example, suppose there are multiple temporal boundaries. After the first generation module 630 obtains the multiple temporal boundaries of the video, the second generation module 640 may first analyze them to determine multiple groups of start-time and end-time boundaries, and then segment the video according to each group's start-time and end-time boundaries to generate multiple video segments of the video.

In this embodiment of the present application, the first acquisition module acquires a video; the second acquisition module performs feature extraction on the video to obtain the feature information of the video; the first generation module then calls a boundary prediction model to perform temporal boundary prediction on the feature information to generate the temporal boundaries of the video; finally, the second generation module segments the video according to the temporal boundaries to generate the video segments of the video. In this way, the accuracy of temporal boundary prediction can be improved, thereby increasing the recall rate of temporal proposals in unsegmented videos.

In one embodiment of the present application, the second acquisition module 620 is specifically configured to: acquire a feature extraction model; input the video into the feature extraction model; and perform feature extraction on the video through the feature extraction model to obtain the feature information of the video.

In one embodiment of the present application, the boundary prediction model may be a boundary prediction model based on the Transformer mechanism. As shown in FIG. 6, the first generation module 630 may include: a first generation unit 631, a second generation unit 632 and a third generation unit 633.

The first generation unit 631 is configured to generate a plurality of feature vectors according to the feature information.

The second generation unit 632 is configured to generate the global information of the video according to the plurality of feature vectors.

The third generation unit 633 is configured to generate the temporal boundaries of the video according to the global information.

In one embodiment of the present application, the plurality of feature vectors may include a first feature vector, a second feature vector and a third feature vector, and the second generation unit 632 is specifically configured to: acquire the dimension of the first feature vector; and generate the global information of the video according to the first feature vector, the second feature vector, the third feature vector and the dimension.

In one embodiment of the present application, as shown in FIG. 6, the video processing apparatus 600 may further include a training module 650, where the training module 650 is configured to generate the boundary prediction model by: acquiring a sample video, and acquiring labels of a plurality of sample video segments in the sample video; performing feature extraction on the sample video to obtain sample feature information of the sample video; inputting the sample feature information into the boundary prediction model to generate predicted boundary scores; and generating a loss value according to the predicted boundary scores and the labels, and training the boundary prediction model according to the loss value.

It should be noted that the foregoing explanations of the video processing method embodiments also apply to the video processing apparatus of this embodiment, and are not repeated here.

With the video processing apparatus of the embodiments of the present application, the first acquisition module acquires a video; the second acquisition module performs feature extraction on the video to obtain the feature information of the video; the first generation module then calls a boundary prediction model to perform temporal boundary prediction on the feature information to generate the temporal boundaries of the video; finally, the second generation module segments the video according to the temporal boundaries to generate the video segments of the video. In this way, the accuracy of temporal boundary prediction can be improved, thereby increasing the recall rate of temporal proposals in unsegmented videos.

In the technical solution of the present application, the acquisition, storage and use of the user's personal information involved all comply with the provisions of relevant laws and regulations, and do not violate public order and good customs.

According to the embodiments of the present application, the present application further provides an electronic device, a readable storage medium and a computer program product.

FIG. 7 shows a schematic block diagram of an example electronic device 700 that can be used to implement the embodiments of the present application. Electronic devices are intended to represent various forms of digital computers, such as laptop computers, desktop computers, workstations, personal digital assistants, servers, blade servers, mainframe computers and other suitable computers. Electronic devices may also represent various forms of mobile devices, such as personal digital assistants, cellular phones, smart phones, wearable devices and other similar computing devices. The components shown herein, their connections and relationships, and their functions are merely examples, and are not intended to limit the implementations of the application described and/or claimed herein.

As shown in FIG. 7, the device 700 includes a computing unit 701, which can perform various appropriate actions and processing according to a computer program stored in a read-only memory (ROM) 702 or a computer program loaded from a storage unit 708 into a random access memory (RAM) 703. The RAM 703 can also store various programs and data required for the operation of the device 700. The computing unit 701, the ROM 702 and the RAM 703 are connected to one another through a bus 704. An input/output (I/O) interface 705 is also connected to the bus 704.

Multiple components in the device 700 are connected to the I/O interface 705, including: an input unit 706, such as a keyboard or a mouse; an output unit 707, such as various types of displays and speakers; a storage unit 708, such as a magnetic disk or an optical disc; and a communication unit 709, such as a network card, a modem or a wireless communication transceiver. The communication unit 709 allows the device 700 to exchange information/data with other devices through a computer network such as the Internet and/or various telecommunication networks.

The computing unit 701 may be any of various general-purpose and/or special-purpose processing components with processing and computing capabilities. Some examples of the computing unit 701 include, but are not limited to, a central processing unit (CPU), a graphics processing unit (GPU), various dedicated artificial intelligence (AI) computing chips, various computing units that run machine learning model algorithms, a digital signal processor (DSP), and any appropriate processor, controller, microcontroller and so on. The computing unit 701 performs the various methods and processes described above, such as the video processing method. For example, in some embodiments, the video processing method may be implemented as a computer software program tangibly embodied in a machine-readable medium, such as the storage unit 708. In some embodiments, part or all of the computer program may be loaded and/or installed on the device 700 via the ROM 702 and/or the communication unit 709. When the computer program is loaded into the RAM 703 and executed by the computing unit 701, one or more steps of the video processing method described above may be performed. Alternatively, in other embodiments, the computing unit 701 may be configured to perform the video processing method by any other appropriate means (for example, by means of firmware).

本文中以上描述的系统和技术的各种实施方式可以在数字电子电路系统、集成电路系统、场可编程门阵列(FPGA)、专用集成电路(ASIC)、专用标准产品(ASSP)、芯片上系统的系统(SOC)、负载可编程逻辑设备(CPLD)、计算机硬件、固件、软件、和/或它们的组合中实现。这些各种实施方式可以包括:实施在一个或者多个计算机程序中,该一个或者多个计算机程序可在包括至少一个可编程处理器的可编程系统上执行和/或解释,该可编程处理器可以是专用或者通用可编程处理器,可以从存储系统、至少一个输入装置、和至少一个输出装置接收数据和指令,并且将数据和指令传输至该存储系统、该至少一个输入装置、和该至少一个输出装置。Various implementations of the systems and techniques described herein above may be implemented in digital electronic circuitry, integrated circuit systems, field programmable gate arrays (FPGAs), application specific integrated circuits (ASICs), application specific standard products (ASSPs), systems on chips system (SOC), load programmable logic device (CPLD), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include being implemented in one or more computer programs executable and/or interpretable on a programmable system including at least one programmable processor that The processor, which may be a special purpose or general-purpose programmable processor, may receive data and instructions from a storage system, at least one input device, and at least one output device, and transmit data and instructions to the storage system, the at least one input device, and the at least one output device an output device.

用于实施本申请的方法的程序代码可以采用一个或多个编程语言的任何组合来编写。这些程序代码可以提供给通用计算机、专用计算机或其他可编程数据处理装置的处理器或控制器,使得程序代码当由处理器或控制器执行时使流程图和/或框图中所规定的功能/操作被实施。程序代码可以完全在机器上执行、部分地在机器上执行,作为独立软件包部分地在机器上执行且部分地在远程机器上执行或完全在远程机器或服务器上执行。Program code for implementing the methods of the present application may be written in any combination of one or more programming languages. These program codes may be provided to a processor or controller of a general purpose computer, special purpose computer or other programmable data processing apparatus, such that the program code, when executed by the processor or controller, performs the functions/functions specified in the flowcharts and/or block diagrams. Action is implemented. The program code may execute entirely on the machine, partly on the machine, partly on the machine and partly on a remote machine as a stand-alone software package or entirely on the remote machine or server.

在本申请的上下文中,机器可读介质可以是有形的介质,其可以包含或存储以供指令执行系统、装置或设备使用或与指令执行系统、装置或设备结合地使用的程序。机器可读介质可以是机器可读信号介质或机器可读储存介质。机器可读介质可以包括但不限于电子的、磁性的、光学的、电磁的、红外的、或半导体系统、装置或设备,或者上述内容的任何合适组合。机器可读存储介质的更具体示例会包括基于一个或多个线的电气连接、便携式计算机盘、硬盘、随机存取存储器(RAM)、只读存储器(ROM)、可擦除可编程只读存储器(EPROM或快闪存储器)、光纤、便捷式紧凑盘只读存储器(CD-ROM)、光学储存设备、磁储存设备、或上述内容的任何合适组合。In the context of this application, a machine-readable medium may be a tangible medium that may contain or store the program for use by or in connection with the instruction execution system, apparatus or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. Machine-readable media may include, but are not limited to, electronic, magnetic, optical, electromagnetic, infrared, or semiconductor systems, devices, or devices, or any suitable combination of the foregoing. More specific examples of machine-readable storage media would include one or more wire-based electrical connections, portable computer disks, hard disks, random access memory (RAM), read only memory (ROM), erasable programmable read only memory (EPROM or flash memory), fiber optics, compact disk read only memory (CD-ROM), optical storage, magnetic storage, or any suitable combination of the foregoing.

为了提供与用户的交互,可以在计算机上实施此处描述的系统和技术,该计算机具有:用于向用户显示信息的显示装置(例如,CRT(阴极射线管)或者LCD(液晶显示器)监视器);以及键盘和指向装置(例如,鼠标或者轨迹球),用户可以通过该键盘和该指向装置来将输入提供给计算机。其它种类的装置还可以用于提供与用户的交互;例如,提供给用户的反馈可以是任何形式的传感反馈(例如,视觉反馈、听觉反馈、或者触觉反馈);并且可以用任何形式(包括声输入、语音输入或者、触觉输入)来接收来自用户的输入。To provide interaction with a user, the systems and techniques described herein may be implemented on a computer having a display device (eg, a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for displaying information to the user ); and a keyboard and pointing device (eg, a mouse or trackball) through which a user can provide input to the computer. Other kinds of devices can also be used to provide interaction with the user; for example, the feedback provided to the user can be any form of sensory feedback (eg, visual feedback, auditory feedback, or tactile feedback); and can be in any form (including acoustic input, voice input, or tactile input) to receive input from the user.

可以将此处描述的系统和技术实施在包括后台部件的计算系统(例如,作为数据服务器)、或者包括中间件部件的计算系统(例如,应用服务器)、或者包括前端部件的计算系统(例如,具有图形用户界面或者网络浏览器的用户计算机,用户可以通过该图形用户界面或者该网络浏览器来与此处描述的系统和技术的实施方式交互)、或者包括这种后台部件、中间件部件、或者前端部件的任何组合的计算系统中。可以通过任何形式或者介质的数字数据通信(例如,通信网络)来将系统的部件相互连接。通信网络的示例包括:局域网(LAN)、广域网(WAN)、互联网和区块链网络。The systems and techniques described herein may be implemented on a computing system that includes back-end components (eg, as a data server), or a computing system that includes middleware components (eg, an application server), or a computing system that includes front-end components (eg, a user's computer having a graphical user interface or web browser through which a user may interact with implementations of the systems and techniques described herein), or including such backend components, middleware components, Or any combination of front-end components in a computing system. The components of the system may be interconnected by any form or medium of digital data communication (eg, a communication network). Examples of communication networks include: Local Area Networks (LANs), Wide Area Networks (WANs), the Internet, and blockchain networks.

计算机系统可以包括客户端和服务器。客户端和服务器一般远离彼此并且通常通过通信网络进行交互。通过在相应的计算机上运行并且彼此具有客户端-服务器关系的计算机程序来产生客户端和服务器的关系。服务器可以是云服务器,也可以为分布式系统的服务器,或者是结合了区块链的服务器。A computer system can include clients and servers. Clients and servers are generally remote from each other and usually interact through a communication network. The relationship of client and server arises by computer programs running on the respective computers and having a client-server relationship to each other. The server can be a cloud server, a distributed system server, or a server combined with blockchain.

It should be understood that steps may be reordered, added, or deleted using the various forms of flow shown above. For example, the steps described in this application may be performed in parallel, sequentially, or in a different order, so long as the desired results of the technical solutions disclosed in this application can be achieved; no limitation is imposed herein.

The specific embodiments described above do not limit the protection scope of this application. It should be understood by those skilled in the art that various modifications, combinations, sub-combinations, and substitutions may be made depending on design requirements and other factors. Any modification, equivalent replacement, or improvement made within the spirit and principles of this application shall fall within the protection scope of this application.

Claims (13)

1. A video processing method, comprising:
acquiring a video;
performing feature extraction on the video to acquire feature information of the video;
invoking a boundary prediction model to perform temporal boundary prediction on the feature information, so as to generate a temporal boundary of the video; and
segmenting the video according to the temporal boundary to generate video segments of the video.

2. The method according to claim 1, wherein performing feature extraction on the video to acquire the feature information of the video comprises:
acquiring a feature extraction model;
inputting the video into the feature extraction model; and
performing feature extraction on the video through the feature extraction model to acquire the feature information of the video.

3. The method according to claim 1, wherein the boundary prediction model is a boundary prediction model based on the Transformer mechanism, and the boundary prediction model performs temporal boundary prediction on the feature information through the following steps to generate the temporal boundary of the video:
generating a plurality of feature vectors according to the feature information;
generating global information of the video according to the plurality of feature vectors; and
generating the temporal boundary of the video according to the global information.

4. The method according to claim 3, wherein the plurality of feature vectors comprise a first feature vector, a second feature vector, and a third feature vector, and generating the global information of the video according to the plurality of feature vectors comprises:
acquiring a dimension of the first feature vector; and
generating the global information of the video according to the first feature vector, the second feature vector, the third feature vector, and the dimension.

5. The method according to claim 1, wherein the boundary prediction model is generated by:
acquiring a sample video, and acquiring labels of a plurality of sample video segments in the sample video;
performing feature extraction on the sample video to acquire sample feature information of the sample video;
inputting the sample feature information into the boundary prediction model to generate a predicted boundary score; and
generating a loss value according to the predicted boundary score and the labels, and training the boundary prediction model according to the loss value.

6. A video processing apparatus, comprising:
a first acquisition module configured to acquire a video;
a second acquisition module configured to perform feature extraction on the video to acquire feature information of the video;
a first generation module configured to invoke a boundary prediction model to perform temporal boundary prediction on the feature information, so as to generate a temporal boundary of the video; and
a second generation module configured to segment the video according to the temporal boundary to generate video segments of the video.

7. The apparatus according to claim 6, wherein the second acquisition module is specifically configured to:
acquire a feature extraction model;
input the video into the feature extraction model; and
perform feature extraction on the video through the feature extraction model to acquire the feature information of the video.

8. The apparatus according to claim 6, wherein the boundary prediction model is a boundary prediction model based on the Transformer mechanism, and the first generation module comprises:
a first generation unit configured to generate a plurality of feature vectors according to the feature information;
a second generation unit configured to generate global information of the video according to the plurality of feature vectors; and
a third generation unit configured to generate the temporal boundary of the video according to the global information.

9. The apparatus according to claim 8, wherein the plurality of feature vectors comprise a first feature vector, a second feature vector, and a third feature vector, and the second generation unit is specifically configured to:
acquire a dimension of the first feature vector; and
generate the global information of the video according to the first feature vector, the second feature vector, the third feature vector, and the dimension.

10. The apparatus according to claim 6, further comprising a training module configured to generate the boundary prediction model by:
acquiring a sample video, and acquiring labels of a plurality of sample video segments in the sample video;
performing feature extraction on the sample video to acquire sample feature information of the sample video;
inputting the sample feature information into the boundary prediction model to generate a predicted boundary score; and
generating a loss value according to the predicted boundary score and the labels, and training the boundary prediction model according to the loss value.

11. An electronic device, comprising:
at least one processor; and
a memory communicatively connected to the at least one processor; wherein
the memory stores instructions executable by the at least one processor, and the instructions are executed by the at least one processor to enable the at least one processor to perform the video processing method according to any one of claims 1-5.

12. A non-transitory computer-readable storage medium storing computer instructions, wherein the computer instructions are configured to cause a computer to perform the video processing method according to any one of claims 1-5.

13. A computer program product comprising a computer program which, when executed by a processor, implements the video processing method according to any one of claims 1-5.
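The claims above name the steps of the method but publish no reference code, so the following is only a minimal sketch of how claims 3-4 could be realized: the first, second, and third feature vectors are read as the query, key, and value projections of scaled dot-product attention, with the attention logits divided by the square root of the first vector's dimension, and the resulting global information is mapped to per-snippet boundary scores. The class name BoundaryPredictor, the hidden dimension, the sigmoid scoring head, and the thresholding rule for segmentation are all illustrative assumptions, not details taken from the patent.

```python
import math
import torch
from torch import nn


class BoundaryPredictor(nn.Module):
    """Transformer-style temporal boundary head over per-snippet video features (sketch)."""

    def __init__(self, feat_dim: int, hidden_dim: int = 256):
        super().__init__()
        # The three projections play the roles of the first, second, and
        # third feature vectors of claims 3-4 (queries, keys, values).
        self.to_q = nn.Linear(feat_dim, hidden_dim)
        self.to_k = nn.Linear(feat_dim, hidden_dim)
        self.to_v = nn.Linear(feat_dim, hidden_dim)
        self.score_head = nn.Linear(hidden_dim, 1)  # per-snippet boundary score

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (T, D) sequence of snippet features -> (T,) boundary scores.
        q, k, v = self.to_q(feats), self.to_k(feats), self.to_v(feats)
        d = q.size(-1)  # dimension of the "first feature vector" (claim 4)
        attn = torch.softmax(q @ k.T / math.sqrt(d), dim=-1)  # (T, T) attention weights
        global_info = attn @ v  # each snippet aggregates information from the whole video
        return torch.sigmoid(self.score_head(global_info)).squeeze(-1)


def segment_by_boundaries(scores: torch.Tensor, threshold: float = 0.5):
    """Cut the snippet axis wherever the predicted boundary score crosses the threshold."""
    cuts = (scores > threshold).nonzero(as_tuple=True)[0].tolist()
    edges = [0] + cuts + [scores.numel()]
    return [(s, e) for s, e in zip(edges, edges[1:]) if e > s]
```

A training step along the lines of claims 5 and 10 would then compare the predicted boundary scores against per-snippet labels; binary cross-entropy is our choice here, since the patent only requires that "a loss value" be generated from the scores and labels. The feature dimension 2048 and the label positions are placeholders.

```python
# One hedged training step (loss choice and all numbers are assumptions).
model = BoundaryPredictor(feat_dim=2048)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

sample_feats = torch.randn(120, 2048)  # stand-in for extracted sample feature information
labels = torch.zeros(120)              # 1.0 where a labeled sample segment starts or ends
labels[[30, 74]] = 1.0

scores = model(sample_feats)           # predicted boundary scores
loss = nn.functional.binary_cross_entropy(scores, labels)  # loss value from scores and labels
loss.backward()
optimizer.step()
```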
CN202110721821.4A 2021-06-28 2021-06-28 Video processing method and device, electronic equipment and storage medium Pending CN113591570A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110721821.4A CN113591570A (en) 2021-06-28 2021-06-28 Video processing method and device, electronic equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110721821.4A CN113591570A (en) 2021-06-28 2021-06-28 Video processing method and device, electronic equipment and storage medium

Publications (1)

Publication Number Publication Date
CN113591570A true CN113591570A (en) 2021-11-02

Family

ID=78244840

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110721821.4A Pending CN113591570A (en) 2021-06-28 2021-06-28 Video processing method and device, electronic equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113591570A (en)

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110263215A (en) * 2019-05-09 2019-09-20 众安信息技术服务有限公司 A video emotion localization method and system
CN110188733A (en) * 2019-06-10 2019-08-30 电子科技大学 Time-series behavior detection method and system based on 3D regional convolutional neural network
CN110852256A (en) * 2019-11-08 2020-02-28 腾讯科技(深圳)有限公司 Method, device and equipment for generating time sequence action nomination and storage medium
CN112804558A (en) * 2021-04-14 2021-05-14 腾讯科技(深圳)有限公司 Video splitting method, device and equipment

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114390365A (en) * 2022-01-04 2022-04-22 京东科技信息技术有限公司 Method and apparatus for generating video information
CN114390365B (en) * 2022-01-04 2024-04-26 京东科技信息技术有限公司 Method and apparatus for generating video information

Similar Documents

Publication Publication Date Title
CN113836333B (en) Training method of image-text matching model, method and device for realizing image-text retrieval
CN113807440B (en) Method, apparatus, and medium for processing multimodal data using neural networks
TWI737006B (en) Cross-modal information retrieval method, device and storage medium
WO2023020005A1 (en) Neural network model training method, image retrieval method, device, and medium
EP3872652B1 (en) Method and apparatus for processing video, electronic device, medium and product
CN113971751A (en) Training feature extraction model, and method and device for detecting similar images
CN112541122A (en) Recommendation model training method and device, electronic equipment and storage medium
CN112784778B (en) Method, apparatus, device and medium for generating model and identifying age and sex
CN112749758B (en) Image processing method, neural network training method, device, equipment and medium
CN114037003B (en) Question answering model training method, device and electronic equipment
CN113947188A (en) Target detection network training method and vehicle detection method
JP2023017910A (en) Semantic representation model pre-training method, device, and electronic apparatus
CN112580666B (en) Image feature extraction method, training method, device, electronic device and medium
US12056184B2 (en) Method and apparatus for generating description information of an image, electronic device, and computer readable storage medium
CN114972910B (en) Training method and device for image-text recognition model, electronic equipment and storage medium
CN114564593A (en) Completion method and device of multi-mode knowledge graph and electronic equipment
CN115114448B (en) Intelligent multi-mode fusion power consumption inspection method, device, system, equipment and medium
EP4123592A2 (en) Human-object interaction detection method, neural network and training method therefor, device, and medium
CN114494784A (en) Training methods, image processing methods and object recognition methods of deep learning models
CN113177449A (en) Face recognition method and device, computer equipment and storage medium
US20230013796A1 (en) Method and apparatus for acquiring pre-trained model, electronic device and storage medium
CN112949433B (en) Method, device and equipment for generating video classification model and storage medium
CN114547252A (en) Text recognition method and device, electronic equipment and medium
CN114078274A (en) Face image detection method, device, electronic device and storage medium
CN112801078A (en) Point of interest (POI) matching method and device, electronic equipment and storage medium

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20211102